2307.00151
The Complexity of Satisfiability Checking for Symbolic Finite Automata
We study the satisfiability problem of symbolic finite automata and decompose it into the satisfiability problem of the theory of the input characters and the monadic second-order theory of the indices of accepted words. We use our decomposition to obtain tight computational complexity bounds on the decision problem for this automata class and an extension that considers linear arithmetic constraints on the underlying effective Boolean algebra.
Rodrigo Raya
2023-06-30T22:01:40Z
http://arxiv.org/abs/2307.00151v1
# The Complexity of Satisfiability Checking for Symbolic Finite Automata

###### Abstract

We study the satisfiability problem of symbolic finite automata and decompose it into the satisfiability problem of the theory of the input characters and the monadic second-order theory of the indices of accepted words. We use our decomposition to obtain tight computational complexity bounds on the decision problem for this automata class and an extension that considers linear arithmetic constraints on the underlying effective Boolean algebra.

## 1 Introduction

Symbolic finite automata (SFAs) are an extension of finite automata that allow transitions to be labelled with monadic predicates over some universe rather than symbols from a finite alphabet. They were first mentioned in [29], but they attracted renewed interest starting in [28]. SFAs have been used in a variety of applications including the analysis of regular expressions [4, 28], string encoders, sanitizers [9, 14, 16], functional programs [7], code generation, parallelization [21] and symbolic matching [22]. A series of theoretical investigations has been carried out on this automata model, including [2, 4, 25].

In particular, the authors of [27] observed that such an automata model had been studied previously by Bès in [3]. In his paper, Bès introduced a class of multi-tape synchronous finite automata whose transitions are labelled by first-order formulas. He then proved various properties of the languages accepted by such automata, including closure under Boolean, rational, and projection operations, logical characterizations in terms of MSO logic and the Eilenberg-Elgot-Shepherdson formalism, as well as decidability properties. Remarkably, as noted in [27], the paper showed that recognizability for such automata coincides with definability for certain generalized weak powers, first studied by Feferman and Vaught in [11].

The techniques of Feferman and Vaught allow decomposing the decision problem for the first-order theory of a product of structures, \(Th(\prod_{i}\mathcal{M}_{i})\), into the first-order theory of the structures \(\mathcal{M}_{i}\), \(Th(\mathcal{M}_{i})\), and the monadic second-order theory of the index set \(I\), \(Th^{mon}(\langle I,\ldots\rangle)\), where the structure \(\langle I,\ldots\rangle\) may contain further relations such as a finiteness predicate, a cardinality operator, etc. If the theory of the components \(Th(\mathcal{M}_{i})\) is decidable, then the decision problem reduces to that of the theory \(Th^{mon}(\langle I,\ldots\rangle)\). To analyse these structures, Feferman and Vaught extend results that go back to Skolem [24]. Technically, the decomposition is expressed in terms of so-called reduction sequences. It is known [8] that many model-theoretic constructions incur non-elementary blow-ups in formula size. This includes the size of the Feferman-Vaught reduction sequences in the case of disjoint unions. Perhaps for this reason, no computational complexity results have been obtained for the theory of symbolic automata and related models. Instead, the results in the literature [5, 6, 13] refer to the decidability of the satisfiability problem of the monadic predicates or provide asymptotic run-times rather than a refined computational complexity classification.
As a **main contribution**, we show how to reduce the satisfiability problem for symbolic finite automata to the satisfiability problem of the existential first-order theory of the elements and the existential monadic second-order theory of the indices. This decomposition allows us to derive tight complexity bounds for the decision problem of the automaton in the precise sense of Corollary 1. We then study an extension of the formalism of symbolic finite automata which also imposes linear arithmetic constraints on the cardinalities of the Venn regions of the underlying effective Boolean algebra. In particular, this extension allows expressing the number of occurrences of a particular kind of letter in a word. We show in Corollary 2 that the computational complexity of the corresponding satisfiability problem is the same as the one for the simpler model without cardinalities. Similar extensions for related models of automata are considered in the literature [12].

**Organisation of the paper.** Section 2 introduces symbolic finite automata. Section 3 gives the Feferman-Vaught decomposition of symbolic finite automata in terms of the theory of the elements and the theory of the indices. Section 4 describes the decision procedure with which, in Section 5, after presenting the quantifier-free theory of Boolean algebra with Presburger arithmetic, we obtain the tight complexity bounds announced. Section 6 describes the extension of symbolic finite automata that uses linear arithmetic constraints over the cardinalities of the automaton's underlying effective Boolean algebra and proves the corresponding upper bounds for the associated satisfiability problem. Section 7 concludes the paper.

## 2 Symbolic Finite Automata (SFA)

Symbolic automata are run over Boolean algebras of interpreted sets. The family of monadic predicates used for these interpretations needs to be closed under Boolean operations and contain formulae denoting the empty set and the universe. Furthermore, in the original formulation, checking non-emptiness of these interpreted sets needs to be decidable. In Section 4, we will refine this assumption with a complexity-theoretic bound.

**Definition 1** ([6]).: An effective Boolean algebra \(\mathcal{A}\) is a tuple \[(\mathfrak{D},\Psi,[\![\cdot]\!],\bot,\top,\vee,\wedge,\neg)\] where \(\mathfrak{D}\) is a set of domain elements, \(\Psi\) is a set of unary predicates over \(\mathfrak{D}\) that are closed under the Boolean connectives, with \(\bot,\top\in\Psi\), and \([\![\cdot]\!]:\Psi\to 2^{\mathfrak{D}}\) is a function such that

1. \([\![\bot]\!]=\emptyset\),
2. \([\![\top]\!]=\mathfrak{D}\),
3. for all \(\psi,\psi_{1},\psi_{2}\in\Psi\), we have that (a) \([\![\psi_{1}\vee\psi_{2}]\!]=[\![\psi_{1}]\!]\cup[\![\psi_{2}]\!]\), (b) \([\![\psi_{1}\wedge\psi_{2}]\!]=[\![\psi_{1}]\!]\cap[\![\psi_{2}]\!]\), (c) \([\![\neg\psi]\!]=\mathfrak{D}\backslash[\![\psi]\!]\), and
4. checking \([\![\psi]\!]\neq\emptyset\) is decidable.

A predicate \(\psi\in\Psi\) is atomic if it is not a Boolean combination of predicates in \(\Psi\).

Our initial motivation was to generalise the complexity results obtained for array theories in [1, 20]. The notion of SMT algebra [6, Example 2.3] precisely corresponds to the language introduced in [20, Definition 5] without cardinality constraints. We take this as a first example of effective Boolean algebra.
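Before turning to that example, Definition 1 can be read programmatically. The sketch below is a minimal illustration of ours (not code from the paper), assuming the Z3 Python bindings: predicates are quantifier-free formulas over one fixed integer variable, the Boolean operations are the solver's connectives, and non-emptiness is a satisfiability check.

```python
from z3 import Int, And, Or, Not, BoolVal, Solver, sat

class SMTAlgebra:
    """A sketch of an effective Boolean algebra over the integers (Definition 1):
    predicates are quantifier-free formulas over the fixed variable x."""
    def __init__(self):
        self.x = Int("x")          # the single free variable of every predicate
        self.bot = BoolVal(False)  # denotes the empty set
        self.top = BoolVal(True)   # denotes the whole domain

    # closure under the Boolean connectives
    def join(self, p, q): return Or(p, q)
    def meet(self, p, q): return And(p, q)
    def comp(self, p):    return Not(p)

    def nonempty(self, p):
        """Decidable non-emptiness check: [[p]] != {} iff p is satisfiable."""
        s = Solver()
        s.add(p)
        return s.check() == sat

alg = SMTAlgebra()
psi_pos = alg.x > 0        # [[psi_pos]] = positive integers
psi_odd = alg.x % 2 == 1   # [[psi_odd]] = odd integers
print(alg.nonempty(alg.meet(psi_pos, psi_odd)))            # True
print(alg.nonempty(alg.meet(psi_pos, alg.comp(alg.top))))  # False
```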
**Example 1**.: The SMT algebra for a type \(\tau\) is the tuple \((\mathcal{D},\Psi,[\![\cdot]\!],\bot,\top,\vee,\wedge,\neg)\) where \(\mathcal{D}\) is the domain of \(\tau\), \(\Psi\) is the set of all quantifier-free formulas with one fixed free variable of type \(\tau\), \([\![\cdot]\!]\) maps each monadic predicate to the set of its satisfying assignments, \(\bot\) denotes the empty set, \(\top\) denotes the universe \(\mathcal{D}\) and \(\vee,\wedge,\neg\) denote the Boolean algebra operations of union, intersection, and complement respectively. This example should be contrasted with other representations of the predicates that take into account implementation details. An example of the latter is the \(k\)-bit bitvector effective Boolean algebra described in [26]. **Example 2**.: The powerset algebra \(2^{bv(k)}\) is the tuple \((D,\Psi,[\![\cdot]\!],\bot,\top,\vee,\wedge,\neg)\) where \(\mathcal{D}\) is the set \(bv(k)\) of all non-negative integers less than \(2^{k}\) or equivalently, all \(k\)-bit bit-vectors for some \(k>0\), \(\Psi\) is the set of BDDs of depth \(k\), \([\![\cdot]\!]\) maps each BDD \(\beta\) to the set of all integers \(n\) such that the binary representation of \(n\) is a solution of \(\beta\), \(\bot\) denotes the BDD representing the empty set, \(\top\) denotes the BDD representing the universal set and \(\vee,\wedge,\neg\) denote the Boolean algebra operation of union, intersection, and complement as they are implemented for BDDs. We now introduce the automata model we will investigate in the paper. **Definition 2** ([6]).: A symbolic finite automaton (s-FA) is a tuple \[M=(\mathcal{A},Q,q_{0},F,\Delta)\] where \(1\). \(\mathcal{A}\) is an effective Boolean algebra. \(2\). \(Q\) is a finite set of states. \(3\). \(q_{0}\in Q\) is the initial state. \(4\). \(F\subseteq Q\) is the set of final states. \(5\). \(\Delta\subseteq Q\times\Psi_{\mathcal{A}}\times Q\) is a finite set of transitions. A symbolic transition \(\rho=(q_{1},\psi,q_{2})\in\Delta\), also denoted \(q_{1}\stackrel{{\psi}}{{\to}}q_{2}\), has source state \(q_{1}\), target state \(q_{2}\), and guard \(\psi\). For \(d\in\mathfrak{D}\), the concrete transition \(q_{1}\stackrel{{ d}}{{\to}}q_{2}\) denotes that \(q_{1}\stackrel{{\psi}}{{\to}}q_{2}\) and \(d\in\llbracket\psi\rrbracket\) for some \(\psi\). A string \(w=d_{1}d_{2}\dots d_{k}\) is accepted at state \(q\) if and only if for \(1\leq i\leq k\), there exist transitions \(q_{i-1}\stackrel{{ d_{i}}}{{\to}}q_{i}\) such that \(q_{0}=q\) and \(q_{k}\in F\). The set of strings accepted at \(q\) is denoted by \(\mathcal{L}_{q}(M)\) and the language of \(M\) is \(\mathcal{L}(M)=\mathcal{L}_{q_{0}}(M)\). We now give examples of automata running over the algebras of Examples 1 and 2. We use the traditional graphical representation used in automata theory textbooks [15]. **Example 3** ([6]).: We consider the language of linear arithmetic over the integers. We set two formulae \(\psi_{>0}(x)\equiv x>0\) satisfied by all positive integers and \(\psi_{\mathrm{odd}}(x)\equiv x\mod 2=1\) satisfied by all odd integers. The following symbolic finite automaton accepts all strings of even length consisting only of positive odd numbers. **Example 4** ([26]).: We consider the language of BDDs over bit-vectors of length six. The following symbolic finite automaton accepts all strings that start by a bit-vector representing either of the numbers \(6,14,22,38\) or \(54\) followed by an arbitrary number of bit-vectors. 
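The acceptance condition in Definition 2 amounts to a nondeterministic search over concrete transitions. The following sketch is our own illustration (plain Python callables stand in for the guards, and the two-state automaton encodes our reading of Example 3, whose figure is not reproduced here):

```python
from typing import Callable, Iterable

Guard = Callable[[int], bool]        # a monadic predicate, read as [[psi]]
Transition = tuple[str, Guard, str]  # (source state, guard, target state)

def accepts(delta: Iterable[Transition], q0: str, final: set[str], word: list[int]) -> bool:
    """Nondeterministic membership test: follow every concrete transition
    q1 --d--> q2 with d in [[guard]]; accept if some run ends in a final state."""
    states = {q0}
    for d in word:
        states = {q2 for (q1, guard, q2) in delta if q1 in states and guard(d)}
        if not states:
            return False
    return bool(states & final)

# Our reading of Example 3: strings of even length of positive odd integers.
pos_odd: Guard = lambda d: d > 0 and d % 2 == 1
delta = [("p", pos_odd, "q"), ("q", pos_odd, "p")]
print(accepts(delta, "p", {"p"}, [3, 7, 1, 9]))  # True  (even length, all positive odd)
print(accepts(delta, "p", {"p"}, [3, 7, 1]))     # False (odd length)
print(accepts(delta, "p", {"p"}, [3, -2]))       # False (-2 violates the guard)
```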
## 3 Feferman-Vaught Decomposition for SFAs

Let \(M=(\mathcal{A},Q,q_{0},F,\Delta)\) be a symbolic finite automaton and let \(\psi_{1},\dots,\psi_{k},\dots\) be the atomic predicates in \(\mathcal{A}\). The definition of symbolic finite automaton allows assuming that the set of these predicates is finite.

**Lemma 1**.: There exists a symbolic finite automaton \(M^{\prime}=(\mathcal{A}^{\prime},Q,q_{0},F,\Delta)\) such that \(\mathcal{L}(M)=\mathcal{L}(M^{\prime})\) and the cardinality of \(\Psi_{\mathcal{A}^{\prime}}\) is finite.

Proof.: The automaton has a finite number of transitions. We take \(\Psi_{\mathcal{A}^{\prime}}\) to be the Boolean closure of the predicates occurring in these transitions. It follows that \(\Psi_{\mathcal{A}^{\prime}}\) is a finite set. For the remaining components, we define \(\mathcal{A}^{\prime}\) as in \(\mathcal{A}\). Since the automaton is unchanged, \(\mathcal{L}(M)=\mathcal{L}(M^{\prime})\).

Since \(\Psi_{\mathcal{A}}\) can be assumed to be finite, it follows that the set of atomic predicates is finite too. In the remainder of the paper, we let \(\phi_{1},\ldots,\phi_{k}\) be the generators of the effective Boolean algebra used by the symbolic finite automaton \(M\). Similarly, we let \(\psi_{1},\ldots,\psi_{m}\) denote the actual predicates used in the transitions of \(M\). We will decompose the study of \(\mathcal{L}(M)\) into the study of the properties of the elements in \(\mathcal{D}\) and the ordering properties induced by the transition structure of the automaton. Both kinds of properties will refer to sets of indices to stay in sync with each other [30]. To specify the properties of the elements in \(\mathcal{D}\), we use set interpretations of the form \[S=\{\,n\in\mathbb{N}\mid\psi(d(n))\,\}=\llbracket\psi\rrbracket \tag{1}\] where \(d(n)\) is the \(n\)-th element occurring in \(d\in\mathcal{D}^{*}\). These sets can be pictured via a Venn diagram of interpreted sets, such as the one in Figure 1. Each formula in \(\Psi_{\mathcal{A}}\) corresponds to a particular Venn region in this diagram and can be referred to using a Boolean algebra expression on the variables \(S_{1},\ldots,S_{k}\), thanks to the set interpretation (1).

A concrete transition \(q_{1}\stackrel{{ d}}{{\rightarrow}}q_{2}\) requires a value \(d\in\mathcal{D}\). This value will lie in some elementary Venn region of the diagram in Figure 1, i.e. in a set of the form \(S_{1}^{\beta_{1}}\cap\ldots\cap S_{k}^{\beta_{k}}\) where \(\beta=(\beta_{1},\ldots,\beta_{k})\in\{0,1\}^{k}\), \(S^{0}:=S^{c}\) and \(S^{1}:=S\). We will denote such a Venn region by the bit-string \(\beta\). To specify the transition structure of the automaton, what is relevant to us is the Venn region of the diagram, not the specific value taken in it. It follows that a run of the automaton can be encoded as a sequence of bit-strings \(\overline{t}=(t_{1},\ldots,t_{k})\in(\{0,1\}^{|\overline{t}|})^{k}\), and that these bit-strings only need to satisfy the propositional formulae corresponding to the predicates labelling the transitions of the automaton. Figure 2 represents one such run over an uninterpreted Venn diagram.

**Example 5**.: If in Example 3 we take as atomic formulae the predicates \(\psi_{\mathrm{odd}}\) and \(\psi_{>0}\), then the formula \(\psi_{\mathrm{odd}}(x)\wedge\psi_{>0}(x)\), which labels the automaton transitions, corresponds to the propositional formula \(S_{1}\wedge S_{2}\).
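To make the encoding of runs as bit-strings concrete, the following small sketch (ours, purely illustrative) computes the elementary Venn region of each character of a word with respect to the generators of Example 5:

```python
from typing import Callable, Sequence

def venn_region(d: int, generators: Sequence[Callable[[int], bool]]) -> tuple[int, ...]:
    """Bit-string beta with beta_i = 1 iff d lies in [[phi_i]] (i.e. the index of d lands in S_i)."""
    return tuple(1 if phi(d) else 0 for phi in generators)

def table_of(word: Sequence[int], generators) -> list[tuple[int, ...]]:
    """The run encoding: one elementary Venn region per position of the word."""
    return [venn_region(d, generators) for d in word]

phi_odd = lambda d: d % 2 == 1
phi_pos = lambda d: d > 0
print(table_of([3, -4, 7], [phi_odd, phi_pos]))
# [(1, 1), (0, 0), (1, 1)] -- positions 0 and 2 fall in S_1 ∩ S_2, position 1 in S_1^c ∩ S_2^c
```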
We denote by \(L_{1},\ldots,L_{m}\) such propositional formulae and by \(M(L_{1},\ldots,L_{m})\) the set of bit-string runs accepted by \(M\), which we call _tables_ [17].

**Lemma 2**.: \[\mathcal{L}(M)=\Big\{\,d\in\mathcal{D}^{\star}\;\Big|\;\exists\overline{t}\in M(L_{1},\ldots,L_{m}).\;\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid\phi_{i}(d(n))\,\}=\{\,n\in\mathbb{N}\mid t_{i}(n)\,\}\,\Big\}\]

Proof.: The proof uses the definition of \(\mathcal{L}(M)\) and \(M(L_{1},\ldots,L_{m})\). In one direction, one defines \(\overline{t}\) from the membership of the values \(d(i)\) in elementary Venn regions \(\beta_{i}\). In the other direction, the definition of \(M(L_{1},\ldots,L_{m})\) ensures that there is an accepting run corresponding to these values, and any witness of the formula in the associated elementary Venn regions can be taken to form the word \(d\).

In the next sections, we make use of this decomposition to devise a decision procedure for symbolic finite automata, which will refine the existing computational complexity results for the corresponding satisfiability problem.

## 4 Decision Procedure for Satisfiability of SFAs

**Definition 3**.: The satisfiability problem for a symbolic finite automaton \(M\) is the problem of determining whether \(\mathcal{L}(M)\neq\emptyset\).

Figure 1: A Venn diagram representing a finitely generated effective Boolean algebra with atomic predicates \(\psi_{1},\psi_{2}\) and \(\psi_{3}\).

Figure 2: A table accepted by a symbolic automaton represented over an uninterpreted Venn diagram.

By Lemma 2, checking non-emptiness of the language of a symbolic finite automaton reduces to checking whether the following formula is true: \[\begin{split}&\exists S_{1},\dots,S_{k}.\exists d.\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid\phi_{i}(d(n))\,\}\wedge\\ &\exists\overline{t}\in M(L_{1},\dots,L_{m}).\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid t_{i}(n)\,\}\end{split} \tag{2}\]

To establish the complexity of deciding formulae of the form (2), we will have to analyse further the set \(M(L_{1},\dots,L_{m})\). Each table \(\overline{t}\) in \(M(L_{1},\dots,L_{m})\) corresponds to a _symbolic table_ \(\overline{s}\) whose entries are the propositional formulae that the bit-strings of \(\overline{t}\) satisfy. More generally, these symbolic tables are generated by the symbolic automaton obtained by replacing the predicates of the original automaton by the corresponding propositional formulae. The set of symbolic tables accepted by the automaton \(M\) is a regular set and will be denoted by \(M_{S}(L_{1},\dots,L_{m})\).

**Example 6**.: The automaton in Example 3 corresponds, according to Example 5, to the symbolic automaton shown. The symbolic tables generated by this automaton are of the form \(((S_{1}\wedge S_{2})(S_{1}\wedge S_{2}))^{*}\). The corresponding tables would be of the form \(((1,1)(1,1))^{*}\).

Consider first the case where the propositional formulae \(L_{1},\dots,L_{m}\) for the automaton \(M\) denote disjoint Venn regions. In this case, all we need to check for the satisfiability of formula (2) is whether there exists a symbolic table \(\overline{s}\) such that, whenever a propositional letter occurs a non-zero number of times, the corresponding Venn region interpreted according to (1) has a satisfiable defining formula. From this, it follows that our decision procedure will need to compute the so-called Parikh image of the regular language \(M_{S}(L_{1},\dots,L_{m})\).
**Definition 4** (Parikh Image).: The Parikh image of \(M_{S}(L_{1},\dots,L_{m})\) is the set \[\mathsf{Parikh}(M_{S}(L_{1},\dots,L_{m}))=\{(|\overline{s}|_{L_{1}},\dots,|\overline{s}|_{L_{m}})\mid\overline{s}\in M_{S}(L_{1},\dots,L_{m})\}\] where \(|\overline{s}|_{L_{i}}\) denotes the number of occurrences of the propositional formula \(L_{i}\) in the symbolic table \(\overline{s}\).

We will use a description of the Parikh image in terms of linear-size existential Presburger arithmetic formulae first obtained by Seidl, Schwentick, Muscholl and Habermehl.

**Lemma 3** ([23]).: The set \(\mathsf{Parikh}(M_{S}(L_{1},\ldots,L_{m}))\) is definable by an existential Presburger formula \(\rho\) of size \(O(|M|)\), where \(|M|\) is the number of symbols used to describe the automaton \(M\).

When propositional letters denote overlapping Venn regions, a partitioning argument is required. This is formalised in Theorem 1. First, we fix some notation. We set \(p_{\beta}:=\bigcap_{i=1}^{k}S_{i}^{\beta_{i}}\) where \(\beta\in\{0,1\}^{k}\), \(p_{L}:=\bigcup\limits_{\beta\models L}p_{\beta}\) where \(L\) is a propositional formula, and \(\phi^{\beta}(d):=\bigwedge_{i=1}^{k}\phi_{i}^{\beta(i)}(d)\). We write \(S_{1}\dot{\cup}S_{2}\) to denote the set \(S_{1}\cup S_{2}\) where it is known that \(S_{1}\cap S_{2}=\emptyset\). Finally, we write \([n]:=\{\,1,\ldots,n\,\}\).

**Theorem 1**.: Formula (2) is equivalent to the formula \[\begin{split}\exists s\in&[m].\exists\sigma:[s]\hookrightarrow[m].\exists\beta_{1},\ldots,\beta_{s}\in\{0,1\}^{k}.\bigwedge_{j=1}^{s}\exists d.\phi^{\beta_{j}}(d)\wedge\\ \exists k_{1},&\ldots,k_{m}.\exists S_{1},\ldots,S_{k},P_{1},\ldots,P_{s}.\\ &\rho(k_{1},\ldots,k_{m})\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\cup_{i=1}^{m}p_{L_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\wedge\\ &\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\wedge\bigwedge_{i=1}^{s}p_{\beta_{i}}\cap P_{i}\neq\emptyset\end{split} \tag{3}\] where \(\sigma\) is an injection from \(\{\,1,\ldots,s\,\}\) to \(\{\,1,\ldots,m\,\}\) and \(\rho\) is the arithmetic expression in Lemma 3.

Formula (3) has two parts. The first part corresponds to the subterm \(\bigwedge_{j=1}^{s}\exists d.\phi^{\beta_{j}}(d)\) and falls within the theory of the elements in \(\mathcal{D}\), \(Th_{\exists^{*}}(\mathcal{D})\). The second part corresponds to the remaining subterm and falls within the quantifier-free first-order theory of Boolean Algebra with Presburger arithmetic (QFBAPA) [19], which can be viewed as the monadic second-order theory \(Th_{\exists^{*}}^{mon}(\langle\mathbb{N},\subseteq,\sim\rangle)\) where \(\sim\) is the equicardinality relation between two sets.

Formula (3) is distilled from a non-deterministic decision procedure for formulae of the shape (2). The existentially quantified variables \(s,\sigma,\beta_{1},\ldots,\beta_{s}\) are guessed by the procedure. These guessed values are then used by specialised procedures for \(Th_{\exists^{*}}(\mathcal{D})\) and \(Th_{\exists^{*}}^{mon}(\langle\mathbb{N},\subseteq,\sim\rangle)\). For the convenience of the reader, we describe here what these values mean. The value of \(s\) represents the number of Venn regions associated with the formulae \(L_{1},\ldots,L_{m}\) that will be non-empty. \(\sigma\) indexes these non-empty regions. \(\beta_{1},\ldots,\beta_{s}\) are elementary Venn regions contained in the non-empty ones. The reason to introduce the partition variables \(P_{1},\ldots,P_{s}\) is that the Venn regions may overlap.
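Before turning to the overlap issue that motivates the partition variables (Example 7 below), Definition 4 can be illustrated at toy scale. Lemma 3 yields a linear-size existential Presburger description of the Parikh image; the sketch below (our own, brute-force and exponential, for illustration only) simply enumerates Parikh vectors of the symbolic tables of Example 6 up to a length bound:

```python
from collections import Counter
from itertools import product

def parikh_upto(delta, q0, final, letters, max_len):
    """Enumerate Parikh vectors (occurrence counts of each letter) of words of
    length <= max_len accepted by the NFA given as triples (q, letter, q')."""
    vectors = set()
    for length in range(max_len + 1):
        for word in product(letters, repeat=length):
            states = {q0}
            for a in word:
                states = {q2 for (q1, b, q2) in delta if q1 in states and b == a}
            if states & final:
                counts = Counter(word)
                vectors.add(tuple(counts[a] for a in letters))
    return vectors

# Example 6: symbolic tables of the form ((S1 & S2)(S1 & S2))^*, i.e. a single letter L1.
delta = {("p", "L1", "q"), ("q", "L1", "p")}
print(sorted(parikh_upto(delta, "p", {"p"}, ["L1"], 6)))
# [(0,), (2,), (4,), (6,)] -- the Parikh image is {(2n,) : n >= 0}
```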
**Example 7**.: Consider the situation where \(S_{1}\wedge S_{2}\) and \(S_{2}\wedge S_{3}\) are two propositional formulae labelling the transitions of the symbolic automaton. These formulae correspond to the Venn regions \(S_{1}\cap S_{2}\) and \(S_{2}\cap S_{3}\), which share the region \(S_{1}\cap S_{2}\cap S_{3}\). Given a model of \(S_{1},S_{2}\) and \(S_{3}\), how do we guarantee that the indices in the region \(S_{1}\cap S_{2}\cap S_{3}\) are consistent with a run of the automaton? For instance, the automaton may require one element in \(S_{1}\cap S_{2}\) and another in \(S_{2}\cap S_{3}\). Placing a single index in \(S_{1}\cap S_{2}\cap S_{3}\) would satisfy the overall cardinality constraints, but not the fact that we need two elements overall. Trying to specify this in the general case would reduce to specifying an exponential number of cardinalities.

We proceed next to the proof of the theorem.

Proof of Theorem 1.: \(\Rightarrow\)): If formula (2) is satisfiable, then there are sets \(S_{1},\ldots,S_{k}\), a word \(d\) and a table \(\overline{t}\) satisfying \[\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid\phi_{i}(d(n))\,\}\wedge\overline{t}\in M(L_{1},\ldots,L_{m})\wedge\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid t_{i}(n)\,\}\] Let \(\overline{s}\in M_{S}(L_{1},\ldots,L_{m})\) be the symbolic table corresponding to \(\overline{t}\). We define \(k_{i}:=|\overline{s}|_{L_{i}}\), \(s:=|\,\{\,i\mid k_{i}\neq 0\,\}\,|\), \(\sigma\) mapping the indices in \([s]\) to the indices of the terms for which \(k_{i}\) is non-zero, and \(P_{i}=\{\,n\in\mathbb{N}\mid\overline{s}(n)=L_{\sigma(i)}\,\}\). It will be convenient to work out the following equalities: \[p_{L_{i}}=\bigcup_{\beta\models L_{i}}\bigcap_{j=1}^{k}S_{j}^{\beta_{j}}=\bigcup_{\beta\models L_{i}}\left\{\,n\in\mathbb{N}\,\left|\,\bigwedge_{j=1}^{k}t_{j}^{\beta_{j}}(n)\,\right.\right\}=\big\{\,n\in\mathbb{N}\,\big|\,\overline{t}(n)\models L_{i}\,\big\} \tag{4}\] \[p_{L_{i}}=\bigcup_{\beta\models L_{i}}\bigcap_{j=1}^{k}S_{j}^{\beta_{j}}=\bigcup_{\beta\models L_{i}}\left\{\,n\in\mathbb{N}\,\left|\,\bigwedge_{j=1}^{k}\phi_{j}^{\beta_{j}}(d(n))\,\right.\right\}=\big\{\,n\in\mathbb{N}\,\big|\,L_{i}(\overline{\phi}(d(n)))\,\big\}\] where \(L_{i}(\overline{\phi}(d(n)))\) is the propositional formula obtained by substituting the set variables by the formulae \(\phi_{i}(d(n))\). We now deduce formula (3):

* \(\rho(k_{1},\ldots,k_{m})\): from \(\overline{s}\in M_{S}(L_{1},\ldots,L_{m})\), we have that \[(k_{1},\ldots,k_{m})\in\mathsf{Parikh}(M_{S}(L_{1},\ldots,L_{m}))\] and therefore \(\rho(k_{1},\ldots,k_{m})\).
* \(P_{i}\subseteq p_{L_{\sigma(i)}}\): since \(\overline{s}\) corresponds to \(\overline{t}\), for all \(n\in\mathbb{N}\) we have \(\overline{t}(n)\models\overline{s}(n)\), and the inclusion follows from the definition of \(P_{i}\) and equation (4).
* \(|P_{i}|=k_{\sigma(i)}\): since \(|P_{i}|=\big|\,\big\{\,n\in\mathbb{N}\,\big|\,\overline{s}(n)=L_{\sigma(i)}\,\big\}\,\big|=|\overline{s}|_{L_{\sigma(i)}}=k_{\sigma(i)}\).
* Each pair of sets \(P_{i},P_{j}\) with \(i<j\) is disjoint: \[P_{i}\cap P_{j}=\big\{\,n\in\mathbb{N}\,\big|\,\overline{s}(n)=L_{\sigma(i)}\,\big\}\cap\big\{\,n\in\mathbb{N}\,\big|\,\overline{s}(n)=L_{\sigma(j)}\,\big\}=\big\{\,n\in\mathbb{N}\,\big|\,\overline{s}(n)=L_{\sigma(i)}=L_{\sigma(j)}\,\big\}=\emptyset\] using that the letters \(L\) are chosen to be distinct and that \(\sigma\) is an injection (so \(\sigma(i)\neq\sigma(j)\)).
* \(p_{L_{1}}\cup\ldots\cup p_{L_{m}}=P_{1}\dot{\cup}\ldots\dot{\cup}P_{s}\): by definition \(P_{i}=\left\{\,n\in\mathbb{N}\,\big|\,\overline{s}(n)=L_{\sigma(i)}\,\right\}\) and \(p_{L_{i}}=\left\{\,n\in\mathbb{N}\,\big|\,\overline{t}(n)\models L_{i}\,\right\}\), and by definition of \(\sigma\) the only letters that can appear in \(\overline{s}\) are \(L_{\sigma(1)},\ldots,L_{\sigma(s)}\). Thus, we have \(p_{L_{1}}\cup\ldots\cup p_{L_{m}}=[1,|\overline{t}|]=[1,|\overline{s}|]=P_{1}\dot{\cup}\ldots\dot{\cup}P_{s}\).
* There exist \(\beta_{1},\ldots,\beta_{s}\in\{0,1\}^{k}\) such that \(\bigwedge_{i=1}^{s}p_{\beta_{i}}\cap P_{i}\neq\emptyset\): note that \(P_{i}\neq\emptyset\) by definition of \(\sigma\). Thus, there must exist some \(\beta_{i}\) such that \(p_{\beta_{i}}\cap P_{i}\neq\emptyset\). We pick any such \(\beta_{i}\).
* \(\bigwedge_{j=1}^{s}\exists d.\phi^{\beta_{j}}(d)\): follows from \(p_{\beta_{j}}\cap P_{j}\neq\emptyset\) and formula (4).

\(\Leftarrow\)) Conversely, if formula (3) is satisfiable, then there is an integer \(s\in[m]\), an injection \(\sigma:[s]\hookrightarrow[m]\), bit-strings \(\beta_{1},\ldots,\beta_{s}\in\{0,1\}^{k}\), integers \(k_{1},\ldots,k_{m}\) and sets \(S_{1},\ldots,S_{k},P_{1},\ldots,P_{s}\) satisfying \[\begin{split}&\bigwedge_{j=1}^{s}\exists d.\phi^{\beta_{j}}(d)\wedge\rho(k_{1},\ldots,k_{m})\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\cup_{i=1}^{m}p_{L_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\wedge\\ &\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\wedge\bigwedge_{i=1}^{s}p_{\beta_{i}}\cap P_{i}\neq\emptyset\end{split} \tag{5}\] From \(\rho(k_{1},\ldots,k_{m})\) it follows that there is a symbolic table \(\overline{s}\in M_{S}(L_{1},\ldots,L_{m})\) such that \(|\overline{s}|_{L_{i}}=k_{i}\) for each \(L_{i}\in\left\{\,L_{1},\ldots,L_{m}\,\right\}\). From formula (4) and \[p_{L_{1}}\cup\ldots\cup p_{L_{m}}=P_{1}\dot{\cup}\ldots\dot{\cup}P_{s}\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\] it follows that we can replace the formulae \(L_{i}\) occurring in the symbolic table \(\overline{s}\) by the bit-strings representing the elementary Venn regions to which the indices of the sets \(P_{i}\) belong. Moreover, thanks to the condition \(\bigwedge_{i=1}^{s}p_{\beta_{i}}\cap P_{i}\neq\emptyset\), we can replace the letters \(L_{i}\) by the bit-strings \(\beta_{i}\), defining \(\overline{t}\) by \(\overline{t}(n)=\beta_{i}\) if \(n\in P_{i}\). In this way, we obtain a table \(\overline{t}\in M(L_{1},\ldots,L_{m})\). We then define the corresponding word over \(\mathcal{D}\), thanks to the property \(\bigwedge_{i=1}^{s}\exists d.\phi^{\beta_{i}}(d)\). Naming the witnesses of these formulae as \(d_{i}\), we define \(d(n)=d_{i}\) if \(n\in P_{i}\).
To conclude, note that: \[\left\{\,n\in\mathbb{N}\,\,|\,\,t_{j}(n)\,\right\}=\cup_{\left\{\,\,1\leq i \leq k\,\,|\,\,\beta_{i}(j)=1\,\,\right\}}P_{i}=\left\{\,n\in\mathbb{N}\,\,|\, \,\phi_{j}(d(n))\,\right\}\] Thus, we have that formula (2) is satisfied by the set variables \[S_{j}:=\left\{\,n\in\mathbb{N}\,\,|\,\,t_{j}(n)\,\right\}=\left\{\,n\in \mathbb{N}\,\,|\,\,\phi_{j}(d(n))\,\right\}\] ## 5 Quantifier-free Boolean Algebra with Presburger Arithmetic The arguments following the statement of Theorem 1 sketch a non-deterministic procedure for the satisfiability problem of symbolic finite automata, based on the existence of decision procedures for \(Th_{\exists^{*}}(\mathcal{D})\) and \(Th_{\exists^{*}}^{mon}(\langle\mathbb{N},\subseteq,\sim\rangle)\). In this section, we recall the non-deterministic polynomial time decision procedure for \(Th_{\exists^{*}}^{mon}(\langle\mathbb{N},\subseteq,\sim\rangle)\). As a consequence, we obtain Corollary 1 which situates the decision problem of symbolic finite automata in the classical complexity hierarchy. This section should also prepare the reader for the extension of these results, where the automaton can require linear arithmetic constraints on the cardinalities of the effective Boolean algebra. This extension is carried out in Section 6. Instead of working with \(Th_{\exists^{*}}^{mon}(\langle\mathbb{N},\subseteq,\sim\rangle)\) directly, we use the logic QFBAPA [19] which has the same expressive power [18, Section 2]. The syntax of QFBAPA is given in Figure 3. The meaning of the syntax is as follows. \(F\) presents the Boolean structure of the formula, \(A\) stands for the top-level constraints, \(B\) gives the Boolean restrictions and \(T\) the Presburger arithmetic terms. The operator dvd stands for the divisibility relation and \(\mathcal{U}\) represents the universal set. The remaining interpretations are standard. The satisfiability problem of this logic is reducible to propositional satisfiability in polynomial time. Our proofs will rely on the method of [19], which we sketch briefly here. The basic argument to establish a NP complexity bound on the satisfiability problem of QFBAPA is based on a theorem by Eisenbrand and Shmonin [10], which in our context says that any element of an integer cone can be expressed in terms of a polynomial number of generators. Figure 4 gives a verifier for this basic version of the algorithm. The algorithm uses an auxiliary verifier \(V_{PA}\) for the quantifier-free fragment of Presburger arithmetic. The key step is showing equisatisfiability between 2.(b) and 2.(c). If \(x_{1},\ldots,x_{k}\) are the variables occurring in \(b_{0},\ldots,b_{p}\) then we write \(p_{\beta}=\bigcap\limits_{i=1}^{k}x_{i}^{e_{i}}\) for \(\beta=(e_{1},\ldots,e_{k})\in\{0,1\}^{k}\) where we define \(x^{1}:=x\) and \(x^{0}:=\mathcal{U}\setminus x\) as before. 
Figure 3: QFBAPA's syntax.

If we define \(\llbracket b_{i}\rrbracket_{\beta_{j}}\) as the evaluation of \(b_{i}\) as a propositional formula with the assignment given in \(\beta_{j}\) and introduce variables \(l_{\beta}=|p_{\beta}|\), then \(|b_{i}|=\sum\limits_{j=0}^{2^{e}-1}\llbracket b_{i}\rrbracket_{\beta_{j}}l_{\beta_{j}}\), so the restriction \(\bigwedge\limits_{i=0}^{p}|b_{i}|=k_{i}\) in 2.(b) becomes \(\bigwedge\limits_{i=0}^{p}\sum\limits_{j=0}^{2^{e}-1}\llbracket b_{i}\rrbracket_{\beta_{j}}l_{\beta_{j}}=k_{i}\), which can be seen as a linear combination in the set of vectors \(\{(\llbracket b_{0}\rrbracket_{\beta_{j}},\ldots,\llbracket b_{p}\rrbracket_{\beta_{j}})\mid j\in\{0,\ldots,2^{e}-1\}\}\). Eisenbrand and Shmonin's result then allows deriving 2.(c) for \(N\) polynomial in \(|x|\). In the other direction, it is sufficient to set \(l_{\beta_{j}}=0\) for \(j\in\{0,\ldots,2^{e}-1\}\setminus\{i_{1},\ldots,i_{N}\}\).

The verifier of Figure 4 proceeds as follows. On input \(\langle x,w\rangle\):

1. Interpret \(w\) as:
   (a) a list of indices \(i_{1},\ldots,i_{N}\in\{0,\ldots,2^{e}-1\}\) where \(e\) is the number of set variables in \(x\);
   (b) a certificate \(C\) for \(V_{PA}\) on input \(x^{\prime}\) defined below.
2. Transform \(x\) into \(x^{\prime}\) by:
   (a) rewriting Boolean expressions according to the rules: \[b_{1}=b_{2}\mapsto b_{1}\subseteq b_{2}\wedge b_{2}\subseteq b_{1}\] \[b_{1}\subseteq b_{2}\mapsto|b_{1}\cap b_{2}^{c}|=0\]
   (b) introducing variables \(k_{i}\) for cardinality expressions: \[G\wedge\bigwedge_{i=0}^{p}|b_{i}|=k_{i}\] where \(G\) is the resulting quantifier-free Presburger arithmetic formula;
   (c) rewriting into: \[G\wedge\bigwedge_{j=i_{1},\ldots,i_{N}}l_{\beta_{j}}\geq 0\wedge\bigwedge_{i=0}^{p}\sum_{j=i_{1},\ldots,i_{N}}\llbracket b_{i}\rrbracket_{\beta_{j}}\cdot l_{\beta_{j}}=k_{i}\]
3. Run \(V_{PA}\) on \(\langle x^{\prime},C\rangle\).
4. Accept iff \(V_{PA}\) accepts.

Figure 4: Verifier for QFBAPA.

Thus, we have:

**Theorem 2** ([19]).: The satisfiability problem of QFBAPA is in NP.

From Theorems 1 and 2, we obtain the following improvement of [6, Theorem 2.8]:

**Corollary 1**.: Let \(Th_{\exists^{*}}(\mathcal{D})\) be the existential first-order theory of the formulae used in the transitions of the symbolic finite automaton \(M\).

* If \(Th_{\exists^{*}}(\mathcal{D})\in\mathrm{P}\) then \(\mathcal{L}(M)\neq\emptyset\in\mathrm{NP}\).
* If \(Th_{\exists^{*}}(\mathcal{D})\in\mathrm{C}\) for some \(\mathrm{C}\supseteq\mathrm{NP}\) then \(\mathcal{L}(M)\neq\emptyset\in\mathrm{C}\).

## 6 Decision Procedure for Satisfiability of SFAs with Cardinalities

We now consider the following generalisation of the language of a symbolic finite automaton from Lemma 2.

**Definition 5**.: A symbolic finite automaton with cardinalities accepts a language of the form: \[\mathcal{L}(M)=\left\{d\in\mathcal{D}^{*}\left|\begin{array}{l}F(S_{1},\ldots,S_{k})\wedge\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid\phi_{i}(d(n))\,\}\wedge\\ \exists\overline{t}\in M(L_{1},\ldots,L_{m}).\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid t_{i}(n)\,\}\end{array}\right.\right\}\] where \(F\) is a formula from QFBAPA.
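As a hypothetical illustration of Definition 5 (our own example, not one taken from the paper), for the automaton of Example 3 with \(S_{1}=\{\,n\in\mathbb{N}\mid\psi_{\mathrm{odd}}(d(n))\,\}\) and \(S_{2}=\{\,n\in\mathbb{N}\mid\psi_{>0}(d(n))\,\}\), one could take \[F(S_{1},S_{2})\equiv|S_{1}\cap S_{2}|\geq 4\wedge|S_{1}^{c}\cup S_{2}^{c}|=0,\] which restricts the accepted words to those with at least four positions holding a positive odd integer and no position holding any other kind of integer.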
Thus, checking non-emptiness of the language of a symbolic finite automaton with cardinalities reduces to checking whether the following formula is true: \[\begin{split}\exists S_{1},\ldots,S_{k}.& F(S_{1},\ldots,S_{k})\wedge\\ &\exists d.\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid\phi_{i}(d(n))\,\}\wedge\\ &\exists\overline{t}\in M(L_{1},\ldots,L_{m}).\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid t_{i}(n)\,\}\end{split} \tag{6}\]

To show that Theorem 1 and Corollary 1 stay true with linear arithmetic constraints on the cardinalities, we need to repeat part of the argument in Theorem 1, since if \(F\) denotes the newly introduced QFBAPA formula and \(G,H\) are the formulae shown equivalent in Theorem 1, then from \[\exists S_{1},\ldots,S_{k}.F(S_{1},\ldots,S_{k})\wedge G(S_{1},\ldots,S_{k})\] and \[\left[\exists S_{1},\ldots,S_{k}.G(S_{1},\ldots,S_{k})\right]\iff\left[\exists S_{1},\ldots,S_{k}.H(S_{1},\ldots,S_{k})\right]\] it does not follow that \(\exists S_{1},\ldots,S_{k}.F(S_{1},\ldots,S_{k})\wedge H(S_{1},\ldots,S_{k})\). Instead, the algorithm derives the cardinality constraints from each theory and then uses the sparsity of solutions _over the satisfiable regions_. In the proof, we set \(\llbracket\beta_{j}\models b_{i}\rrbracket\) to be one if the bit-string \(\beta_{j}\) satisfies the Boolean expression \(b_{i}\) as a propositional assignment, and zero otherwise. We also write \(l_{\beta}=|p_{\beta}|\) for \(\beta\in\{0,1\}^{k}\).

**Theorem 3**.: Formula (6) is equivalent to: \[\begin{split}\exists N\leq p(|F|).&\exists s\in[m].\exists\sigma:[s]\hookrightarrow[m].\exists\beta_{1},\ldots,\beta_{N}\in\{0,1\}^{k}.\bigwedge_{j=1}^{N}\exists d.\phi^{\beta_{j}}(d)\wedge\\ &\exists k_{1},\ldots,k_{m}.\exists S_{1},\ldots,S_{k},P_{1},\ldots,P_{s}.\\ &\rho(k_{1},\ldots,k_{m})\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\cup_{i=1}^{m}p_{L_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\wedge\\ &\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\wedge\cup_{i=1}^{N}p_{\beta_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\end{split} \tag{7}\] where \(p\) is a polynomial and \(|F|\) is the number of symbols used to write \(F\).

Proof.: \(\Rightarrow\)) If formula (6) is true, then there are sets \(S_{1},\ldots,S_{k}\), a finite word \(d\) and a table \(\overline{t}\) such that: \[\begin{split}& F(S_{1},\ldots,S_{k})\wedge\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid\phi_{i}(d(n))\,\}\wedge\\ &\overline{t}\in M(L_{1},\ldots,L_{m})\wedge\bigwedge_{i=1}^{k}S_{i}=\{\,n\in\mathbb{N}\mid t_{i}(n)\,\}\end{split} \tag{8}\] Thus, there exists a symbolic table \(\overline{s}\in M_{S}(L_{1},\ldots,L_{m})\) corresponding to \(\overline{t}\). We define \(k_{i}:=|\overline{s}|_{L_{i}}\), \(s:=|\,\{\,i\mid k_{i}\neq 0\,\}|\), \(\sigma\) mapping the indices in \([s]\) to the indices of the terms for which \(k_{i}\) is non-zero, and \(P_{i}=\big\{\,n\in\mathbb{N}\mid\overline{s}(n)=L_{\sigma(i)}\,\big\}\). As in Theorem 1, we have the equalities \(p_{L_{i}}=\big\{\,n\in\mathbb{N}\mid\overline{t}(n)\models L_{i}\,\big\}\), \(p_{L_{i}}=\big\{\,n\in\mathbb{N}\mid L_{i}(\overline{\phi}(d(n)))\,\big\}\), and we can show that the following formula holds: \[\begin{split}&\rho(k_{1},\ldots,k_{m})\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\cup_{i=1}^{m}p_{L_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\wedge\\ &\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\wedge F(S_{1},\ldots,S_{k})\end{split} \tag{9}\] We need to find a sparse model of (9). To achieve this, we follow the methodology in Theorem 2.
This leads to a system of equations of the form: \[\exists c_{1},\ldots,c_{p}.G\wedge\sum_{j=0}^{2^{e}-1}\begin{pmatrix}\llbracket\beta_{j}\models b_{0}\rrbracket\\ \vdots\\ \llbracket\beta_{j}\models b_{p}\rrbracket\end{pmatrix}\cdot l_{\beta_{j}}=\begin{pmatrix}c_{1}\\ \vdots\\ c_{p}\end{pmatrix}\] We remove those elementary Venn regions where \(l_{\beta}=0\). This includes regions whose associated formula in the interpreted Boolean algebra is unsatisfiable, and regions corresponding to table entries not occurring in \(\overline{t}\). This transformation gives a reduced set of indices \(\mathcal{R}\) participating in the sum. Using Eisenbrand-Shmonin's theorem, we have a polynomial (in the size of the original formula) family of Venn regions \(\beta_{1},\ldots,\beta_{N}\) and corresponding cardinalities \(l^{\prime}_{\beta_{1}},\ldots,l^{\prime}_{\beta_{N}}\), which we can assume to be non-zero, such that \[\exists c_{1},\ldots,c_{p}.G\wedge\sum_{\beta_{j}\in\{\beta_{1},\ldots,\beta_{N}\}\subseteq\mathcal{R}}\begin{pmatrix}\llbracket\beta_{j}\models b_{0}\rrbracket\\ \vdots\\ \llbracket\beta_{j}\models b_{p}\rrbracket\end{pmatrix}\cdot l^{\prime}_{\beta_{j}}=\begin{pmatrix}c_{1}\\ \vdots\\ c_{p}\end{pmatrix} \tag{10}\] The satisfiability of formula (10) implies the existence of sets of indices \(p^{\prime}_{\beta}\) satisfying the conditions derived in formula (9). However, it does not specify which explicit indices belong to these sets, nor the contents corresponding to each index. From the condition \[\rho(k_{1},\ldots,k_{m})\wedge\bigwedge_{i=1}^{s}P^{\prime}_{i}\subseteq p^{\prime}_{L_{\sigma(i)}}\wedge\cup_{i=1}^{m}p^{\prime}_{L_{i}}=\dot{\cup}_{i=1}^{s}P^{\prime}_{i}\wedge\bigwedge_{i=1}^{s}|P^{\prime}_{i}|=k_{\sigma(i)}\] it follows that there is a symbolic table \(\overline{s}^{\prime}\in M_{S}(L_{1},\ldots,L_{m})\) with \(k_{\sigma(i)}\) letters \(L_{\sigma(i)}\), and that these letters are made concrete by entries in \(P^{\prime}_{i}\) for each \(i\in\{1,\ldots,s\}\). We take the Venn regions \(\beta\in\{\beta_{1},\ldots,\beta_{N}\}\) such that \(P^{\prime}_{i}\supseteq p_{\beta}\) and label the corresponding entries in \(\overline{s}^{\prime}\) with \(\beta\). In this way, we obtain a corresponding concrete table \(\overline{t}^{\prime}\). This makes the indices in each Venn region concrete. To make the contents of the indices concrete, note that for each \(\beta\in\mathcal{R}\), since \(l_{\beta}\neq 0\), the formula \(\exists d.\phi^{\beta}(d)\) is true. In particular, this applies to each \(\beta\in\{\beta_{1},\ldots,\beta_{N}\}\). Thus, we obtain witnesses \(d_{1},\ldots,d_{N}\). We form a word by replacing each letter \(\beta\) in \(\overline{t}^{\prime}\) by the corresponding value \(d_{\beta}\).

\(\Leftarrow\)) If formula (7) is true, then there is \(N\leq p(|F|)\) where \(p\) is a polynomial, \(s\in[m]\), \(\beta_{1},\ldots,\beta_{N}\in\{0,1\}^{k}\), \(k_{1},\ldots,k_{m}\in\mathbb{N}\) and sets \(S_{1},\ldots,S_{k},P_{1},\ldots,P_{s}\) such that \[\bigwedge_{j=1}^{N}\exists d.\phi^{\beta_{j}}(d)\wedge\rho(k_{1},\ldots,k_{m})\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\cup_{i=1}^{m}p_{L_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\wedge\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\wedge\cup_{i=1}^{N}p_{\beta_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\] From \(\rho(k_{1},\ldots,k_{m})\) it follows that there is a symbolic table \(\overline{s}\in M_{S}(L_{1},\ldots,L_{m})\) such that \(|\overline{s}|_{L_{i}}=k_{i}\) for each \(L_{i}\in\{\,L_{1},\ldots,L_{m}\,\}\).
From formula (4) and \[p_{L_{1}}\cup\ldots\cup p_{L_{m}}=P_{1}\dot{\cup}\ldots\dot{\cup}P_{s}\wedge\bigwedge_{i=1}^{s}P_{i}\subseteq p_{L_{\sigma(i)}}\wedge\bigwedge_{i=1}^{s}|P_{i}|=k_{\sigma(i)}\] it follows that we can replace the formulae \(L_{i}\) occurring in the symbolic table \(\overline{s}\) by the bit-strings representing the elementary Venn regions to which the indices of the sets \(P_{i}\) belong. Moreover, thanks to the condition \(\cup_{i=1}^{N}p_{\beta_{i}}=\dot{\cup}_{i=1}^{s}P_{i}\), it follows that we can replace the letters \(L_{i}\) by the bit-strings \(\beta_{i}\). In this way, we obtain a table \(\overline{t}\in M(L_{1},\ldots,L_{m})\). We then define the corresponding word over \(\mathcal{D}\), thanks to the property \(\bigwedge_{i=1}^{N}\exists d.\phi^{\beta_{i}}(d)\). To conclude, note that: \[\{\,n\in\mathbb{N}\mid t_{j}(n)\,\}=\cup_{\{\,i\mid\beta_{i}(j)=1\,\}}P_{i}=\{\,n\in\mathbb{N}\mid\phi_{j}(d(n))\,\}\] Thus, we have that formula (6) is satisfied by the set variables \[S_{j}:=\{\,n\in\mathbb{N}\mid t_{j}(n)\,\}=\{\,n\in\mathbb{N}\mid\phi_{j}(d(n))\,\}\]

We can thus formulate the analogue of Corollary 1 for symbolic finite automata with cardinalities.

**Corollary 2**.: Let \(Th_{\exists^{*}}(\mathcal{D})\) be the theory of the formulae used in the transitions of a symbolic finite automaton with cardinality constraints.

* If \(Th_{\exists^{*}}(\mathcal{D})\in\mathrm{P}\) then \(\mathcal{L}(M)\neq\emptyset\in\mathrm{NP}\).
* If \(Th_{\exists^{*}}(\mathcal{D})\in\mathrm{C}\) for some \(\mathrm{C}\supseteq\mathrm{NP}\) then \(\mathcal{L}(M)\neq\emptyset\in\mathrm{C}\).

## 7 Conclusion

We have revisited the model of symbolic finite automata as it was reintroduced in [28]. We have obtained tight complexity bounds on their satisfiability problem. Our methodology follows the Feferman-Vaught decomposition technique in that it reduces the satisfiability problem of the automaton to the satisfiability problem of the existential first-order theory of the characters accepted by the automaton and the satisfiability problem of the existential monadic second-order theory of the indices. To combine these two distinct theories we use the ideas from the combination method through sets and cardinalities of Wies, Piskac and Kuncak [30] and the computation of an equivalent linear-sized existentially quantified Presburger arithmetic formula from the Parikh image of a regular language by Seidl, Schwentick, Muscholl and Habermehl [23]. A crucial step in the proofs is a partitioning argument for the underlying Venn regions. We build on the analysis in [19] to extend our arguments to the satisfiability problem of symbolic finite automata that impose linear arithmetic restrictions over the cardinalities of the Boolean algebra associated with the symbolic finite automaton.

In future work, we plan to extend our methods to other variants of symbolic automata to which we believe similar techniques may be applicable. Another interesting research direction would be to consider extensions of the language that allow free variables in set interpretations of the form (1), which seems to have applications to various satisfiability problems.
2307.16733
Painting baryons onto N-body simulations of galaxy clusters with image-to-image deep learning
Galaxy cluster mass functions are a function of cosmology, but mass is not a direct observable, and systematic errors abound in all its observable proxies. Mass-free inference can bypass this challenge, but it requires large suites of simulations spanning a range of cosmologies and models for directly observable quantities. In this work, we devise a U-net - an image-to-image machine learning algorithm - to ``paint'' the IllustrisTNG model of baryons onto dark-matter-only simulations of galaxy clusters. Using 761 galaxy clusters with $M_{200c} \gtrsim 10^{14}M_\odot$ from the TNG-300 simulation at $z<1$, we train the algorithm to read in maps of projected dark matter mass and output maps of projected gas density, temperature, and X-ray flux. The models train in under an hour on two GPUs, and then predict baryonic images for $\sim2700$ dark matter maps drawn from the TNG-300 dark-matter-only (DMO) simulation in under two minutes. Despite being trained on individual images, the model reproduces the true scaling relation and scatter for the $M_{DM}-L_X$, as well as the distribution functions of the cluster X-ray luminosity and gas mass. For just one decade in cluster mass, the model reproduces three orders of magnitude in $L_X$. The model is biased slightly high when using dark matter maps from the DMO simulation. The model performs well on inputs from TNG-300-2, whose mass resolution is 8 times coarser; further degrading the resolution biases the predicted luminosity function high. We conclude that U-net-based baryon painting is a promising technique to build large simulated cluster catalogs which can be used to improve cluster cosmology by combining existing full-physics and large $N$-body simulations.
Urmila Chadayammuri, Michelle Ntampaka, John ZuHone, Àkos Bogdàn, Ralph Kraft
2023-07-31T14:53:42Z
http://arxiv.org/abs/2307.16733v3
# Painting baryons onto _N_-body simulations of galaxy clusters with image-to-image deep learning ###### Abstract Galaxy cluster mass functions are a function of cosmology, but mass is not a direct observable, and systematic errors abound in all its observable proxies. Mass-free inference can bypass this challenge, but it requires large suites of simulations spanning a range of cosmologies and models for directly observable quantities. In this work, we devise a U-net -- an image-to-image machine learning algorithm -- to "paint" the IllustrisTNG model of baryons onto dark-matter-only simulations of galaxy clusters. Using 761 galaxy clusters with \(M_{200c}\gtrsim 10^{14}M_{\odot}\) from the TNG300 simulation at \(z<1\), we train the algorithm to read in maps of projected dark matter mass and output maps of projected gas density, temperature, and X-ray flux. Despite being trained on individual images, the model reproduces the true scaling relation and scatter for the \(M_{DM}-L_{X}\), as well as the distribution functions of the cluster X-ray luminosity and gas mass. For just one decade in cluster mass, the model reproduces three orders of magnitude in \(L_{X}\). The model is biased slightly high when using dark matter maps from the DMO simulation. The model performs well on inputs from TNG300-2, whose mass resolution is 8 times coarser; further degrading the resolution biases the predicted luminosity function high. We conclude that U-net-based baryon painting is a promising technique to build large simulated cluster catalogs which can be used to improve cluster cosmology by combining existing full-physics and large _N_-body simulations. keywords: galaxies: clusters: intracluster medium - cosmology: large-scale structure of Universe - machine learning ## 1 Introduction In any cosmological model, the growth of cosmological structure is driven by the interplay between the gravitational collapse of the dark and baryonic matter on the one hand and Hubble expansion and acceleration by dark energy on the other (Frenk et al., 1988). In particular, the mass function of galaxy clusters, the largest virialised structures in the Universe today, is a concrete prediction of a given cosmological model as a function of cosmic time (Mo and White, 1996; Bryan and Norman, 1998; Voit, 2005; Tinker et al., 2008, and references therein). The latest generation of _N_-body simulations predicts the cluster mass function for hundreds, or up to tens of thousands, of different values of key cosmological parameters - the relative energy densities of matter \(\Omega_{m}\) and dark energy \(\Omega_{\Lambda}\), the normalisation of the matter power spectrum at scales of 8 Mpc/_h_\(\sigma_{8}\), the dark energy equation of state \(w\), and the spectral index of the primordial overdensity spectrum \(n_{s}\) - by simulating only gravitational interactions over extremely large volumes (e.g. Prada et al., 2012; Bhattacharya et al., 2013; Klypin et al., 2016; Villaescusa-Navarro et al., 2020; Maksimova et al., 2021; Ishiyama et al., 2021). This process has been accelerated even further by the advent of differentiable simulations (Modi et al., 2021; Li et al., 2022), and is crucial for enabling likelihood-free or simulation-based inference (see, e.g., Alsing et al., 2019; Cranmer et al., 2020, and references therein). 
Since they probe the growth of structure, galaxy clusters provide cosmological constraints almost orthogonal to geometric measurements such as the Cosmic Microwave Background (CMB, Planck Collaboration et al., 2014, 2020), Type Ia supernovae (SNIa Astier et al., 2006) and Baryon Acoustic Oscillations (BAO Alam et al., 2017). Improving cluster cosmology constraints thus dramatically improves the error bars on cosmological parameters, especially on the clustering parameter \(S_{8}=\sigma_{8}\left(\Omega_{m}/0.3\right)^{1/4}\), when combined with the geometric methods (Pillepich et al., 2018). However, crucial systematic issues remain in deriving cosmological parameters from galaxy cluster observations. One issue is that the cluster mass function is systematically affected by baryonic effects, which are not accounted for in analytic models or _N_-body simulations. Radiative cooling, star formation, feedback from stars and supermassive black holes, cosmic rays, and magnetic fields all act to mod ify the cluster population. Incorporating all these processes requires very computationally expensive and high resolution (magneto-)hydrodynamical simulations of cosmological volumes; only a few dozen of these exist today (Borgani et al., 2004; Nagai et al., 2007; Le Brun et al., 2014; Planelles et al., 2014; Schaye et al., 2015; Dolag et al., 2016; Suto et al., 2017; Tremmel et al., 2017; Dave et al., 2019; Pillepich et al., 2018a), most using either Planck Collaboration et al. (2016) or WMAP-7 Komatsu et al. (2011) cosmology. These have shown that simply including baryons changes the measured \(\Omega_{m}\) and \(S_{8}\) from an X-ray selected cluster sample by 4% and 12%, respectively (Bocquet et al., 2016; Castro et al., 2021), even if the cluster masses are measured perfectly. The result holds even for lensing surveys, highlighting that the baryons affect the dark matter halos themselves (Ferlito et al., 2023). Second, measuring cluster masses is far from trivial. In galaxy clusters, for example, the space between galaxies is filled with a plasma, known as the intracluster medium (ICM). To first order, this plasma is in hydrostatic equilibrium with the total gravitational potential, so that it can be used to infer the total mass; subtracting the directly observed gas mass then yields the dark matter mass. Most commonly, this is done via the X-ray emission from the cooling cluster electrons (Reiprich & Bohringer, 2002; Vikhlinin et al., 2006) and inverse compton (IC) scattering of CMB photons, known as the Sunyaev-Zel'dovich effect (SZE Sunyaev & Zeldovich, 1972). However, the ICM also experiences other processes, like radiative cooling, subcluster- and cluster-cluster mergers, and jet and radiative feedback from stellar evolution (Battaglia et al., 2012) and supermassive black holes (SMBH) (McNamara & Nulsen, 2007; Battaglia et al., 2010; McCarthy et al., 2011, and references therein). In the central regions, the ICM deviates from gravity-only (a.k.a. self-similar) predictions primarily due to active galactic nucleus (AGN) feedback, while in the outskirts it is due to clumpiness and cosmological accretion. In summary, the intracluster medium is an imperfect tracer of the dark matter potential. For a few dozen low- and intermediate-redshift, high-mass, dynamically relaxed clusters, it is possible to measure the total mass from X-ray emission profiles under the assumption of hydrostatic equilibrium (Vikhlinin et al., 2006). 
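For context, the hydrostatic assumption relates the observed gas density and temperature profiles to the enclosed total mass through the standard expression (quoted here as textbook background, not as a formula taken from the works cited above) \[M(<r)=-\frac{k_{B}T(r)\,r}{G\mu m_{p}}\left(\frac{d\ln\rho_{g}}{d\ln r}+\frac{d\ln T}{d\ln r}\right),\] so that density and temperature profiles measured from X-ray data translate directly into a total mass profile.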
These hydrostatic mass measurements are used to construct scaling relations, which are then extrapolated to less massive and/or more distant clusters, potentially viewed with lower resolution. Even studies that use only visually relaxed clusters, which are expected to be in hydrostatic equilibrium (Allen et al., 2002; Ettori et al., 2019), are known to suffer from hydrostatic bias at the 5-10% level (Meneghetti et al., 2010). The mass scaling relation of even relatively good mass proxies like the X-ray Compton-like parameter \(Y_{X}=M_{gas}\times T_{X}\) (Kravtsov et al., 2006) has an intrinsic scatter of 0.3 dex (Chiu et al., 2022), which adds 40% uncertainty to cosmological parameter estimates (Planck Collaboration et al., 2014). The masses are often calibrated using gravitational lensing (Allen et al., 2001; Hoekstra et al., 2012; Mahdavi et al., 2013; von der Linden et al., 2014; Merten et al., 2015; Mantz et al., 2015), although it has been shown that weak lensing carries an intrinsic scatter of 20% at cluster mass scales due in part to halo triaxiality, before even incorporating baryonic effects (Becker & Kravtsov, 2011). The net result of the scatter and biases is that the cluster mass conversion varies widely in the X-ray literature and produces cosmological parameter estimates that differ from each other by up to \(2.5\sigma\). As a result, studies like Planck Collaboration et al. (2020) have avoided using clusters for cosmological constraints altogether.

The dominant approach to improving cluster cosmology involves reducing the scatter and/or bias in the scaling relations between X-ray or SZE observables and cluster mass (Shi et al., 2016). Cutting out the central 0.1-0.15\(R_{500}\) of the ICM, for example, has proved an effective way to do so (Maughan, 2007). However, this is only possible for nearby clusters observed with high-resolution instruments like _Chandra_ and _XMM-Newton_, whereas survey telescopes like _eROSITA_ have a much lower resolution, making it infeasible to reliably mask out only photons associated with cluster cores. Since _eROSITA_ is expected to find \(\sim 10^{5}\) clusters with virial mass \(M_{vir}=M_{200c}>10^{13}M_{\odot}\), the bulk of them at lower masses, higher redshifts and/or with shallow exposures (Predehl et al., 2021), it is crucial to use all the information from the X-ray images to test cosmological models. The same can be said of upcoming SZE surveys like CMB-S4 and the Simons Observatory, which will have beam sizes of order \(1^{\prime}\) (Abazajian et al., 2016; Ade et al., 2019; Abazajian et al., 2019). Restricting analyses to low-redshift, relaxed objects would eliminate the bulk of the unprecedented cluster sample obtained by these long-anticipated surveys.

A crucial, complementary approach is improving the incorporation of baryons into theoretical predictions. In the field of small-scale, near-field cosmology, the long-standing "missing satellites" problem (the numerous small-scale halos predicted by dark-matter-only simulations of Milky-Way-like systems that lack observed counterparts) can be solved entirely by incorporating baryonic feedback into the simulations (Brooks et al., 2013; Del Popolo et al., 2014), although more exotic solutions continue to be proposed. While baryonic feedback cannot disrupt cluster-scale halos in the same way, it can certainly affect their appearance at X-ray and SZE wavelengths by significantly reshaping the diffuse intracluster medium (Nagai et al., 2007; Martizzi et al., 2012; Bryan et al., 2013; Bocquet et al., 2016; Castro et al., 2021).
Two bottlenecks stand in the way of incorporating baryons into cosmological simulations. First is the computational infeasibility of running many large-volume boxes, each with a different cosmology and large enough to contain a significant number of galaxy clusters while simultaneously implementing baryonic physics. Second is the uncertainty in the baryonic models themselves - no simulation so far has reproduced every observation of galaxy properties and their evolution over time. "Baryon painting" is the post-processing of dark-matter-only simulations to capture the net effect of the baryons, had they been implemented directly. Since it uses properties already computed by the N-body simulations, baryon painting is an extremely cheap process. This addresses both the bottlenecks above - it allows us to paint baryons following many different prescriptions, and onto many N-body simulations, for a marginal computational cost. Baryon painting has already been undertaken using (semi-)analytic prescriptions that map halo properties to baryonic observables in hydrodynamic simulations (Lu et al., 2022; Osato & Nagai, 2023; Williams et al., 2023; Keruzore et al., 2023; Zhong et al., 2023). However, such halo-based models assume that the baryons in a halo have always been dynamically related to the dark matter in the same halo, whereas studies have shown that they in fact carry dynamical information from much further out in the cosmic web, from where they were transported in (Kimm et al., 2011; Liao et al., 2017). Machine learning offers a powerful new toolkit to target the problem of painting baryons onto dark-matter-only simulations. Convolutional neural networks (CNNs) are particularly good at extracting complex features from multi-dimensional inputs by learning a series of filters in order to minimise the error in predicting known properties of the training sample. They have been used extensively to predict cluster masses from mock observations (Ntampaka et al., 2015, 2016; Ho et al., 2019; Ntampaka et al., 2019; Gupta & Reichardt, 2020). Generative neural networks have been used to expand samples of galaxy cluster SZE maps given a sample from one simulation (Troster et al., 2019); they can also predict SZE maps using halo properties from dark-matter-only simulations (Thiele et al., 2020, 2022). de Andres et al. (2023) trained a variety of random forest and gradient boost algorithms to predict baryonic properties from The Three Hundred simulations (Cui et al., 2018) using a data vector of 26 quantities from the corresponding halo in the MDPL _N_-body simulation (Klypin et al., 2016) as input. They were able to recover gas mass, mass-weighted gas temperature and other baryonic properties with root-mean-squared errors of 4-8%. Using a similar technique of using vectors of halo properties to predict observable signals, Delgado et al. (2023) and Pandey et al. (2023) quantified how varying baryonic physics affects the matter power spectrum in the CAMELS simulations (Villasescusa-Navarro et al., 2022). We would like to reproduce not only the mean mapping, but also the scatter and the diversity of the cluster population, so as to produce distribution functions of and scaling relations between directly observable quantities from dark-matter-only simulations. This will allow direct comparison between the numerous existing _N_-body simulations that explore a wide variety of cosmologies, and observations. Andrianomena et al. 
(2022) have trained generative adversarial networks (GANs) with 2D images from CAMELS to generate images of gas mass, neutral hydrogen (HI), and magnetic field strength that statistically matched the properties of the training set and encoded the same cosmological information; Bernardini et al. (2022) achieved similar success with images from the FIRE simulations. Wu & Kragh Jespersen (2023) was able to predict the stellar masses of galaxies using 2D maps of the dark matter mass. These studies tell us that additional cosmological information is encoded in the spatial distribution of the baryonic and dark matter properties, over and above what can be learned from azimuthally averaged quantities or summary scalars. In this paper, we aim to combine these two insights - that the baryonic properties can be predicted from dark matter properties, and that cosmological information is encoded in the spatial maps of the dark matter - to train a machine-learning algorithm with 2D images of cluster-scale halos extracted from the magnetohydrodynamic TNG300 simulation. We apply the trained model to dark matter maps from the dark-matter-only simulation run from identical initial conditions to quantify the effect of baryons on the cluster X-ray luminosity and gas mass functions. We quantify the effect of resolution by predicting these distribution functions from lower-resolution runs of the FP simulation. In this way, we present a resolution-calibrated model to paint baryons onto existing _N_-body simulations and set the stage for cluster cosmology from direct observables. We describe the simulations and the projected images in SS2.1. An overview of CNN-based autoencoders and our implementation of one is provided in SS2.2. We show our results in SS3, share caveats and future directions in SS4, and close with conclusions. ## 2 Methods ### Input simulations The training data for our model comes from the TNG300 simulation (Pillepich et al., 2018; Nelson et al., 2019). TNG300 is well-suited to our problem in several ways. First, it offers dark-matter-only (DMO) as well as full-physics (FP) runs simulated from identical initial conditions, allowing us to quantify directly the effect of including baryons. Second, its relatively large volume of (205 Mpc/h)\({}^{3}\), i.e. -(302.6 Mpc)\({}^{3}\), produces a significant number of galaxy clusters - almost 1000 halos with \(M_{200c}>10^{14}M_{\odot}\) at \(z\lesssim 1\). Projecting these along several viewing angles further amplifies our training sample size. Lastly, the simulation is run from identical initial conditions at 3 different mass resolutions, with the coarser runs matching the resolution of existing large-volume _N_-body simulations. We assess how a model trained on a high-resolution simulation performs on its low-resolution counterpart, i.e. if this is a viable method of super-resolution painting. The TNG suite uses the moving-mesh code AREPO (Springel, 2010) to evolve dark-matter particles and gas cells in a cosmological context. Gas can cool through atomic, molecular, metal line and Bremsstrahlung channels; when it meets certain density, temperature and metallicity criteria, it forms star particles, each representing a single stellar population (Pillepich et al., 2018). Feedback from supernovae and massive stars is treated in a sub-grid manner. 
Black holes of mass 8\(\times 10^{5}M_{\odot}\) are seeded in Friend-of-Friends (FoF) halos of mass \(5\times 10^{10}M_{\odot}\), after which they accrete matter following a modified Bondi-Hoyle prescription; a fraction of the accreted material is reprocessed as kinetic or thermal feedback, depending on the Eddington ratio (Weinberger et al., 2017). TNG is also currently the only high resolution, cosmological volume, hydrodynamical suite to include magnetohydrodynamics. It is certainly the highest resolution cosmological volume simulation suite to include such a wide array of baryonic processes, with softening lengths as low as 250 pc and effective gas cell sizes as small as 47 pc (Nelson et al., 2019). Due to the chaotic nature of the _N_-body problem, even with identical initial conditions, the "same" dark matter halo looks very different between the DMO and FP runs. Matching halos between the runs is non-trivial, since the halo catalogs are rank ordered by the mass from the Friends-of-Friends catalog at the specific snapshot; this rank ordering can vary subtly due to baryonic effects as well as due to the inherently chaotic nature of the _N_-body problem. Halos between the DMO and FP runs have previously been carefully matched using SubFind (Rodriguez-Gomez et al., 2015) and LHaloTree (Nelson et al., 2015), with the latter ensuring a bijective match. Fig 1 shows pairs of matched halos between the two runs. Despite having similar masses and positions, their spatial structures are often not similar at all, largely due to the chaotic nature of the _N_-body problem. This means that it is not useful to train an algorithm to learn a mapping between baryonic images from the FP run and dark matter properties from the DMO run; these things are not spatially correlated at late times at all. Instead, we break the problem up into two steps - first training between baryons and dark matter in the FP simulation, and then quantifying the systematic differences between the halos in the FP and DMO simulations. To create our training sample, we produce maps of the dark matter mass and gas density as an unweighted projection along each of the \(x\), \(y\) and \(z\) axes; thus, the dark matter map shows the projected mass along the line of sight, which can be linearly rescaled to a lensing convergence \(\kappa\), and the gas maps are a column density. Temperature is projected with the spectral-like weighting of Mazzotta et al. (2004), \(w_{i}=n^{2}T^{-3/4}\), where \(n\) is the number density of a gas cell and \(T\) its temperature. Lastly, we use the yt and PyXSIM packages (Turk et al., 2011; ZuHone and Hallman, 2016) to generate mock X-ray surface brightness maps of the clusters. The X-ray emission from the ICM is modelled as a thermal plasma described by the Astrophysical Plasma Emission Code (APEC) model (Smith et al., 2001), which depends on the density, temperature, and metallicity of each hot gas particle \(i\), defined as meeting the following criteria: \[T_{i} >3\times 10^{5}K \tag{1}\] \[SFR_{i} =0\] (2) \[\rho_{g} <5\times 10^{-25}\mathrm{g/cm^{3}} \tag{3}\] The TNG suite traces independently the evolution of nine key elements, which produce nearly all of the X-ray emissivity at ICM temperatures and densities (Pillepich et al., 2018). The emissivity is predicted for the 0.5-2.0 keV energy range, commonly used in X-ray studies of the ICM. 
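To make the map-making procedure concrete, the sketch below illustrates how the hot-gas selection of Eqs. (1)-(3) and the unweighted and Mazzotta-weighted projections could be implemented with `illustris_python` and `numpy`. The field names follow the public TNG snapshot format, but the unit conversions, the projection width in ckpc/h, and the loader paths are illustrative assumptions rather than the exact pipeline used in this work (which relies on `yt` and `PyXSIM` for the X-ray maps).

```python
import numpy as np
import illustris_python as il

K_B, M_P = 1.380649e-16, 1.6726e-24       # erg/K, g
GAMMA, X_H, HUBBLE = 5.0 / 3.0, 0.76, 0.6774

def gas_temperature(u, x_e):
    """Internal energy [(km/s)^2] and electron abundance -> T [K] (standard TNG formula)."""
    mu = 4.0 / (1.0 + 3.0 * X_H + 4.0 * X_H * x_e) * M_P
    return (GAMMA - 1.0) * u * 1e10 * mu / K_B

def hot_gas_maps(base_path, snap, halo_id, center, width=4000.0, npix=512):
    """Column-density and Mazzotta-weighted temperature maps (z-axis projection).
    Lengths in ckpc/h; the rough density unit conversion below is an assumption."""
    gas = il.snapshot.loadHalo(base_path, snap, halo_id, 'gas',
                               fields=['Coordinates', 'Masses', 'Density',
                                       'InternalEnergy', 'ElectronAbundance',
                                       'StarFormationRate'])
    T = gas_temperature(gas['InternalEnergy'], gas['ElectronAbundance'])
    rho_cgs = gas['Density'] * 6.77e-22                      # code units -> g/cm^3, approximate
    hot = (T > 3e5) & (gas['StarFormationRate'] == 0) & (rho_cgs < 5e-25)   # Eqs. (1)-(3)

    dx = gas['Coordinates'][hot, :2] - np.asarray(center)[:2]
    edges = np.linspace(-width / 2, width / 2, npix + 1)
    w = gas['Density'][hot] ** 2 * T[hot] ** -0.75           # spectral-like weight, w = n^2 T^(-3/4)

    sigma, _, _ = np.histogram2d(dx[:, 0], dx[:, 1], [edges, edges], weights=gas['Masses'][hot])
    wT, _, _ = np.histogram2d(dx[:, 0], dx[:, 1], [edges, edges], weights=w * T[hot])
    wsum, _, _ = np.histogram2d(dx[:, 0], dx[:, 1], [edges, edges], weights=w)
    return sigma, np.where(wsum > 0, wT / np.maximum(wsum, 1e-30), 0.0)

# Selecting the cluster sample from the group catalog (M200c > 1e14 Msun), for illustration:
# halos = il.groupcat.loadHalos(base_path, snap, fields=['Group_M_Crit200', 'GroupPos'])
# sample = np.where(halos['Group_M_Crit200'] * 1e10 / HUBBLE > 1e14)[0]
```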
The X-ray emission computed here does not include the AGN luminosity itself, which is expected to be proportional to the instantaneous accretion rate and follow a power-law spectrum rather than APEC (Biffi et al., 2018). In principle, using only halo cutouts could produce a bias compared to observational surveys, since there could be contributing emission from other structures along the line of sight. ZuHone et al. (2022) quantified this effect by creating mock X-ray images from halo cutouts and comparing them to full-box projections at fixed snapshots as well as using a complete light cone, interpolating between simulation snapshots. They showed that using projections from halo cutouts was biased low by only up to 5% compared to projecting the full lightcone, with the bulk of the sample showing much lower bias. Therefore, we opt to ignore the effects of line-of-sight structure and use the simpler model of including only the gas particles that are bound to a given halo. The goal is to create high-resolution maps of the gas properties, which can then be passed through a mock observation pipeline to reproduce the resolution of the instrument of choice. We therefore produce simulated cluster images that are 4 Mpc wide and contain 512x512 pixels, centered on the gravitational potential minimum of the Friend-of-Friends halo associated with each cutout; the images thus have a resolution of 7.8 kpc/pix. At the 0.5" resolution of _Chandra_, this corresponds to 2.0/0.6/0.4 pixels at z = 0.1/0.5/1. For a survey telescope like _eROSITA_ with an average PSF FWHM of 25', this is far smaller than a single pixel in our images. From the TNG300 snapshots at \(z=[1.0,0.5,0.3,0.0]\) we extract 761 halos that meet our virial mass criterion of \(M_{200c}>10^{14}M_{\odot}\). We project each cluster along the \(x\), \(y\) and \(z\) axes, since most clusters are significantly triaxial; thus, we have 2283 sets of images. In principle, CNNs are not invariant to rotation or reflection (e.g. Zeiler and Fergus, 2013, and references therein), but we do not add rotations or flips of the data as augmentations since we consider the training sample large enough, and because in practice the clusters are already randomly oriented with respect to the Cartesian axes of the simulation box, so that the lack of rotation does not systematically bias our training sample. The properties of the galaxy clusters are shown in Fig 2. The star formation rates, supermassive black hole (SMBH) masses, and SMBH accretion rates all follow nearly log-normal distributions, representing an unbiased selection in the amount of stellar feedback and instantaneous and cumulative AGN feedback in the sample clusters. These feedback processes are expected to be the major contributors, besides gravitational potential, to the X-ray luminosity of a cluster. We randomise the order of the images and split them into 80% training, 10% validation, and 10% testing sets.

Figure 1: Examples of dark matter halos from the full-physics (FP) run and their counterparts in the dark-matter-only (DMO) run. The color shows the projected dark matter mass along the line of sight. The halos have been matched bijectively (i.e. in both directions, DMO \(\leftrightarrow\) FP) by tracing particle IDs from the initial conditions, i.e. these halos contain mostly the same DM particles from the initial snapshot. Nevertheless, they look very different by z\(\leq\)1, due to the chaotic nature of the N-body problem.

The machine learning model is thus trained on 1826 images, unsorted by any cluster property; the model is validated at each step with 228 image pairs. We remind the reader that the training never aims to reduce the validation loss, only the training loss. Computing the validation loss on the fly allows us to assess whether the model is overfitting, i.e. learning features that only pertain to the training sample. All the results presented in this paper are for the test sample, which the network has never seen in the training process. Finally, the neural network is trained to reproduce the gas maps given the dark matter maps as input. Cost-minimising algorithms perform best when the inputs and outputs span a somewhat uniform range between 0 and 1. We test two different normalizations. The first, which we call 'minmax', maps all the pixel values to the space between (0,1). The second, which we call '4\(\sigma\)', ensures that the (0,1) range contains \(\mu\pm 4\sigma\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of the pixel values, respectively. This removes about 0.2% of the pixels but allows the remaining values to fill a much greater portion of the training parameter space. When applying the model trained on FP maps to maps from the DMO simulation, it is crucial to remember that the dark matter halos have slightly different structure in the absence of baryons. In other words, the minima, maxima, mean and standard deviation of the dark matter mass maps are different between the two runs. The model, however, assumes a transformation from the (0,1) to physical space based on the FP maps. Therefore, we use the parameters from the FP maps to normalise the dark matter maps from the DMO simulation as well, before passing them through the trained model.

### The neural network

The task of baryon painting can be framed as an image-to-image task, where the input is the projected _N_-body simulation results and the output is a 2D observable (e.g., X-ray surface brightness map) or other desired 2D output (e.g., projected gas mass, stellar mass, star formation rate) that is derived from a full-physics simulation. The U-net architecture (e.g. Ronneberger et al., 2015) is a class of image-to-image deep learning algorithms that is very popular in the field of medical imaging, due to its ability to capture features on a variety of scales. U-nets have been used to automate the process of image segmentation (e.g., Ronneberger et al., 2015), super-resolution image reconstruction (e.g., Mao et al., 2016; Yang et al., 2016), and image colorization (e.g., Zhang et al., 2016). Painting baryons is analogous to the task of image colorization; the input gray-scale image is the dark matter map, and the colorised counterpart is the baryonic image, which can have multiple colors, often referred to as "channels" in the ML literature. U-nets have already been applied to a variety of image processing tasks in astronomy (e.g. Giusarma et al., 2019; Jeffrey et al., 2020; Vojtekova et al., 2021).

Figure 2: Properties of the galaxy clusters used in the training sample. All clusters are drawn from the TNG300 simulation (Pillepich et al., 2018). The sample has a mean (median) virial mass \(M_{200c}=1.6(1.3)\times 10^{14}M_{\odot}\). The star formation rates, SMBH masses (which trace the cumulative AGN feedback over the SMBH history) and the instantaneous SMBH accretion rate all follow nearly log-normal distributions. 
The instantaneous stellar feedback and AGN feedback rates, which are tied to the SFR and SMBHAR, therefore span three orders of magnitude each. Besides cluster merger and accretion activity, these are expected to be the major contributors to the total X-ray luminosity. A U-net is a subclass of deep convolutional neural networks, which typically employ a series of convolutional and pooling layers to extract features from the input image. U-nets reduce the input image to a sparse representation, and then expand the sparse representation to create an output image. To retain information about small-scale features, U-nets use skip connections that append blocks of similar sizes across the model. Figure 3 shows a schematic of the U-net used in our baryon painting model. The contracting path comprises a series of convolutional and pooling layers, shown as green encoder blocks that reduce the input image to a sparse representation. The expanding path comprises a series of deconvolution layers, shown as blue decoder blocks that reconstruct the output image. Horizontal black arrows show the skip connections, the feature that distinguishes U-Nets from a more traditional encoder architecture. These skip connections append each layer in the expanding path with the similarly sized outputs from encoder blocks in the contracting path. These connections ensure that spatial fidelity is not lost in the convolution process and that the output preserves the small-scale spatial structure of the input. Full architecture details are given in Appendix A. Our U-net model is modeled on an example from the Keras team 1 and is described in detail in Table A1 implemented in Keras (Chollet, 2015) with a Tensorflow (Abadi et al., 2016) backend. We use a ReLU activation (Agarap, 2018) for all but the final output, which instead uses a tanh activation. Figure 3: Architecture of the U-Net algorithm. This is an image-to-image deep learning architecture, which has proved very successful in the fields of image colorisation and super-resolution painting. The contracting path is shown on the left, and the expanding path on the right; horizontal arrows indicate skip connections. Each convolution block includes a 2D convolution with a kernel size of (3,3), a batch normalization, and a ReLU activation, with the whole sequence repeated twice. The encoder block consists of a convolution followed by a MaxPool of kernel (2,2). Each layer extracts features on ever larger spatial scales, reducing the image to a sparser representation. The decoder is an inverse 2D convolution, a concatenation with the corresponding contracting path layer, and another convolution. At each step, therefore, it expands the sparse representation of the previous layer, while concatenating with the corresponding layer of the contracting path ensures that features are reproduced not just statistically, but at the same locations. We utilize "same" padding for each convolutional layer. The model has \(\sim\)31 million free parameters and is compiled with an Adam optimiser (Kingma & Ba, 2014) with the default learning rate and trained to minimize a pixel-by-pixel sum of the mean squared error. The validation set is used to assess for overfitting. Several model variations are explored; these are described in the next Section. ### Model variations For each baryonic property, we train three models. In the base model, we used the'minmax' normalization on both the input and output, and passed it through the U-net. 
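For concreteness, a minimal Keras sketch of the encoder-decoder pattern described above (repeated Conv-BatchNorm-ReLU blocks, MaxPool downsampling, transposed-convolution upsampling with skip connections, and a tanh output trained on a pixel-wise MSE) might look as follows. The depth and filter counts here are illustrative assumptions and do not reproduce the full ~31-million-parameter architecture of Table A1; the base 'minmax' model corresponds to fitting such a network directly on the rescaled maps.

```python
from tensorflow.keras import layers, models, optimizers

def conv_block(x, filters):
    """Two rounds of Conv2D(3x3, 'same') + BatchNorm + ReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
    return x

def build_unet(input_shape=(512, 512, 1), base_filters=16, depth=4):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    # Contracting path: convolve, store the skip connection, downsample.
    for level in range(depth):
        x = conv_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 2 ** depth)          # bottleneck
    # Expanding path: upsample, concatenate the matching skip, convolve.
    for level in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** level, 2,
                                   strides=2, padding='same')(x)
        x = layers.Concatenate()([x, skips[level]])
        x = conv_block(x, base_filters * 2 ** level)
    outputs = layers.Conv2D(1, 1, activation='tanh')(x)   # single baryonic output channel
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adam(), loss='mse')  # pixel-by-pixel mean squared error
    return model

# model = build_unet()
# model.fit(dm_train, xray_train, validation_data=(dm_val, xray_val), epochs=100)
```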
In the next iteration, we added a step to the end of the U-net that set all the output pixels to zero if the corresponding dark matter pixel is empty. This set of models is labelled'minmax-mask'. It encodes the physical intuition that baryons follow the dark matter, and there should not be emission or gas mass in the absence of dark matter. U-net algorithms, and convolutional neural networks in general, operate by identifying features on different scales, and therefore may not perfectly capture boundaries, especially when they vary dramatically between training images. If a lot of empty pixels are erroneously assigned even small, non-zero values, and the algorithm is minimising the mean squared error, it compensates by underestimating many of the remaining pixels. Lastly, we changed the normalization to \(4\sigma\) (see SS2.1), and retained the mask. The'minmax' normalisation can result in the training values filling a very small region of the (0,1) space, in order that a very small number of outlier pixels are included in this range. By definition, \(\mu\pm 4-\sigma\) excludes only 1 in 15,787 pixels, or 0.006%, but as the right column of Fig 5 shows, this allows the remaining values to fill the (0,1) space much more evenly. This allows the training to capture smaller differences between the true and predicted values. On the other hand, the densest, hottest pixels may contribute disproportionately to the total emissivity of the ICM, and it is possible that excluding them may cause significant biases in the prediction. For most models, we took the logarithm of both the input and output quantities before the renormalisation step. Since the temperature maps have limited dynamical range, we also trained a model to reproduce the temperature in linear, rather than log, space; we call this model \(T_{lin}\). ## 3 Results Here, we present the results of our trained models and highlight technical lessons. We then apply the trained model to simulations of progressively lower resolution, to quantify the systematic effects of resolution and baryons on the cluster X-ray luminosity function and scaling relations. ### Predicting baryon maps from dark matter mass in the FP simulations First, we trained the model to predict baryonic maps - projected gas mass, spectral-weighted projected temperature, and X-ray surface brightness - given the projected dark matter mass as input. While the gas density is not a direct observable, it is a key determinant of the X-ray luminosity; once the temperature is measured, the gas density profile can be reconstructed from surface brightness maps. For each input-output pair in Table 2, we trained three models with the parameters in Table 1. The best-fit model for the X-ray luminosity is shown in Fig 4. The top row shows the dark matter map, the second shows the true X-ray luminosity, the third row shows the model prediction, and the bottom row shows the fractional difference between the true and predicted gas properties in the unphysical, normalised training space where the loss was minimised. Immediately, there is excellent agreement in the magnitude as well as structure between the true and predicted maps. While the models were trained to minimise the MSE, Table 2 shows the mean percentage error MPE = \((L_{\rm X,pred}-L_{\rm X,true})/L_{\rm X,true}\). Here, \(L_{\rm X}\) is the sum of surface brightness over each image, reconstructed to physical units, and its median value for the best-fit model is just -1.67%; the temperature performs similarly well. 
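As a minimal illustration of the three preprocessing variants described above ('minmax', 'minmax-mask', '4σ-mask') and of the image-level mean percentage error just defined, the helper functions below implement the two rescalings, the DM=0 output mask, and the MPE computed after transforming a predicted log-map back to physical units. The clipping and bookkeeping conventions here are assumptions, not the exact code used for Table 2.

```python
import numpy as np

def minmax_params(imgs):
    """'minmax': map the full range of (log) pixel values onto (0, 1)."""
    return imgs.min(), imgs.max()

def four_sigma_params(imgs):
    """'4sigma': choose the (0, 1) range to span mu +/- 4 sigma of the pixel values."""
    mu, sigma = imgs.mean(), imgs.std()
    return mu - 4 * sigma, mu + 4 * sigma

def normalise(imgs, lo, hi):
    return np.clip((imgs - lo) / (hi - lo), 0.0, 1.0)

def denormalise(norm_imgs, lo, hi):
    return norm_imgs * (hi - lo) + lo

def mask_dm_free(pred, dm_maps):
    """'-mask' variants: zero the prediction wherever the input DM map is empty."""
    return np.where(dm_maps > 0, pred, 0.0)

def image_mpe(pred_norm, true_norm, lo, hi):
    """Mean percentage error on the image-integrated quantity (e.g. L_X),
    after converting log-normalised maps back to physical units."""
    pred = 10 ** denormalise(pred_norm, lo, hi)
    true = 10 ** denormalise(true_norm, lo, hi)
    return 100.0 * (pred.sum() - true.sum()) / true.sum()
```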
The gas density is underpredicted at the 10% level even in the best fit model. The first column shows one of the more dramatic - although still minor - cases of model underprediction, which is mostly in infalling groups at the outskirts. The X-ray luminosity maps were generated using only the hot gas, i.e. \(T>3\times 10^{5}K\) and \(\rho_{\rm g}<5\times 10^{-25}\)g/cm\({}^{3}\), criteria which are unlikely to be met by most of the gas in low-mass groups. Groups that are infalling into a cluster environment can be significantly shock heated to uncharacteristic degrees in the X-ray. Such early-stage group infalls are rare in our sample of 761 clusters, and we presume that this kind of system would be predicted better by an algorithm trained on a larger, more diverse cluster sample. The gas density and temperature maps (Figs. 1 and 2) perform remarkably well, although they predict less substructure than actually produced in the simulations. This is likely the highly non-linear effect of radiative cooling, which does not follow directly from the instantaneous gravitational potential. Nevertheless, the median reconstruction errors for these two models are 11.4% and 1.02%, respectively.

### Effect of input normalisation and masking

\begin{table} \begin{tabular}{l c c} **Name** & **Normalization** & **Mask DM=0 pixels** \\ \hline minmax & minmax & No \\ minmax-mask & minmax & Yes \\ \(4\sigma\)-mask & \(4\sigma\) & Yes \\ \end{tabular} \end{table} Table 1: For each of the baryonic properties in Table 2, we trained the algorithm three times with different choices of input normalisation and output masking, as named here.

Fig 5 compares the distributions of the pixel values in the true and predicted images for each of the twelve trained models. The left and center columns use the 'minmax' normalisation, while the right uses '\(4\sigma\)'; the center and right columns additionally mask the outputs in all the pixels where the input (DM mass) is 0. The base 'minmax' model, in the left column, performs best when predicting \(T_{lin}\), recovering the mean and only underestimating the standard deviation of the pixel values by \(\sim 25\%\). For all the other outputs, however, the predicted images have a different mean from the ground truth - the density and X-ray emissivity are systematically overpredicted, while the temperature is underpredicted. This reflects the tendency of CNNs to bias predictions towards the mean. A very small number of very dense pixels contain very high gas density and X-ray emissivity, and due to radiative cooling some of these pixels will have very low temperatures. Since they are so unlikely, however, the model will assign them values that are closer to the mean; to compensate, it will also adjust the predictions in the rest of the distribution so that the MSE over the entire image is low. 
For the gas density and X-ray emissivity, this means that densest/brightest pixels are underpredicted, but the remaining pixels are denser/brighter on average; similarly, the coldest pixels are predicted to be hotter than the true value, but to compensate, many of the remaining pixels are predicted to be colder than they ought to be. In all cases, the effect is dominated by the highest density pixels. Masking out the output pixels where the input pixels are 0, as shown in the middle panel, limits the number of pixels where the CNN can compensate for this bias. Where earlier it could assign a small, non-zero value to the 0 pixels, which would add up to counter the predicted deficit in the densest pixels, it now cannot. This forces it to improve the prediction in the space where there actually is dark and baryonic matter. Hence the middle column of Fig 5 performs significantly better than the left column for all output properties.

\begin{table} \begin{tabular}{l l l l l} **Name** & **Output** & Best model & Median MPE\({}^{\ddagger}\) (\%) & Mean MPE (\%) \\ \hline \(\Sigma_{DM}\rightarrow\rho_{g}\) & log(Projected hot gas density) & 4\(\sigma\)-mask & -11.4 & -16.1 \\ \(\Sigma_{DM}\to T_{X}\) & log(Mazzotta-weighted temperature of hot gas) & minmax-mask\({}^{*}\) & -0.79 & -0.90 \\ \(\Sigma_{DM}\to L_{X}\) & log(Projected X-ray luminosity) & minmax-mask & -1.67 & 0.88 \\ \end{tabular} \end{table} Table 2: Summary of the best-fit models. In each case, the input is the projected dark matter mass in each pixel, \(\Sigma_{DM}\). \({}^{\ddagger}\) MPE – Mean Percentage Error, computed over the entire image. * Marginally worse MSE with 4\(\sigma\)-mask.

Figure 4: Results of a model trained to reproduce the projected X-ray flux given the dark matter maps as input. The projected dark matter mass density is shown in the top row, followed by true (second row) and predicted (third row) X-ray flux. The bottom row shows the fractional error between the true and predicted gas maps in the unphysical training space. Converted back to the physical space, the errors on a pixel-to-pixel level can be factors of several, but when summed over the entire image, the median (mean) MSE is 6.11 (13.36)%.

Figure 5: PDF of pixel values of the true (blue) and predicted (orange) images for the various models, in non-physical units between 0 and 1. In the left panels, the inputs are transformed so that the logarithm of their values fits in the (0, 1) space; this is the baseline or 'minmax' model. Masking out the empty regions of the dark matter maps, as shown in the middle row, improves the similarity between the true and predicted distributions; this is the 'minmax-mask' model. In the right panels, in addition to masking the outputs where input is 0, the input normalization is changed so that the (0,1) range contains \((\mu\pm 4\sigma)\) of the values rather than the full range of input values, which otherwise may fill a very limited range of the training space. This is the '\(4\sigma\)-mask' model. All the models tend to underpredict the values of the brightest pixels, and conversely overpredict the faintest pixels; this is the well known problem of bias towards the mean in regression models.

Some properties, like the gas density and X-ray flux, have a small number of outliers that have uncharacteristically high or low values. 
If all the pixels are required to live in the (0,1) space, most of the pixels available for training actually live in a small fraction of that space. The \(4\sigma\) normalization, by dropping just 0.06% of the pixels, allows the remaining pixels to fill the (0,1) space much more evenly, as shown in the right column of Fig 5. This should, in principle, allow a smoother mapping between the input and output quantities, since the dynamic range of each is effectively expanded. The trade-off is that a few very faint or very bright pixels fall outside the training domain. The gas density shows a clear preference for the \(4\sigma\) normalisation, whereas for the others, it is not clear by eye whether the peakier predicted distribution with the 'minmax' normalisation offsets the other differences between the distributions. When integrated over the entire image, however, Table 2 shows that the temperature and X-ray flux are better predicted with the 'minmax' normalisation with output masking.

### Radial profiles

Fig 6 reduces the 2D images to azimuthally averaged radial profiles, and shows the excellent agreement between the true (black) and predicted (blue) values. The solid line in each plot shows the median profile at that radius out of the test sample of 228 clusters. In the top rows, the shaded region ranges from the minimum to the maximum value in each radial bin; in the bottom row, the shaded region is the interquartile range, i.e. from the 25th to the 75th percentile. The comparison shows that while there are some spiky features in some of the true profiles, these are rare outliers that do not appear in the interquartile range. Image-to-image baryon painting thus also reproduces the radial profiles of the intracluster medium and most of its diversity.

Figure 6: Median profiles of the azimuthally averaged quantities predicted from the best-fit models for each property. Profiles were predicted for only the test sample. In the top row, the shaded region represents the full range of the true and predicted profiles; in the bottom row, it shows the interquartile range, i.e. between the 25th and 75th percentiles at each radius. This shows that while some of the true profiles show some spiky features at all radii, these are outliers that do not appear with the interquartile range.

### Predicting X-ray and gas mass distribution functions

Summing up the X-ray flux and gas column density over each of the images yields a single value for X-ray luminosity and gas mass \(M_{\rm g}\) for each cluster. We compare the distribution functions of these properties to the true values from the FP simulation in Fig 7. This is not something the model was explicitly trained to reproduce - it only learned the mapping between a given dark matter map and its baryonic counterpart. The dark matter mass function does constrain the distribution functions to first order, but because galaxy clusters exhibit a large scatter in ICM properties at fixed halo mass, this does not have to translate into a correct distribution function of baryonic properties. We further emphasise that while these are integrated quantities, they do in fact make use of the 2D information. If instead we had trained a model where the dark matter mass was provided as a single value per halo, the output could not have the scatter seen in the "true" simulated population. 
Using 2D images is equivalent to constructing an input vector that contains the halo mass, mass accretion history, shape parameters, redshift, and other quantities that drive the scatter in cluster observables. The best fitting models --'mimmax-mask' for \(L_{X}\) and '\(4\sigma\)-mask' for \(M_{\rm g}\) -- produce cluster distribution functions that align very well with the true values within the Poisson \(1\)-\(\sigma\) uncertainties. Without masking the DM-free pixels, both distribution functions are biased high; excluding the rare, but very bright, pixels with the \(4-\sigma\) normalisation has a greater effect on \(L_{X}\propto n_{\rm g}^{2}\) than \(M_{\rm g}\propto n_{\rm g}\). Despite training over less than one decade in halo mass, we reproduce the cluster luminosity function over three orders of magnitude. ### Painting a DMO simulation with a model trained on full-physics simulations As shown in Fig 1, the dark matter halos produced in the DMO runs look different from their FP counterparts. This is partly due to the stochasticity introduced by the _N_-body problem; furthermore, baryons introduce a variety of systematic but highly non-linear effects on their dark matter halos in ways that depend on the halo mass and the baryonic feedback prescriptions (Kochanek & White, 2001; Pedrosa et al., 2009; Duffy et al., 2010). We therefore expect that training our model on DM properties extracted from the FP simulation will be biased compared to if we had DM halos from a simulation without baryons, if somehow the stochasticity of the _N_-body problem could be removed. The next step, therefore, is to check whether models trained on dark matter maps from the FP simulation perform adequately when applied to dark matter maps from the DMO simulation. The model assumes that the mapping between the physical and training (0,1) space is based on the FP simulation; therefore, we use the same numbers to renormalise the DMO maps to the training space. Since the input is now from an _N_-body simulation, there is no ground truth for the baryonic properties. Nevertheless, we can test whether the model can predict the correct distribution function of baryonic observables. Fig 8 shows that the models do indeed succeed in reproducing the FP distribution functions even when using inputs from the DMO run. The X-ray luminosity function is slightly overestimated at \(10^{43.5-45}\) erg/s, whereas the gas mass distribution from the best-fit ('\(4\sigma\)-mask') model agrees remarkably Figure 7: The true (black, solid) and predicted distribution function for the cluster X-ray luminosity (left) and gas mass (right), computed as a simple sum over the images. The shaded bands indicate Poisson uncertainties. The gas mass function is best reproduced using the \(\mu\pm 4-\sigma\) mask on the inputs (orange, dotted), whereas \(L_{X}\) performs better if the renormalised dark matter projected mass is allowed to fill the entire (0,1) space (green, dot-dashed); this is likely because the few dense pixels contribute super-linearly to the luminosity but only linearly to gas mass. Masking out the empty pixels improves all the models. well with the FP distribution within the entire training domain. Both models slightly overpredict the distribution function when applied to the _N_-body simulation, and because the \(\Phi(M_{g})\) was slightly underpredicted using inputs from the FP run, the two effects cancel out, producing a very good agreement between the predicted gas mass function from DMO and the true distribution from FP. 
This captures a well known difference between _N_-body simulations and their full-physics counterparts. The former systematically produce more ultra-dense pixels, which in the FP runs get smoothed out by baryonic feedback. We checked that using the '\(4\sigma\)-mask' overcompensates for this effect in \(\Phi(L_{X})\), and slightly underpredicts the galaxy cluster X-ray luminosity function. In practice, both these biases can be computed and accounted for. For each bin in \(L_{X}\) or \(M_{g}\), we can compute the bias between the true and predicted distribution functions. This correction factor is the net effect of the CNNs bias towards the mean, and the tendency of _N_-body simulations to produce more ultra-dense clumps than their full-physics counterparts. ### X-ray - mass scaling relation Fig 9 shows the scaling relation between the dark matter mass and X-ray luminosity in the FP simulation, compared with predictions for the FP test sample and for the DMO sample. The best-fit scaling relations are remarkably similar. Further, the U-net naturally predicts the scatter in these scaling relations, which are captured in the detailed spatial structure of the cluster and therefore the intracluster medium. While the scatter may physically source from gas clumping, star formation, and stellar and AGN feedback, the success of this model tells us that all these physical processes are correlated with the detailed spatial structure of the halo; baryons follow the dark matter, and denser regions correspond to more active baryonic feedback. The U-net architecture is therefore capable of numerically describing these spatial correlations and reproducing the observed richness of cluster properties. The fact that the machine learning model can reproduce the X-ray luminosity function, \(L_{X}-M_{DM}\) scaling relation and the scatter therein, means that it can be applied to any DMO simulation to paint the Illustris-TNG physics model onto it. Predicting a single property for \(\sim 2700\) DMO images on 2 GPUs took 73 seconds. This means that every DMO simulation today can be converted into a large-volume full-physics realisation of Illustris-TNG. Similar models can, and must, be trained on other simulations like EAGLE (Schaye et al., 2015) and SIMBA (Dave et al., 2019) to numerically capture the existing uncertainty between theoretical models of galaxy evolution. ### Resolution effects Lastly, TNG300-1 has an exquisite sub-kpc resolution that is much finer than most large-volume _N_-body simulations. To apply this model to existing _N_-body simulations, it is important to assess how it performs on dark matter maps from lower-resolution simulations. Fig 10 shows close agreement between the predictions for FP-2, which has 2 times lower resolution in space (and therefore 8 times lower in mass) than the training sample, and the ground truth from FP-1, for all but the brightest and most massive galaxies. Figure 8: The X-ray luminosity (left) and gas mass (right) distribution functions predicted using the corresponding best-fit models and dark matter maps from the FP (blue) and DMO (orange) simulations. The shaded regions, again, indicate Poisson noise. There is excellent agreement between the predicted and true distribution functions, which the model was not trained to reproduce. Note further the three orders of magnitude in \(L_{X}\) predicted from one order of magnitude in \(M_{DM}\). of training on a suite of simulations that is not currently in the public domain, we leave this test to future work. 
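The distribution functions and per-bin correction factors used throughout this section can be assembled in a few lines. The sketch below assumes arrays of image-integrated luminosities for the true FP clusters and for the predictions painted onto DMO maps; the binning and the use of the approximate TNG300 comoving volume are illustrative choices rather than the exact conventions of Figs 7, 8 and 10.

```python
import numpy as np

def distribution_function(lx, volume, bins):
    """Number density of clusters per dex in L_X, with Poisson errors."""
    counts, edges = np.histogram(np.log10(lx), bins=bins)
    dlog = np.diff(edges)
    return counts / (volume * dlog), np.sqrt(counts) / (volume * dlog)

def bin_correction(lx_true_fp, lx_pred_dmo, volume, bins):
    """Per-bin factor mapping the painted-DMO distribution function onto the FP truth.
    It absorbs both the CNN's bias towards the mean and the extra ultra-dense
    clumps present in N-body halos relative to their full-physics counterparts."""
    phi_true, _ = distribution_function(lx_true_fp, volume, bins)
    phi_pred, _ = distribution_function(lx_pred_dmo, volume, bins)
    return np.where(phi_pred > 0, phi_true / phi_pred, 1.0)

# Example with an assumed binning and the approximate TNG300 volume of (302.6 Mpc)^3:
# bins = np.linspace(42.5, 46.0, 15)            # log10(L_X / erg s^-1)
# corr = bin_correction(lx_true_fp, lx_pred_dmo, volume=302.6**3, bins=bins)
# phi_calibrated = distribution_function(lx_pred_dmo, 302.6**3, bins)[0] * corr
```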
### Multi-wavelength predictions While the ongoing _eROSITA_ all-sky survey motivated an emphasis on the X-ray luminosity function, the coming years will see game-changing SZ surveys such as CMB-S4 (Abazajian et al., 2019) and the Simons Observatory (Ade et al., 2019); at radio wavelengths, LOFAR (Cassano et al., 2010; Savini et al., 2019), MeerKAT (Knowles et al., 2022) and ASCAP are already setting an exciting stage for SKA (Gitti et al., 2018). Modeling observations at both sub-mm and radio wavelengths must be done very carefully to account for systematics in the interferometric observation pipeline. In addition, radio synchrotron emission depends not only on the bulk properties of the gas, but also relic populations of relativistic electrons and the details of cosmic ray feedback. This makes radio observations harder to link to cosmology than X-ray and SZ. However, calculating \(Y_{SZ}\) from the simulations is simple, and pipelines do exist for mocking its observations with real telescopes. Applying our technique to the \(Y_{SZ}\) function of galaxy clusters could allow us to catch unnoticed systematics in X-ray surveys, or vice versa. ### The effect of adding dynamical, redshift and SMBH information We trained additional models to use additional input channels besides the projected dark matter mass. First, since the intracluster medium is known to be sensitive to the mass accretion history of the cluster, we added maps of the average velocity magnitude of the dark matter particles. Second, since AGN feedback is known to regulate the thermodynamics of the ICM, we created an additional channel that included the positions and instantaneous accretion rates of all the SMBH within the cluster. Somewhat surprisingly, neither of these models performed better than the base model. This is likely because dynamical and heating effects are local, and lost in projected images. Further, since AGN reside preferentially in the densest regions of clusters, information about their effect may have been encoded in the central dark matter distribution of each cluster. We remind the reader again that direct emission from the AGN is not included in our maps; this would be a helpful next step, which would allow the direct interpretation of X-ray luminosity functions from lower-resolution surveys like _eROSITA_ where deconvolving the broad PSF of the AGN can prove challenging. Finally, we trained models using only clusters at fixed redshift, i.e. one model each for z = 1.0, 0.5, 0.3 and 0. If there is additional information encoded in the cluster redshift, these models ought to outperform the one where clusters at all redshifts are treated equally. We find that this is not the case. If there is additional information in the cluster redshift, it is not sufficient to compensate for the reduced training size. This could nevertheless be a useful feature to incorporate into future models as and when larger training sets become available. Figure 10: Distribution functions of the X-ray luminosity and gas mass predicted for dark matter mass maps from the medium (FP-2) and low (FP-3) resolution runs of TNG300, which have mass resolutions 8 and 64 times lower than FP-1, on which the training was performed. Predictions for FP-2 agree with the ground truth for luminosities below \(10^{45}\) erg/s and gas mass below \(10^{14}M_{\odot}\); for more massive/brighter clusters, the model trained on FP-1 over- (under-) predicts the \(L_{X}\) (\(M_{\rm g}\)) distribution function. 
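As noted above, calculating \(Y_{SZ}\) from the simulations is straightforward, since the thermal SZ signal is set by the line-of-sight integral of the electron pressure. The sketch below follows \(y=(\sigma_{T}k_{B}/m_{e}c^{2})\int n_{e}T_{e}\,dl\), discretised per pixel from the same gas cutouts used for the X-ray maps; the electron-count estimate, unit handling, and assumed hydrogen mass fraction are illustrative assumptions rather than a validated mock-observation pipeline.

```python
import numpy as np

SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2
K_B = 1.381e-16       # erg/K
ME_C2 = 8.187e-7      # electron rest energy, erg
M_P = 1.6726e-24      # proton mass, g
X_H = 0.76            # assumed hydrogen mass fraction

def compton_y_map(pos_xy, mass_g, x_e, temp_k, center, width_cm, npix=512):
    """Projected Compton-y: sum of sigma_T k_B T_e N_e / (m_e c^2 A_pix) per pixel.
    mass_g is the gas-cell mass in grams; N_e = x_e X_H m / m_p electrons per cell."""
    n_electrons = x_e * X_H * mass_g / M_P
    weight = SIGMA_T * K_B * temp_k * n_electrons / ME_C2
    edges = np.linspace(-width_cm / 2, width_cm / 2, npix + 1)
    dx = pos_xy - np.asarray(center)[:2]
    y_map, _, _ = np.histogram2d(dx[:, 0], dx[:, 1], [edges, edges], weights=weight)
    pixel_area = (width_cm / npix) ** 2
    return y_map / pixel_area

# Integrated Y_SZ within the map is y_map.sum() * pixel_area (in cm^2); divide by the
# angular-diameter distance squared to express it in the usual steradian/arcmin^2 units.
```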
### Improving super-resolution baryon painting Fig 10 showed that if the model is applied to dark matter maps from simulations that have far lower native resolution than the training set, the predicted distribution functions have a different shape from the ground truth. This is because both over- and under-densities on the smallest scales are smoothed out, such that they are no longer captured by the convolutional filters trained at higher resolution. Super-resolution models have successfully been trained using CNN architectures (McCann et al., 2017; Fukami et al., 2019; Soltis et al., 2022), and are a logical next step in this line of study. A particularly promising approach to super-resolution baryon painting is using known physical relations between the dark matter and baryonic processes, which are described in the simulation sub-grid models as differential equations (DEs). Solving the equation is equivalent to minimising the difference between the two sides of the equation. Adding this difference to the loss function of a machine learning model lies at the crux of Physics-Informed Neural Networks (PINNs, e.g., Raissi et al., 2019; Cai et al., 2021). ## 5 Conclusions We trained a U-Net convolution neural network (CNN) to paint X-ray surface brightness, projected gas density and spectroscopic-like temperatures onto maps of projected mass from dark-matter-only simulations. * We find that this method works very well, with median fractional errors (on the test sample) of -11.4% for the gas column density, \(-0.79\%\) for spectral-weighted projected temperature and -1.67% for X-ray luminosity. Individual pixels may vary by up to factors of a few, but the distribution over each cluster is remarkably well recovered. * Using just the dark matter mass maps, the model is very successful at reproducing the baryonic structure in clusters undergoing complex mergers, where hydrostatic equilibrium cannot be assumed. It does, however, underpredict the luminosity from low-mass infalling groups, where the effect of non-gravitational heating from shocks is unusually high compared to most of the training sample. This will likely improve with larger training samples that include more mergers. * Despite being trained only to reproduce individual images, the model also reproduces the radial profiles, X-ray luminosity function and gas mass function of galaxy clusters along with almost all their inherent diversity. The model further reproduces not only the scaling relation between the dark matter mass and X-ray luminosity of galaxy clusters, but also the scatter therein. * The predictions work very well for dark matter maps drawn from the DMO simulation, even though they were trained on the FP simulation, where the dark matter structure is slightly different due to baryonic effects. The predicted distribution functions are biased slightly high, following the well-known phenomenon that the DM maps from DMO simulations contain more ultra-dense clumps than their FP counterparts, where baryonic feedback smooths them out. * The model also performs remarkably well on dark matter maps drawn from the FP-2 simulation, whose spatial (mass) resolution is 2 (8) times lower than the training sample. Further degrading the resolution results in over-predicting the X-ray luminosity function, since low-resolution simulations produce more ultra-dense pixels which in higher resolution runs would consist of several, less dense pixels. 
We therefore caution against using models trained at a given spatial resolution on dark matter maps from a simulation whose native spatial resolution is more than 2 times coarser. We conclude that U-nets are a powerful technique for learning the mapping between dark-matter-only (DMO) and full-physics (FP) simulations. The physical processes that alter the observable properties of galaxy clusters are correlated with the detailed structure of the dark matter, and CNNs excel at capturing such spatial correlations. The key predictive features of the dark matter maps are preserved in simulations with slightly lower resolution. This work sets the stage for further research in super-resolution baryon painting, and makes it possible to perform galaxy cluster cosmology with existing _N_-body simulations while accounting for the complex, non-linear effects of baryons on their observable properties.

## 6 Data availability statement

All the data from the IllustrisTNG suite is available at [https://www.tng-project.org/data/](https://www.tng-project.org/data/). The code for creating images from TNG300 and training the U-net, as well as all the trained models mentioned in this paper, can be found at [https://github.com/milchada/MLBaryonPainting](https://github.com/milchada/MLBaryonPainting). This work uses only data in the public domain and is entirely reproducible.

###### Acknowledgements.

We thank the anonymous referee for their very helpful suggestions. UC, JAZ, AB, RPK acknowledge support from the Smithsonian Institution and the Chandra High Resolution Camera Project through NASA contract NAS8-03060. The material presented is based upon work supported by NASA under award No. 80NSSC22K0821. Helpful advice was provided by Cecilia Garraffo and the AstroAI group at the CfA, Camille Avestruz, Francisco (Paco) Villaescusa-Navarro, Antonio Ragagnin, and Joop Schaye.
2309.14488
When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs
The use of machine learning (ML) models to assess and score textual data has become increasingly pervasive in an array of contexts including natural language processing, information retrieval, search and recommendation, and credibility assessment of online content. A significant disruption at the intersection of ML and text are text-generating large-language models such as generative pre-trained transformers (GPTs). We empirically assess the differences in how ML-based scoring models trained on human content assess the quality of content generated by humans versus GPTs. To do so, we propose an analysis framework that encompasses essay scoring ML-models, human and ML-generated essays, and a statistical model that parsimoniously considers the impact of type of respondent, prompt genre, and the ML model used for assessment model. A rich testbed is utilized that encompasses 18,460 human-generated and GPT-based essays. Results of our benchmark analysis reveal that transformer pretrained language models (PLMs) more accurately score human essay quality as compared to CNN/RNN and feature-based ML methods. Interestingly, we find that the transformer PLMs tend to score GPT-generated text 10-15\% higher on average, relative to human-authored documents. Conversely, traditional deep learning and feature-based ML models score human text considerably higher. Further analysis reveals that although the transformer PLMs are exclusively fine-tuned on human text, they more prominently attend to certain tokens appearing only in GPT-generated text, possibly due to familiarity/overlap in pre-training. Our framework and results have implications for text classification settings where automated scoring of text is likely to be disrupted by generative AI.
Marialena Bevilacqua, Kezia Oketch, Ruiyang Qin, Will Stamey, Xinyuan Zhang, Yi Gan, Kai Yang, Ahmed Abbasi
2023-09-25T19:32:18Z
http://arxiv.org/abs/2309.14488v1
When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs ###### Abstract The use of machine learning (ML) models to assess and score textual data has become increasingly pervasive in an array of contexts including natural language processing, information retrieval, search and recommendation, and credibility assessment of online content. A significant disruption at the intersection of ML and text are text-generating large-language models such as generative pre-trained transformers (GPTs). We empirically assess the differences in how ML-based scoring models trained on human content assess the quality of content generated by humans versus GPTs. To do so, we propose an analysis framework that encompasses essay scoring ML-models, human and ML-generated essays, and a statistical model that parsimoniously considers the impact of type of respondent, prompt genre, and the ML model used for assessment model. A rich testbed is utilized that encompasses 18,460 human-generated and GPT-based essays. Results of our benchmark analysis reveal that transformer pretrained language models (PLMs) more accurately score human essay quality as compared to CNN/RNN and feature-based ML methods. Interestingly, we find that the transformer PLMs tend to score GPT-generated text 10-15% higher on average, relative to human-authored documents. Conversely, traditional deep learning and feature-based ML models score human text considerably higher. Further analysis reveals that although the transformer PLMs are exclusively fine-tuned on human text, they more prominently attend to certain tokens appearing only in GPT-generated text, possibly due to familiarity/overlap in pre-training. Our framework and results have implications for text classification settings where automated scoring of text is likely to be disrupted by generative AI. 1 Footnote 1: University of Notre Dame, Indiana, USA {[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]} 2 Footnote 2: Georgia Tech, Atlanta, Georgia, USA {[email protected]} 3 Footnote 3: Shenzhen University, Shenzhen, China {[email protected]} ## 1 Introduction The use of machine learning (ML) models to quantify or "score" textual data has become increasingly pervasive over the past 25-30 years. In natural language processing (NLP), there are a myriad of text sequence classification problems related to categorization of text based on topics, sentiment, affect, and psychometric dimensions [47, 5, 24]. In information retrieval (IR), text-based documents are scored for relevance assessment in various application and system contexts, including search relevance for search engines [72, 76] and recommendation engines used in recommender systems [95, 65]. The uptick in digital content and activity has been accompanied by the rise of illicit digital content. Accordingly, ML-based scoring has also become important for credibility assessment in an array of online settings including detection of web spam, phishing websites, deceptive reviews, and fake news [4, 49]. Transformers [29, 93], and foundational large-language models (LLMs) such as generative pre-trained transformer (GPT) [66, 21], represent a major disruption at the intersection of ML and text, affording unprecedented opportunities for ML-based assessment and generation of text [40, 86]. 
Nonetheless, much of the evidence of LLM performance on traditionally human processes and tasks is anecdotal first-person (experiential) accounts, often disseminated through social media and blog postings. There is a nascent yet emerging body of literature highlighting GPT's performance on various tasks. Some of the most robust early evidence of capabilities comes from assessments such as licensing and graduate coursework exams [45, 85, 80] and other curriculum activities [33]. There remains a need for rigorous empirical and experimental research on the implications of generative AI in a bevy of contexts. The potential for man-machine hybridization recently received scrutiny across an array of activities [37, 15, 75]. Accordingly, _the research objective of this study is to empirically assess the impact of hybrid environments on ML-based text assessment/scoring, where both the assessment and generation of textual content involve a mixture and/or interplay between humans and ML._ Alternatively stated, rather than exploring the effectiveness of ML generated content directly against a human gold standard, as done in emerging literature [33, 85], we explore differences in how ML-based scoring models trained on human content assess the quality of content generated by humans versus GPTs. The specific context we use to explore this gap is automated essay scoring (AES). In AES, traditionally, ML-models are used to score human-generated essays [67, 41]. There are several AES contexts where such ML-based assessments happen. Most notably in educational settings, but also potentially in talent analytics, personnel selection, and job-to-candidate recommender systems [71, 103]. In educational testing, ML-models are used to assess essays in K-12 education and other tests such as the writing portions of the GMAT, GRE, and TOEFL [70, 77]. AES provides a suitable experimental setting due to the abundance of publicly available human-generated essays, expert ratings, and accompanying prompts well-suited for GPT-based essay generation. Further, although there is limited benchmarking of ML methods for AES, the state-of-the-art resembles methods used in related ML-based text scoring research such as sentiment, user experience ratings, psychometrics, and personality [47, 5, 97]. Our three research questions are as follows: * **RQ1:** How effective are state-of-the-art feature-based and deep learning models for AES? * **RQ2:** How do AES models trained on human-generated content rate text generated by GPT models? What is the moderating effect of different document genres on such assessments? * **RQ3:** What linguistic categories and cues are most different for human versus GPT-generated text? To address these questions, we propose a rich analysis framework that encompasses state-of-the-art ML-models for essay scoring, human and ML-generated essays with a prompt engineering protocol for the latter, and a statistical model that holistically considers essay genre types, human versus machine authorship, and the scoring model to parsimoniously infer main effects and interactions. We couple our framework with a rich testbed encompassing 15,437 human-generated and 3,023 GPT-based essays (1,537 GPT-3.5 and 1,486 GPT-4) associated with 68 prompts related to 6 document genres. 
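As a sketch of how such a parsimonious model can be specified, the snippet below fits an ordinary least squares regression of the ML-assigned score on respondent type (human vs. GPT), prompt genre, scoring model, and their two-way interactions using statsmodels. The column names, levels, and synthetic data are illustrative assumptions rather than the exact specification or testbed used in our analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data frame: one row per (essay, scoring model) pair.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "score": rng.normal(3.0, 1.0, n),                                # ML-assigned essay score
    "respondent": rng.choice(["human", "gpt35", "gpt4"], n),         # essay author type
    "genre": rng.choice(["narrative", "argument", "response"], n),   # prompt genre
    "scorer": rng.choice(["roberta", "cnn", "feature_ml"], n),       # assessment model
})

# Main effects plus two-way interactions of respondent type, genre, and scoring model.
fit = smf.ols(
    "score ~ C(respondent) * C(genre) + C(respondent) * C(scorer) + C(genre) * C(scorer)",
    data=df,
).fit()
print(fit.summary())

# Average predicted gap between GPT-4 and human essays, holding genre and scorer fixed.
gap = fit.predict(df.assign(respondent="gpt4")) - fit.predict(df.assign(respondent="human"))
print(gap.mean())
```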
Results of our benchmark analysis (RQ1) reveal that transformer language model-based methods such as BERT and RoBERTa more accurately score human essay quality as compared to CNN/RNN and feature-based ML methods, attaining scoring mean squared errors that are 10-40 percent lower. Interestingly, in regards to RQ2, we find that the transformer language model-based methods also tend to score GPT-generated text 10-15% higher on average, relative to documents authored by humans. This is interesting because the GPT data is out-of-sample for these models trained on human-only text. Conversely, the traditional deep learning and feature-based ML models score human-generated text considerably higher than GPT text. Moreover, these effects are most pronounced for certain types of document genres such as narrative, argument, and response. Further analysis related to RQ3 reveals that the transformer-based scoring models attend to certain tokens appearing in GPT-generated text more prominently. Our main contributions are two-fold. First, we develop an analysis framework for contexts involving automated content assessment and generation mechanisms, including ML scoring models, GPT-generated texts based on prompt design adaptation, and parsimonious statistical models for evaluating such intersections. We intend to make all benchmarking code, analysis models, and prompt design processes publicly available. Second, we use AES to offer various empirical insights, including how best-in-class text scoring methods, based on transformer language models, may score certain genres of GPT-generated content higher even when exclusively trained on human content, possibly due to familiarity. That is, overlap in language modeling data sources and transformer-based attention head mechanisms. Although set in the context of AES, our framework and results have implications for many text classification and IR settings where automated scoring of text/documents is likely to be disrupted by generative AI in the coming years, often in unintended and uninvited ways, including search relevance (e.g., of web pages, blogs, social media content), content recommendation, online credibility assessment, anti-aliasing, and fake news detection, just to name a few. The results of our work may be especially alarming for adversarial settings where generative models could disrupt the traditional profit-formula for return-on-fraud by alleviating barriers to entry for the creation of high-quality automated content. The remainder of this paper is organized as follows. In the ensuing section, we discuss relevant prior work related to AES. We then delve into the effectiveness of generative models such as GPT for human ability tasks, and review the state-of-the-art for ML-based scoring of human-generated content. In Section 3, we describe our analysis framework and testbed. Section 4 presents results for our benchmark evaluation of ML models for AES, as well as our statistical model-based empirical results for how ML scoring models rate human versus GPT text. Section 5 uses text content analysis and language model visualization tools to shed light on the underlying mechanisms driving our results. In Section 6, we offer concluding remarks. ## 2 Related Work In this section we review work on automated essay scoring (AES), large-language models (LLMs) and text quality, and ML methods for scoring text. ### Automated Essay Scoring (AES) Essay assessment entails evaluating quality of textual content generated by human authors. 
It is viewed as a tool for assessing one's ability to retain knowledge, synthesize ideas, interpret data, and express oneself through written language [92, 20]. Automation of essay assessment through AES was motivated by similar objectives as other ML-based text scoring use cases such as topic categorization and sentiment analysis: limited bandwidth and availability of expert raters, large volume of text needing assessment, timeliness, subjectivity of human judges, and so forth. The traditional process of scoring typically relies on the subjective judgement of fallible human assessors who are overworked, and often underpaid[78, 59]. Moreover, the sheer volume of essays in these assessments has led to the implementation of AES to handle the workload as a co-rater, alleviating the sole reliance on human assessors[30]. Although traditional AES approaches pre-date the rise of ML [58], the underlying computational technology has evolved from rule-based scoring to ML models [77], including feature-based and deep learning ML models. Examples of popular AES systems are Project Essay Grader(tm) (PEG), Intelligent Essay Assessor(tm) (IEA), and E-rater(r)[70]. In educational contexts, Massive Open Online Course (MOOC) organizations such as edX, MIT and Harvard's MOOC federation, utilize automated scoring [13]. Beyond the practical constraints necessitating AES, the reliability of AES is also often deemed on par with or better than human assessments due to training on a large set of documents that adds consistency in ratings relative to sole reliance on a large group of individual assessors that may yield high variance [30]. Despite the widespread growth and adoption of AES, it is not without its detractors. Some are hesitant to incorporate AES systems into their organizational settings due to the differences in scores that machines may produce [19]. In regards to MOOCs, in contrast to edX's extensive use of AES, Coursera, a Stanford-based MOOC, adheres to the traditional human assessment of essays [13]. Presently, benchmarking of ML methods for AES is limited in terms of the methods evaluated, the testbeds employed, and the evaluation metrics utilized [41, 67]. Our key research gap is as follows. _Given our interest in exploring the interplay between ML-based assessment and generative AI versus human text, an important prerequisite research gap becomes performing an in-depth benchmark analysis of the performance of state-of-the-art ML methods for AES on multiple testbeds._ ### Large Language Models and Text Quality The NLP space has made tremendous strides in the word embedding and language modeling space in the past 10 years, from static embeddings to contextualized embeddings and transformer-based large language models (LLM) [54, 21, 8]. Perhaps the most prominently featured and discussed amongst these LLMs are GPTs such as ChatGPT and other transformer-based models such as Llama and PaLM-2 [57, 88, 8]. While a majority of research concerning the implications of such models is still to be conducted, initial success is already seen in such LLMs' capabilities on knowledge tasks. In taking the United States Medical Licensing Exam (USMLE), which consists of three separate tests, ChatGPT performed "at or near the passing threshold" and displayed both consistency among its responses and comprehension of the presented topic [45]. More recent LLM benchmarks place their performance on such exams as being even higher [80]. 
Similar findings concerning ChatGPT's abilities are also seen in its performance on an MBA course's final exam [85]. ChatGPT3 was successful in passing the exam with a B/B- grade due to its Operations Management knowledge and explanations, in addition to its ability to correct itself after receiving human hints. While it was successful in passing, ChatGPT did not receive a higher grade due to its simple mathematical mistakes and inability to handle advanced process analysis questions.

Conversely, the Codex LLM designed for code-based language modeling is capable of generating and solving university-level math problems [33]. ChatGPT has also seen some initial success in taking law school exams. Researchers at the University of Minnesota Law School discovered that ChatGPT achieved a passing grade on four different course exams, and may be considered a "mediocre" law student that could attain a JD degree from a reputable law school [25]. Moreover, the PaLM-2 model can pass written proficiency exams for Chinese, Japanese, Italian, French, and Spanish [8]. ChatGPT was even recently used to assess the quality of essays, though its assessment performance lagged behind that of feature-based machine learning methods [55].

Outside of formal examination contexts, much of the evidence and discussion of generative AI's ability to produce quality content in knowledge tasks, relative to human-generated content, remains anecdotal and underexplored. There is ample conjecture about the potential impact, opportunities, and challenges associated with LLMs in an array of contexts and occupations, including within academia [51, 44, 48]. On the positive side, some believe LLMs may help facilitate interactive learning [12].

Text quality is a critical component of AES, which measures an essay's overall success in delivering its intended message. Text quality has been the focus of much research in various fields including NLP. Researchers have explored the factors that contribute to high-quality text, such as coherence, cohesion, lexical diversity, and grammatical accuracy [27]. They have also investigated the cognitive processes involved in producing high-quality text, such as planning, revising, and self-evaluation [87]. In the context of automated assessment and content generation, research has focused on the effectiveness of automated tools for evaluating and generating high-quality text. ML algorithms have been used to evaluate the quality of student writing and provide feedback to learners [102, 17]. Similarly, NLP techniques have been used to generate text that is stylistically consistent with a particular writer's voice [36].

The dynamics of text quality assessment and generation become more crucial when introducing ML-generated content into a human-ML generation-assessment process, particularly with the advent of sophisticated models such as GPTs. Although such language models are equipped with the skills to author logical and semantically accurate writing, they may struggle to create compelling, subtle, or enthusiastic narratives [28, 66]. Human-authored essays, on the contrary, are inclined to have these humanistic tendencies because they are produced by people who are more knowledgeable about and experientially attached to the subject matter, or who, at the very least, possess personal views and opinions. The main research gap we explore is the following.
_Given the nascent and emergent nature of studies that evaluate the capabilities of generative LLMs such as GPTs, and the confluence of ML-based scoring/assessments, there remains a need for studies that offer in-depth empirical evidence of the effectiveness of such LLMs in terms of quality of generated text across an array of prompt/genre types._

### Machine Learning (ML) Methods for Automated Assessment of Text

In this section, we review ML methods commonly used to score sequences of text, with specific emphasis on literature related to assessment of text quality and/or traits manifesting in (human) user-generated content. Relevant prior methods can be broadly grouped into three distinct categories: feature-based methods, deep learning convolutional and recurrent neural networks (CNN/RNN), and transformer-based pretrained language models (PLMs). Details are as follows.

#### 2.3.1 Feature-based ML

Until recent years, the dominant mode for automated NLP research on text categorization problems was manual feature engineering. These features were typically combined with supervised ML classifiers such as k-nearest neighbors (KNN), support vector machines (SVMs), and gradient boosted trees [64, 83]. For instance, predefined features such as the number of words in a document, average word length, and number of spelling errors were input to an ML algorithm [7]. The effectiveness of these models is heavily influenced by the choice and quality of features used for training [99, 6, 90, 84].

Feature-based ML models, such as KNN, SVR, and XGBoost, used in conjunction with lexicons such as the Linguistic Inquiry and Word Count (LIWC), other domain-specific lexicons, and lexical measures related to word, sentence, and paragraph composition, have shown promise in several text quality and trait scoring tasks [97, 41]. A study by [18] involved transforming essays into a vector space model and employing TF-IDF (term frequency-inverse document frequency) and information gain for feature selection from words, phrases, and arguments. Training the KNN algorithm with the different feature selection methods led to a precision rate exceeding 76 percent on the CLEC corpus. Approaching automated scoring as a regression task, [26] proposed an SVR-based model which combines histogram intersection string kernels and bag-of-super-word-embeddings features. They attained better performance on the AES task compared to other feature-based ML approaches. SVM and SVR have also been used in automated scoring studies to quantify scores for content and linguistic features input into the models [60, 34, 83]. Similarly, XGBoost, an eXtreme Gradient Boosting algorithm, has garnered significant attention due to its exceptional performance in various data-driven tasks [23]. In contexts relevant to our study, XGBoost attained high accuracies for scoring text quality [73] and traits manifesting in the text [83].

#### 2.3.2 Deep learning CNN/RNN Methods

Beyond the feature-based machine learning approach, deep learning techniques have emerged as a new paradigm in automated essay scoring. While traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays [90, 7], deep learning techniques use end-to-end representation learning [6, 84, 39, 91]. Recent studies have focused on neural network approaches because of their ability to garner enhanced text assessment performance [79], giving better results compared to statistical models with handcrafted features [31].
Both Convolutional Neural Networks (CNN) [43] and recurrent neural networks (RNN) such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks have been used to automatically score input text (including quality and related trait scoring tasks), typically in conjunction with word embeddings [101, 97]. For instance, several studies have employed single-layer LSTMs over word embeddings for AES [6, 84]. Similarly, [16] demonstrated that using pre-trained word embeddings along with GRUs enhanced the model's ability to understand the semantic nuances of essays, leading to improved AES accuracy. In regards to CNNs, 1D convolution over embedding vectors of text token sequences has been employed in a myriad of studies [31, 101, 52].

#### 2.3.3 Transformer-based Pre-trained Language Models (PLMs)

Text scoring and sequence classification has seen significant advancement with the emergence of transformer-based pre-trained language models (PLMs) such as BERT, RoBERTa, and GPT. These models have revolutionized various NLP tasks and have shown promise in enhancing the accuracy and efficiency of automated assessment of text quality and traits.

BERT (Bidirectional Encoder Representations from Transformers), a pretrained language model, has been widely adopted in various NLP applications due to its ability to capture contextual information from both left and right contexts in a sentence [29, 68]. It has achieved state-of-the-art results in various NLP tasks [42]. BERT has been applied to AES [91], analysis of language competencies [74], automated short-answer assessment [82], and inference of traits from textual data [27], providing state-of-the-art performance in many cases. BERT-based assessment models excel in handling complex literary devices and where a nuanced understanding of the topic is beneficial [68].

RoBERTa (A Robustly Optimized BERT Pretraining Approach) [50] builds upon the architecture of BERT by further optimizing the pre-training process via greater hyperparameter tuning, inclusion of additional training data, and more parameters [50]. For instance, BERT is pre-trained on roughly 3.3 billion tokens from BookCorpus and Wikipedia. RoBERTa also uses BookCorpus and Wikipedia, but adds Common Crawl news (CC-News), OpenWebText, and Stories, resulting in over 30 billion tokens used for pre-training [50]. Both BERT and RoBERTa have performed well on text quality assessment and trait scoring tasks due to their ability to capture rich contextual information and linguistic nuances present in user-generated text [32, 38, 97]. The key research gap we tackle is as follows.

_With advancements in generative capabilities due to LLMs, in the context involving automated scoring of text using ML, it is unclear how various types of ML assessment methods designed for and trained on human content might assess GPT-based text; and how commonalities/differences between assessment models and those used to generate text might factor into the human versus GPT scoring dynamics._

## 3 Proposed Research Design and Analysis Framework

Guided by the research gaps identified and our three research questions, Figure 1 presents an overview of our research design. As shown in the figure, we wish to use ML-based automated assessment of text (top center) as the grounds for exploring the interplay between human and LLM-generated text (top right and left in Figure 1, respectively). Human text encompasses many stylistic and trait-based tendencies.
These include digital traces of authors' syntactic, semantic, and lexical stylistic preferences and proclivities [2], as well as their choices regarding conveyance of emotions and opinions [47, 24] in authored texts. Moreover, human text also engenders variability in literary capabilities and likelihood of misspellings and grammatical mistakes [5]. Furthermore, as noted in our review of ML methods for scoring text, human texts also encompass personality traces [97, 98, 52]. Conversely, text generated by LLMs is likely to encompass greater stylistic uniformity, consistency in literary capabilities, less emotion, and fewer syntactic mistakes [66, 28]. Moreover, hallucinations remain a concern [10], and persona-calibrated LLM text generation remains an open avenue of inquiry. The contrast between human and LLM generated text is further moderated by the use of ML to assess such text. As noted in the introduction, we imagine generative AI content being injected into a myriad of processes and scenarios where ML models are currently scoring purportedly (human) user-generated content.

In order to answer our three research questions, using AES as our focal testbed and evaluation setting, we propose automated assessment benchmarking, quality scoring, and content analysis (top center of Figure 1). We begin by benchmarking quality of scores for ML models (RQ1), and then use statistical models to examine the interplay between human and LLM text using different ML models for scoring, across document genres and prompt types (RQ2). Finally, we employ information theoretic content analysis methods to shed light on the underlying mechanisms (RQ3) that may be responsible for observed effects from our statistical models.

### Analysis Framework

Figure 2 presents our proposed analysis framework. Consistent with prior surveys of state-of-the-art methods [41, 67, 97], we incorporated three types of ML methods for scoring/assessing text: transformer-based deep learning, traditional CNN/LSTM deep learning methods, and feature-based ML techniques. Two types of testbeds were incorporated: existing data sets comprising human-written essays and an LLM text testbed generated using ChatGPT. The ML assessment models were applied to the two types of text, resulting in three types of analysis corresponding to our three research questions: benchmarking, statistical analysis, and content analysis. Details about the proposed Analysis Framework are as follows.

### Testbed Overview

#### 3.2.1 Human Text Testbeds

The human text testbed included user generated text from two corpora: (1) the Automated Student Assessment Prize (ASAP); (2) the Cambridge Learner Corpus-First Certificate in English (CLC-FCE). ASAP1 was developed by the Hewlett Foundation with the goal of advancing the state-of-the-art for AES by consolidating and evaluating ML-based AES innovations. As depicted in the top-half of Table 1, ASAP is comprised of eight essay sets. Each set varies based on the genre/prompt type of essay, writer grade level, dataset size, average length of essays in words, and quality score range. In total, we included the 12,977 essays from the training set segment of ASAP, because these are the ones comprising score labels. In regards to the prompts or genres of the essays, ASAP essay sets encompass three genres: argumentative, response, and narrative. The language levels of these writers range from US grades seven to ten. ASAP is considered a large dataset due to the number of essays per prompt and the resulting total number of essays [41].
The score range differs among the eight essay sets; some ranges are narrower, such as essay sets three through six, whereas others are wider as seen in essay sets seven and eight. Since its inception, the ASAP dataset has been a commonly used corpus for holistic scoring, in which the quality of an essay is represented by one score [41]. Due to the creation and widespread utilization of this dataset, a number of advancements have been made in ML-based AES techniques and models [84, 62, 39]. Accordingly, we incorporate it for our human essay testbed.

Figure 1: Overview of Research Design

CLC-FCE emerged from an amalgamation of the CLC project conducted by Cambridge University Press and Cambridge Assessment, as well as the First Certificate in English (FCE) exam. The CLC portion corresponds to essay sets 1-4 in Table 1. The testbed also contains prompts and responses from the FCE, which is used to evaluate one's English at a higher level of learning. For these, authors must complete responses to one of two sets of prompts in which they are asked to write either an article, letter, report or short story. These are denoted by the last two rows in Table 1 corresponding to essay sets 5a and 5b. In total, CLC-FCE includes 2,460 texts spanning five different genres of prompts: argumentative, comment, letter, narrative, and suggestion. Responses to the prompts are between 200 and 400 words long, and scores ranging from 0-5 along a continuous scale are provided for each task. CLC-FCE has also been used extensively in prior AES benchmarking studies [67, 41], thereby warranting inclusion in our study.

Prior information retrieval and information systems literature has underscored the importance of document genres [100, 9, 22]. Genres manifesting in user-generated text provide insight into actions and intentions [100, 9]. The ability to generate argumentative versus commentary or suggestion styles of text has important implications for sense-making and computational intelligence [1]. In AES, different prompt types elicit content corresponding to genres of text. In total, the human text testbed was comprised of 68 prompts related to the aforementioned six text genres. Table 2 provides examples of the six genres of prompts manifesting across the ASAP and CLC-FCE testbeds.

Figure 2: Overview of Analysis Framework

#### 3.2.2 LLM Text Testbed

Consistent with prior studies on the use of ChatGPT, a prompt design and engineering process was used to develop the LLM text testbed [104]. A prompt design and engineering team comprised of six members went through a series of meetings to discuss ideal strategies for prompting. Initially, the prompters took an inventory of the full set of prompts spanning the ASAP and CLC-FCE human essay testbeds. This resulted in a total of 68 unique prompt IDs associated with the six genres depicted in Table 2. Given some of the CLC-FCE prompts, namely 5a and 5b in Table 1, provide authors with up to 5 choices, the team decided it would be best to make each choice a separate prompt. After a few meetings, the team finalized the ChatGPT prompts with the goal of making them as similar to the human prompts as possible 2. This expansion resulted in approximately 150 prompts associated with the 68 prompt IDs. The team decided to use zero-shot learning to better align the LLM task with the human text generation process. Hence, each prompt was provided to ChatGPT 10 times, resulting in 1537 total documents in the LLM text testbed.
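To make the generation protocol concrete, the following is a minimal sketch of how each prompt could be submitted to ChatGPT repeatedly in a zero-shot fashion. It is an illustration rather than the team's actual script: the client library version, model identifier, and file names shown here are assumptions.

```python
# Minimal sketch of the zero-shot generation protocol (illustrative only).
# Assumes the openai Python client (v1 interface) and a hypothetical
# prompts.json file mapping prompt IDs to prompt text.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("prompts.json") as f:
    prompts = json.load(f)  # e.g., {"asap_2": "...", "fce_58": "...", ...}

generations = []
for prompt_id, prompt_text in prompts.items():
    for run in range(10):  # each prompt submitted 10 times (zero-shot)
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the GPT-4 pass would swap the model name
            messages=[{"role": "user", "content": prompt_text}],
        )
        generations.append({
            "prompt_id": prompt_id,
            "run": run,
            "text": response.choices[0].message.content,
        })

with open("gpt_essays.json", "w") as f:
    json.dump(generations, f, indent=2)
```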
Some of the prompts required respondents to share opinions and experiences. ChatGPT often provides a disclaimer before generating responses to such prompts. The team elected to exclude any disclaimer text. These 1537 texts were generated using GPT-3.5. However, a follow-up collection constructed a second GPT testbed using the same prompts and procedures, but with GPT-4. Notably, ChatGPT powered by GPT-4 elected to not respond to two of the 68 prompts on account of "I'm sorry for any misunderstanding, but as of my training data cut-off in September 2021, there is no book titled "[Book title]" by [Publisher] specifically available or documented." Hence, the GPT-4 version of the ChatGPT testbed encompassed 66 prompts and 1486 total documents.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline **Dataset** & **Essay Set** & **Type** & **Grade Level** & **Dataset Size (Essays)** & **Average Length (Words)** & **Score Range** \\
\hline ASAP & 1 & ARG & 8 & 1,785 & 350 & 2-12 \\
\hline ASAP & 2 & ARG & 10 & 1,800 & 350 & NA \\
\hline ASAP & 3 & RESP & 10 & 1,728 & 150 & 0-3 \\
\hline ASAP & 4 & RESP & 10 & 1,772 & 150 & 0-3 \\
\hline ASAP & 5 & RESP & 8 & 1,805 & 150 & 0-4 \\
\hline ASAP & 6 & RESP & 10 & 1,800 & 150 & 0-4 \\
\hline ASAP & 7 & NARR & 10 & 1,730 & 250 & 0-30 \\
\hline ASAP & 8 & NARR & 7 & 918 & 650 & 0-60 \\
\hline FCE & 1 & LETT & NA & 10 & 200-400 & 0-40 \\
\hline FCE & 2 & ARG, COMM, NARR, SUGG & NA & 10 & 200-400 & 0-5 \\
\hline FCE & 3 & ARG, COMM, LETT, NARR & NA & 10 & 200-400 & 0-5 \\
\hline FCE & 4 & ARG, COMM, LETT, NARR & NA & 10 & 200-400 & 0-5 \\
\hline FCE & 5a & ARG, COMM, LETT, SUGG & NA & 10 & 200-400 & 0-5 \\
\hline FCE & 5b & ARG, COMM, LETT & NA & 10 & 200-400 & 0-5 \\
\hline
\end{tabular}
\end{table}
Table 1: Overview of Human Text Testbeds

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline **Dataset** & **Prompt ID** & **Genre** & **Prompt** \\
\hline AES & 2 & ARG & "All of us can think of a book that we hope none of our children or any other children have taken off the shelf. But if I have the right to remove that book from the shelf – that work I abhor – then you also have exactly the same right and so does everyone else. And then we have no books left on the shelf for any of us." –Katherine Paterson, Author. Write a persuasive essay to a newspaper reflecting your views on censorship in libraries. Do you believe that certain materials, such as books, music, movies, magazines, etc., should be removed from the shelves if they are found offensive? Support your position with convincing arguments from your own experience, observations, and/or reading. \\
\hline FCE & 58 & COMM & You have had a class discussion on shopping. Your teacher has now asked you to write a composition, giving your opinions on the following statement: Shops should be open 24 hours a day, seven days a week. \\
\hline FCE & 45 & LETT & You recently spent two days at an annual international arts festival. Most of the events you went to were good, but you feel that the festival could be even better next year. \\
\hline AES & 8 & NARR & We all understand the benefits of laughter. For example, someone once said, "Laughter is the shortest distance between two people." Many other people believe that laughter is an important part of any relationship. Tell a true story in which laughter was one element or part. \\
\hline AES & 5 & RESP & Describe the mood created by the author in the memoir. Support your answer with relevant and specific information from the memoir. \\
\hline FCE & 22 & SUGG & A group of American students has just arrived in your town and the group leader has asked for information on an interesting building to visit. Write a report for the group leader, describing one building and giving reasons for your recommendation. \\
\hline
\end{tabular}
\end{table}
Table 2: Example Prompt Types Associated with Six Genres in Human Text Testbeds

### ML Models for Assessment

As described in our review of methods used for automated assessment, prior work has mostly leveraged three types of ML methods [41, 67]: feature-based classifiers in conjunction with manually crafted, domain-specific features; word embeddings coupled with CNN/RNN models; and transformer-based PLMs fine-tuned on training data. Accordingly, representative methods from all three categories were incorporated in our set of ML models used for benchmarking and ML predictions input into the statistical analysis model. Details are as follows.

The feature-based ML methods were ones used extensively in prior text classification work, including research that examined textual traits of individuals [97], as well as text quality [67]. In particular, three models were incorporated: XGBoost [83, 73], SVM [26, 34], and KNN [18, 34]. Following prior work, XGBoost [83], SVM [34] and KNN [34] all used Linguistic Inquiry and Word Count (LIWC) [61], a series of language-oriented lexicons, and lexical measures related to sentence/word lengths, number of sentences, etc., as input features. The occurrences of lexicon tags and individual items were included as binary presence vectors (i.e., "1" if present in that text, "0" if absent). Hyperparameters were tuned on the training data. For XGBoost, the XGB tree-based regressor model was used to run each iteration, with mean squared error as the loss function, a learning rate of 0.1, gamma set to 0, and a max depth of 3 to avoid over-fitting on lexicon items/tokens occurring highly infrequently. SVM and KNN were run using the scikit-learn library. For SVM, we used support vector regression (SVR) with an RBF kernel, the regularization parameter C set to 1, the epsilon parameter set to 1, and no limit on maximum iterations. For k-nearest neighbors, the KNN regressor was employed using Euclidean distance, with the number of neighbors set to 5. A minimal configuration sketch for these feature-based regressors is shown below, after the deep learning and transformer model overviews.

Standard deep learning CNN/RNN models, in concert with word embeddings, are used extensively for automated scoring of quality and/or traits in text [6, 84, 31, 16, 101, 52]. The CNN [101] model used a static word embedding as input, run through a single 1D convolutional layer, followed by max pooling and fully connected (dense) layers. The input static word embedding used was the Word2Vec word embedding with 100 dimensions. The CNN model was trained using a learning rate of 0.0005, a batch size of 300, and 20 epochs. The GRU model [52] used the same input word embedding as the CNN, and featured a two-layer bi-directional GRU followed by a batch normalization layer and fully connected (dense) layers.

For transformer-based PLMs, we incorporated BERT and RoBERTa [29, 50, 97, 74]. Fine-tuned transformer models, as well as the pre-trained models' embeddings, and zero-shot transformer LLM assessments, have been used in an array of text scoring research [97, 74, 96].
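Returning to the feature-based configurations described above, the following is a minimal sketch of how the three regressors could be instantiated with the stated hyperparameters. It is illustrative only: the feature files and evaluation loop are assumptions, and the CNN/GRU and transformer models are trained separately.

```python
# Minimal sketch of the feature-based regressor configurations (SVR, KNN,
# XGBoost over LIWC/lexical features). The feature matrix X and 0-1 quality
# scores y are assumed to be precomputed; file names are hypothetical.
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X = np.load("liwc_lexical_features.npy")   # hypothetical precomputed features
y = np.load("essay_scores.npy")            # gold-standard scores scaled to [0, 1]

models = {
    # RBF-kernel support vector regression with C = 1 and epsilon = 1
    "SVR": SVR(kernel="rbf", C=1.0, epsilon=1.0, max_iter=-1),
    # k-nearest neighbors regressor, k = 5, Euclidean distance
    "KNN": KNeighborsRegressor(n_neighbors=5, metric="euclidean"),
    # gradient-boosted tree regressor with squared-error loss,
    # learning rate 0.1, gamma 0, and max depth 3 to limit over-fitting
    "XGB": XGBRegressor(objective="reg:squarederror",
                        learning_rate=0.1, gamma=0, max_depth=3),
}

for name, model in models.items():
    # 5-fold cross-validation, reporting mean squared error
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: MSE = {mse:.4f}")
```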
Transformer-based PLMs such as BERT and RoBERTa have attained state-of-the-art text sequence classification performance for an array of tasks including automated assessment of text quality and traits [74, 91, 82, 32, 38]. The two transformer PLMs were trained as follows. The BERT [29] base model, with 110M parameters, and the RoBERTa [50] base model, with 125M parameters, were used. For both models, further fine-tuning on the AES training data was performed using 5-fold cross validation as done for the other models. Moreover, hyperparameters were tuned to balance under- versus over-fitting by separating a portion of the training data into a separate validation set. Both models were run with learning rates set to 0.0005, and batch sizes of 300. In regards to the number of epochs, BERT was run for 30 epochs whereas RoBERTa was run for 20. The outputs of BERT and RoBERTa were 768-dimensional embeddings. We input these into a batch normalization layer and then a fully connected (dense) layer and output layer to get the prediction scores.

### Benchmarking

Consistent with our first research question (and pursuant to our first research gap), the goal of the benchmark analysis depicted in the center of Figure 2 was to assess the effectiveness of ML methods for AES on human-generated text. Consistent with prior work [41, 67], two sets of metrics were employed. The first set comprised error metrics: mean squared error (MSE) and mean absolute error (MAE). The second set comprised agreement/correlational metrics: quadratic weighted kappa (QWK), Pearson correlation coefficient (PCC), and Spearman rank correlation (SRC).

### Statistical Analysis

Consistent with RQ2, the goal of our statistical analysis was to parsimoniously examine the interplay between text genres, ML models used for assessment, and human versus GPT respondents (see center of Figure 2). We employ a mixed-effects model, taking Prompt Type, Prompt Respondent, and the ML model as the primary fixed effects and examining the possible two-way and three-way interactions between them. GPT-generated texts tend to differ from human ones in response length on a given prompt, with GPT tending to be more verbose (i.e., lengthier). To account for this, the length of a generated text in words was included as a control variable in the ANOVA. We also controlled for the testbeds associated with each prompt (i.e., ASAP versus CLC-FCE). Overall, the factorial structure of the model was as follows:

\[Y=A/S+A+B+C+D+E+A\times B+A\times C+B\times C+A\times B\times C+\epsilon \tag{1}\]

Dependent variable \(Y\) corresponds to the predicted score. \(S\) stands for the 66 unique prompts, and \(A\) stands for the prompt type. \(B\) signifies the respondent type, while \(C\) denotes the ML model. Considering the difference in answer length between human and ChatGPT generated text, we introduce text length, \(D\), as a control variable. Finally, we include testbed, \(E\), as a control variable to account for the two sources of prompts: AES and FCE. Additionally, the model includes a random intercept to account for the random effect of different prompts nested within each prompt type.

### Content Analysis

The purpose of the content analysis is, as noted in RQ3, to compare and contrast the linguistic characteristics of GPT versus human-generated text. Our content analysis explores this question in two ways. First, we leverage the idea of parallel representations [3, 5].
Guided by the intuition that although text is one dimensional, language is multi-dimensional - spanning semantics (e.g., word senses, topics), sentiment, affect, psychological, pragmatic, syntactic, and stylistic elements - parallel representations attempt to capture the richness of language in 1D text. Table 3 shows examples of parallel representations, where bolded tokens for parallel representations signify differences relative to the primary 1D word representation depicted in the first row of the table. As shown in the table, only some of the tokens in parallel representations convey new/different information, such as word sense, parts-of-speech, sentiment polarity, affect, and mapping to tags in different lexicons, including the linguistic inquiry word count (LIWC). In prior text classification work, univariate and multivariate measures of information or correlation have been used for feature selection of tokens [3] for classifying sentiment, or to train embeddings [5] for detecting psychometrics. Here, we use the same idea, but with one important difference. Our binary class label is whether the text was authored by a human or GPT respondent. And we wish to use parallel representations to see how the expressive power of human generated text differs from that of GPT text, across the linguistic representations depicted in Table 3. The idea being that higher expressive power connotes greater linguistic differences between human and GPT generated text. Table 3 delineates the comprehensive parallel representations generated for the sample input text, "It was so hot outside, it was like the Sahara desert. I got out of the car with a huge grin on my face." These representations encompass word-level tokens, named entities, hypernyms, and domain-specific lexicons. These elements are strategically employed to capture topical nuances at a macroscopic level, thereby facilitating the identification of underlying patterns and structures even in data-limited scenarios. For example, named entities, extracted via Stanford CoreNLP [53], amalgamate information at the granularity of persons, places, and organizations. Hypernyms, derived from WordNet's hierarchical structure [35], enable the aggregation of entities based on "is-a" relationships. Domain lexicons, curated through rigorous data domain analysis [14], enrich the semantic layer by linking terms like "face" to domain-specific labels such as "ANATOMY." Crucially, the representations in Table 3 maintain consistent lengths and are index-aligned. This alignment imbues them with a "parallel" nature, thereby facilitating the fusion and correlation of features. For instance, word tokens can be amalgamated with sense tags to construct a "Word & Sense" representation, which leverages WordNet for enhanced word sense disambiguation. Similarly, a "Word & NE" representation can be formulated by merging words with named entities, thereby offering multiple levels of semantic granularity. Beyond these, sentiment and affective states serve as valuable supplementary representations. Sentiment analysis enables the quantification of user subjectivity and emotional tone, with words mapped to their corresponding sentiment scores as per SentiWordNet [11]. These scores can be categorized into high, medium, or low bins, culminating in a nuanced set of nine possible sentiment tags. For example, the term "hot" exhibits low negative polarity. 
Affective categories, derived from WordNet Affect [81], further enrich the representation, as evidenced by the mapping of the word "got" to the "SURPRISE" category in Table 3. Syntactic representations incorporated include parts-of-speech (POS), Word & POS, misspellings, and hapax legomena. Misspellings introduce sparsity and noise, particularly in user-generated content. To mitigate this, a spellchecking algorithm with support for word exclusions is employed. Given that the frequency of misspellings serves as a critical psychometric indicator, this representation is included. As illustrated in Table 3, the misspelled term "sahara" (which should be capitalized) is corrected across all representations and tagged as a MISSPELLING. Hapax legomena are utilized to address sparsity arising from singleton words in the training set, and are tagged with a DIS label [63]. The amalgamation of words with their corresponding POS tags serves as an additional disambiguation layer.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{256.1pt}|}
\hline **Representation** & **Example** \\
\hline Word & it was so hot outside, it was like the sahara desert. i got out of the car with a huge grin on my face. \\
\hline Word \& Sense & it was so **hot\_03 outside\_09**, it was **like\_02** the **sahara\_01 desert\_01**. i **got\_01** out of the **car\_01** with a **huge\_01 grin\_01** on my **face\_04**. \\
\hline Word \& POS & **it\_PRON was\_AUX so\_ADV hot\_ADJ outside\_ADV \_PUNCT it\_PRON was\_AUX like\_ADP the\_DET sahara\_PROPN desert\_NOUN \_PUNCT i\_PRON got\_VERB out\_ADP of\_ADP the\_DET car\_NOUN with\_ADP a\_DET huge\_ADJ grin\_NOUN on\_ADP my\_PRON face\_NOUN \_PUNCT** \\
\hline Word \& NE & it was so hot outside, it was like the **sahara\_LOC** desert. i got out of the car with a huge grin on my face. \\
\hline Hypernym & it was so hot outside, it was **DESIRE** the sahara **BIOME**. i **CHANGE\_STATE** out of the **MOTOR\_VEHICLE** with a huge **FACIAL\_EXPRESSION** on my **SURFACE**. \\
\hline Named Entities & it was so hot outside, it was like the **LOC** desert. i got out of the car with a huge grin on my face. \\
\hline Domain Lexicons & it was so hot outside, it was like the sahara desert. i got out of the car with a huge grin on my **ANATOMY**. \\
\hline Sentiment & it was so **LPOSLNEG LPOSLNEG**, it was **LPOSLNEG** the **LPOSLNEG**. i **LPOSLNEG** out of the **LPOSLNEG** with a **LPOSLNEG LPOSLNEG** on my **LPOSLNEG**. \\
\hline Affect & it was so hot outside, it was like the sahara desert. i **SURPRISE** out of the car with a huge grin on my face. \\
\hline POS & **PRON AUX ADV ADJ ADV PUNCT PRON AUX ADP DET PROPN NOUN PUNCT PRON VERB ADP ADP DET NOUN ADP DET ADJ NOUN ADP PRON NOUN PUNCT** \\
\hline MISSPELLING & it was so hot outside, it was like the **MISSPELLING** desert. i got out of the car with a huge grin on my face. \\
\hline Legomena & it was so hot outside, it was like the **DIS** desert. i got out of the car with a huge grin on my face. \\
\hline
\end{tabular}
\end{table}
Table 3: Illustration of select parallel representations for an example sentence

To utilize the representations, we calculate a weight [5] for each token in each representation. After obtaining this weight for each token, we use a threshold to filter out the tokens providing limited additional information (i.e., those below the threshold).
The expressive power of that representation for differentiating human and GPT text is calculated as the ratio of tokens above the threshold relative to the total number of tokens. More formally, we quantify the linguistic expressive power of text using parallel representations as follows. As a first step, the weight of every token across parallel representations is calculated. Given the set of \(m\) representations \(R=\left\{r_{1},r_{2},...r_{m}\right\}\), where \(r_{j}\) denotes any parallel representation, we extract all tokens (i.e., 1-gram features). Any element \(f_{ij}\in r_{j}\) represents the \(i^{th}\) unigram/token feature for representation \(r_{j}\). The initial weight of \(f_{ij}\) is calculated as [5]:

\[w\left(f_{ij}\right)=\max_{c_{a},c_{b}}\left(p\left(f_{ij}\mid c_{a}\right) \log\left(\frac{p\left(f_{ij}\mid c_{a}\right)}{p\left(f_{ij}\mid c_{b}\right) }\right)\right)+s\left(f_{ij}\right), \tag{2}\]

where \(c_{a}\) and \(c_{b}\) are among the set of \(C\) class labels with \(c_{a}\neq c_{b}\), and the function \(s\) is the mean semantic orientation score across all \(v\) token-senses, computed from the difference between the positive and negative polarity scores for each sense \(q\) of token \(f_{ij}\) in SentiWordNet:

\[s(f_{ij})=\sum_{q=1}^{v}\frac{pos(f_{ij},q)-neg(f_{ij},q)}{v} \tag{3}\]

The first part of the weighting equation considers the discriminatory potential of the feature based on its log-likelihood ratio, whereas the second part factors in the semantic orientation to ensure that features with opposing orientation (e.g., "like" versus "don't like") are differentiated in terms of overall weights. Next, in order to capture the unique information for parallel representations beyond word, we adjust their weights accordingly. Let us assume that for any representation \(r_{j}\) in \(R\), \(j=1\) denotes the word representation and \(j>1\) signifies other parallel representations. For each feature token \(f_{ij}\) in \(j>1\), we compute:

\[p\left(f_{ij}\right)=\begin{cases}1,&\text{if }w\left(f_{ij}\right)>t_{w} \wedge\max_{f_{uv}}\left(\rho(f_{ij},f_{uv})\right)\leq t_{c}\\ 0,&\text{otherwise}\end{cases} \tag{4}\]

where \(t_{w}\) and \(t_{c}\) are pre-defined thresholds for weight and correlation, respectively, \(\rho\) is the Pearson correlation coefficient, \(u\) is any token in representation \(v\), and \(j\neq v\). Finally, we define the expressive power of any representation \(r_{j}\) for \(j>1\) as:

\[e\left(r_{j}\right)=\sum_{i}\frac{p\left(f_{ij}\right)z_{ij}}{\sum_{i}z_{ij}} \tag{5}\]

where \(z_{ij}\) is the total number of occurrences of \(f_{ij}\) in the data corpus, and hence, each \(e\left(r_{j}\right)\) is a value between 0 and 1 denoting the proportion of unique token occurrences of that representation in the corpus that contribute additional information atop the baseline word representation. Larger values denote greater differences in linguistic characteristics across class labels \(c_{a}\) versus \(c_{b}\). In our subsequent content analysis, we assess the expressive power of GPT versus human text, while including differences between other human demographic classes such as age (older versus younger authors) and gender (male versus female authors) as reference groups for comparison.

As an additional content analysis, we leverage advancements in Natural Language Processing to visualize and analyze attention mechanisms in transformer-based models [56].
Specifically, we extend the work of bertviz [94] to compare parallel token-based representation weights, denoted as \(w(f_{ij})\), against the token attention weights generated by the transformer's attention layers. We employ bertViz's multiple views, such as the Attention-Head View and Model View, to visualize these comparisons. While bertViz is designed to visualize individual attention heads, which helps illuminate how specific tokens are processed by transformer-based models, it does not directly reflect the overall attention received by a specific token. Hence, we develop Equation (6) to assign each token an aggregated attention score. Let \(A_{i}\) represent the aggregated attention score for token \(i\), where \(i\) is also present in the parallel representation \(r_{1}\) with \(w(f_{ij})>t_{w}\), the token weight threshold. The aggregated attention \(A_{i}\) for a token \(i\) is defined as:

\[A_{i}=\frac{1}{N}\sum_{l=1}^{N}\text{mean}(l_{ik}) \tag{6}\]

where \(N\) is the total number of attention layers in the transformer model, \(l_{ik}\) denotes the attention scores for token \(i\) at position \(k\) in layer \(l\), and \(\text{mean}(l_{ik})\) computes the average attention score across all attention heads for token \(i\) at position \(k\) in layer \(l\). We compute \(A_{i}\) for each token of interest and store the results in a dictionary that maps each token to its respective list of aggregated attention scores. This allows us to contrast the assessments made by the transformer model for human-generated text versus text generated by GPT models.

## 4 Results - Benchmark Evaluation and Empirical Analysis

### Benchmark Evaluation Results

Our first research question asks about the effectiveness of state-of-the-art feature-based and deep learning methods for AES. As noted in Section 3.3, the seven ML models were each trained separately on the ASAP and CLC-FCE testbeds using 5-fold cross-validation. This was done because training each classifier within a given testbed offered better performance vis-a-vis training them across a consolidated training set encompassing both testbeds. Additionally, the dependent variable within both testbeds was standardized to a 0-1 continuous scale. As noted in Section 3.4, we employed two error metrics, MSE and MAE, and three agreement/correlation measures (QWK, PCC, SRC) [41, 67]. Given MSE and MAE are error metrics, values closer to 0 denote better performance. For QWK, PCC, and SRC, values closer to 1 indicate better performance, whereas values closer to 0 signify random performance.

The results appear in Table 4. For both data sets, across all five metrics, the two transformer PLM-based models, BERT and RoBERTa, attained the best performance. Their PCC and SRC values in the 0.5 to 0.76 range are on par with best-in-class text scoring results attained on problems such as psychometric NLP and personality detection [46]. Additionally, the QWK results for BERT and RoBERTa are comparable to the best ASAP full dataset results attained in prior studies [41, 67]. However, it is worth noting that direct comparisons are difficult to make because prior studies have used different problem formulations, training-testing splits, and different testbeds (e.g., even within ASAP) [67]. This lack of prior streamlined large-scale benchmarking is the impetus for our RQ1.
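For reference, the five metrics reported in Table 4 can be computed with standard libraries, as sketched below. The snippet is illustrative and assumes predicted and gold scores on the 0-1 scale; because QWK is defined over discrete ratings, the sketch maps scores to integer bins first, which is one possible choice rather than necessarily the exact procedure used in our evaluation.

```python
# Illustrative computation of the evaluation metrics used in Table 4.
# y_true and y_pred are toy gold and predicted scores on a 0-1 scale.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import mean_squared_error, mean_absolute_error, cohen_kappa_score

y_true = np.array([0.25, 0.50, 0.75, 1.00, 0.50])
y_pred = np.array([0.30, 0.45, 0.70, 0.90, 0.60])

mse = mean_squared_error(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
pcc, _ = pearsonr(y_true, y_pred)
src, _ = spearmanr(y_true, y_pred)

# Quadratic weighted kappa operates on discrete ratings, so the continuous
# scores are first mapped to integer bins (here, an assumed 0-10 scale).
bins_true = np.rint(y_true * 10).astype(int)
bins_pred = np.rint(y_pred * 10).astype(int)
qwk = cohen_kappa_score(bins_true, bins_pred, weights="quadratic")

print(f"MSE={mse:.4f} MAE={mae:.4f} QWK={qwk:.4f} PCC={pcc:.4f} SRC={src:.4f}")
```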
In regards to other ML models, the CNN/RNN methods incorporated, namely CNN and GRU, outperformed feature-based methods such as SVR, XGB, and KNN. This is also aligned with prior surveys of AES [41, 67] and related areas such as personality [97]. Interestingly, for all methods on all five metrics, results were somewhat better on the ASAP testbed as compared to CLC-FCE. This could be due to the greater abundance of available training data. In prior AES studies, ASAP has also seen greater usage [89, 41, 67]. Overall, the benchmarking results lend credence to the ML assessment models incorporated in our study. The results suggest that it is reasonable to include ML assessment scores as part of our statistical analysis model (see ensuing section for details).

Figure 3 shows the distribution of human text gold standard scores, scaled to a 0-1 range (green background bars), on the ASAP testbed. Also depicted are the ChatGPT generated texts' ML model scores for BERT and RoBERTa (first column top and bottom charts, respectively), GRU and CNN (middle column), and SVR and XGB (right column). Note that because the human text scores are the gold standard, they do not vary by model whereas the ChatGPT scores do. Looking at the distributions, we can see that the ML assessment models typically rate the ChatGPT text higher. Further, the ML assessments of ChatGPT text also follow a tighter distribution with a smaller range and less variance.

### Results - Empirical Analysis

Our second research question asked how AES models trained on human-generated text rated content generated by GPT models, and how the effects were moderated by document genres and different categories of ML models used for assessment. As noted in Section 3.5, and presented in Figure 2, our analysis cube spans three dimensions: prompt genres, ML assessment models, and human versus ChatGPT generated text. Model-free analysis methods are ill-equipped for parsimoniously considering the interactions between these three dimensions, as well as the impact of other control variables such as the type of testbed (ASAP versus CLC-FCE) and other considerations (e.g., essay length). Accordingly, we used a 3-way split-plot ANOVA design to assess variations in ML assessment model scores given the impacts of the prompt type, prompt respondent (text author: GPT or human), and the ML model used to perform the scoring assessment. The input for the ANOVA model was an ML-text score tuple. Given we had 7 ML models for scoring (see rows in Table 4), this meant 7 rows for each of the 13,847 total documents in our human plus GPT testbeds when using ChatGPT powered by GPT-3.5, and 15,333 total documents when using ChatGPT powered by GPT-4 in addition to GPT-3.5. Similar to our benchmark analysis, ML model scores were standardized to a range between 0 and 1.
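Before turning to the design factors and results, the following minimal sketch illustrates one way the mixed-effects specification in Equation (1) could be fit in Python with statsmodels. The column names are hypothetical, and the actual analysis may have used different software and contrast codings.

```python
# Illustrative fit of a mixed-effects model in the spirit of Equation (1):
# fixed effects for prompt type (A), respondent (B), and scoring model (C),
# controls for length (D) and testbed (E), and a random intercept for the
# individual prompts nested within prompt type. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scores_long.csv")  # one row per (document, scoring model) pair

model = smf.mixedlm(
    "score ~ C(prompt_type) * C(respondent) * C(ml_model) + length + C(testbed)",
    data=df,
    groups=df["prompt_id"],  # random intercept for prompts
)
result = model.fit()
print(result.summary())
```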
The 66 different prompt IDs were a between factor that was nested under the 6 prompt genres mentioned above. The prompt respondent and ML model used were within factors, meaning that each response generated by either GPT or a human was repeatedly scored using the different ML models pre-trained with human responses. In the two following sub-sections, we report the 3-way split-plot ANOVA results when using GPT-3.5 versus humans, and as a robustness check, also when using GPT-4 in addition to GPT-3.5.

\begin{table}
\begin{tabular}{|l l c c c c c|}
\hline & \multicolumn{6}{c|}{ASAP Testbed} \\
 & **Models** & **MSE** & **MAE** & **QWK** & **PCC** & **SRC** \\
\hline \multirow{2}{*}{Transformer PLM} & BERT & **0.0241** & **0.1187** & 0.7034 & 0.7648 & **0.7685** \\
 & RoBERTa & 0.0252 & 0.1236 & **0.7105** & **0.7663** & 0.7612 \\
\hline \multirow{2}{*}{Deep Learning CNN/RNN} & CNN & 0.0280 & 0.1299 & 0.6856 & 0.7214 & 0.7158 \\
 & GRU & 0.0274 & 0.1306 & 0.6844 & 0.7351 & 0.7299 \\
\hline \multirow{3}{*}{Feature-based ML} & SVR & 0.0293 & 0.1325 & 0.6071 & 0.7143 & 0.7081 \\
 & XGB-RF-Regressor & 0.0329 & 0.1396 & 0.5748 & 0.6575 & 0.6553 \\
 & KNN-Regressor & 0.0372 & 0.1491 & 0.6168 & 0.7000 & 0.6914 \\
\hline & \multicolumn{6}{c|}{FCE Testbed} \\
 & **Models** & **MSE** & **MAE** & **QWK** & **PCC** & **SRC** \\
\hline \multirow{2}{*}{Transformer PLM} & BERT & 0.0277 & 0.1350 & 0.3123 & 0.5152 & **0.5227** \\
 & RoBERTa & **0.0266** & **0.1304** & **0.3307** & **0.5158** & 0.5214 \\
\hline \multirow{2}{*}{Deep Learning CNN/RNN} & CNN & 0.0381 & 0.1563 & 0.0956 & 0.1313 & 0.1133 \\
 & GRU & 0.0377 & 0.1556 & 0.0945 & 0.1223 & 0.1195 \\
\hline \multirow{3}{*}{Feature-based ML} & SVR & 0.0339 & 0.1553 & 0.0475 & 0.3212 & 0.3226 \\
 & XGB-RF-Regressor & 0.0344 & 0.1480 & 0.0943 & 0.1691 & 0.1983 \\
 & KNN-Regressor & 0.0356 & 0.1512 & 0.1490 & 0.2067 & 0.2034 \\
\hline
\end{tabular}
\end{table}
Table 4: Benchmarking Results Comparing Performance of Different ML Methods on Human Generated Text

#### 4.2.1 Results when Comparing Human and GPT-3.5 Generated Text

The overall significance results for main effects, two-way, and three-way interactions appear in Table 5. As noted in Section 3.5 when discussing (1), _A,B,C_ correspond to the three dimensions of our analysis cube, namely prompt type, respondent type, and the ML assessment model, respectively. _D,E_ relate to the text length and the testbed source for the prompts (i.e., ASAP or CLC-FCE). For precision, here we only present the ANOVA results in Table 5. Effect size plots appear in figures discussed subsequently. The table depicts statistical significances for the main effects as well as two-way and three-way interactions pertaining to _A,B_ and \(C\). Overall, with the exception of the prompt type main effect, all other factors were significant (i.e., p-values < 0.05).

Figures 4, 5, and 6 show effect sizes for the three two-way interactions. Figure 4 shows results for the ML assessment model (x-axis) comparing the human and GPT text responses, that is, the \(B\)x\(C\) interaction. The models along the x-axis are intentionally grouped by type, with feature-based being left-most, followed by CNN/GRU in the middle, and the two transformer models on the right. For brevity, kNN was excluded from the plots.
The results show that feature-based ML assessment models such as SVR and XGB tend to score human-generated text about 10-15% higher than GPT generated responses (about 7-10 points higher; for instance, XGB scores human texts at 0.72 and GPT ones at around 0.65). This disparity becomes even more pronounced when using CNN and GRU to perform the assessment. On average, these deep learning models score human responses about 26-32% higher (i.e., about 15-25 points higher).

Figure 3: ML Assessments of GPT Generated Text Versus Expert Ratings of Human Text

These results seem to suggest that, perhaps due to the out-of-training-sample nature of GPT generated text responses, the feature-based and CNN/GRU ML models tend to rate these texts lower. However, when looking at the transformer PLM results, both BERT and RoBERTa tend to score the GPT essays markedly higher than the human ones. For BERT, the difference is approximately 10 points (0.78 versus 0.68) or about 15%. This "flip" relative to the feature-based and CNN/RNN deep learning models is important because, as noted in our benchmark analysis, the transformer PLM models attain state-of-the-art performance.

This "affinity" for GPT generated text, relative to human responses, as depicted in Figure 4, could be attributable to a few possible factors. First, the feature-based ML models and CNN/GRU deep learning models have pre-defined features/vocabularies based on their input feature sets and/or static word embeddings. Conversely, transformer PLMs use much larger pre-training corpora and also rely on sub-word (word piece) tokenization to avoid out-of-vocabulary (OOV) concerns [29, 50]. Hence, the absence of GPT generated text in the training phase may be less of a concern for transformer PLMs tasked with assessing such text. Second, as noted in our review of related work, BERT and RoBERTa share a fair amount of common pre-training data with GPT models, namely the Wikipedia corpus, 11K Book and/or Book 1 and Book 2 corpora, and common crawl and web text data [66, 21]. Admittedly, with GPT-3.5 onwards, the true extent of the training corpora for ChatGPT models is unclear, and the proportion of overlap is likely smaller. Nevertheless, familiarity in pre-training sources, as well as relative commonalities in the underlying attention/learning mechanisms (at least vis-a-vis CNN/GRU and SVR/XGB), may contribute to higher quality score assessments for GPT generated text when evaluated by BERT/RoBERTa. Third, it could be that because the transformer PLMs are more accurate (as per results in Table 4), they are less prone to over-fitting, and hence, are more capable of generalizing to the out-of-sample GPT text and assessing its quality. We explore the first two possible explanations in greater detail in the ensuing section related to RQ3.

Figure 5 shows the results for human and GPT-generated text across the six genres of prompts. This corresponds to the \(A\)x\(B\) factor row in Table 5. For most genres, including argumentative (ARG), commentary (COMM), letter (LETT), and suggestion (SUGG) writing, on average, human texts scored 7 to 17 points (10-20%) higher. For narrative writings (NARR), results for humans and GPT were comparable. In contrast, GPT text scored nearly 10 points higher than human generated content for response (RESP) writing. Figure 6 shows the \(A\)x\(C\) interaction. The x-axis depicts ML models and the different series show prompt genres.
In terms of impact of models on genre-level performance, many of the lines are reasonably flat, signifying consistent results across different models. Exceptions include the CNN model rating letter, suggestion, and comment texts lower, and BERT scoring letter and suggestion relatively higher. In regards to prompt genre trends, response (RESP) consistently attained the highest model predicted scores. Response was the only prompt type genre in our testbed that did not appear in CLC-FCE. Hence, the generally higher scores may also be a function of reliance on text from a single testbed. On the other end of the spectrum, response genre text was scored lower by all models.

\begin{table}
\begin{tabular}{l r r r r r r}
\hline **Source** & **SS** & **MS** & **NumDF** & **DenDF** & **F value** & **Pr(\textgreater{}F)** \\
\hline A (prompt type) & 0.31 & 0.06 & 5 & 2 & 3.8566 & 0.2186 \\
B (human/GPT text) & 1.76 & 1.76 & 1 & 82419 & 109.7398 & \textless{}2e-16*** \\
C (scoring model) & 1.82 & 0.36 & 5 & 97458 & 22.7752 & \textless{}2e-16*** \\
D & 515.82 & 515.82 & 1 & 97309 & 32200.2932 & \textless{}2e-16*** \\
E & 1.72 & 1.72 & 1 & 41 & 107.6747 & 4.29e-13*** \\
A\(\times\)B & 14.92 & 2.98 & 5 & 63321 & 186.2416 & \textless{}2e-16*** \\
A\(\times\)C & 6.37 & 0.25 & 25 & 97458 & 15.8982 & \textless{}2e-16*** \\
B\(\times\)C & 13.66 & 2.73 & 5 & 97458 & 170.5444 & \textless{}2e-16*** \\
A\(\times\)B\(\times\)C & 5.68 & 0.23 & 25 & 97458 & 14.1775 & \textless{}2e-16*** \\
\hline
\end{tabular}
\end{table}
Table 5: Results for Statistical Model with Human Versus GPT-3.5 Generated Text

Figure 4: ML Model Comparison By Respondent Type

Figure 5: Interaction Effect Between Respondent Types and Prompt Types

Figure 6: Interaction Effect Between ML Models and Prompt Types

Figure 7 shows the three-way interactions. The left panel shows models (x-axis) and prompt genres (different series lines) for human-generated text, whereas the right panel depicts the same for GPT-generated text. Hence, collectively, the figure shows the \(A\)x\(B\)x\(C\) row from Table 5. Results in the left panel of Figure 7 are similar to the overall two-way \(A\)x\(C\) interaction results depicted in Figure 6. Interestingly, the SVR/XGB/CNN/GRU models score GPT text for certain genres considerably lower than human text: argumentative, letter, comment, and suggestion. However, on all four of these aforementioned genres, as well as narratives, the two transformer-based PLM models score the GPT text relatively higher compared to the feature-based and CNN/GRU deep learning models. In general, BERT and RoBERTa scored GPT text considerably higher than human-generated content on response, narrative, and argumentative genres. For instance, on narratives, GPT text was scored about 15 points higher. On response texts, it was about 10 points higher. BERT and RoBERTa also scored GPT text fairly close to human text on the other genres. Interestingly, all models rated human and GPT response text the highest (see top trend line in both panels in Figure 7). However, GPT text for the response genre was unanimously scored higher than human text across all ML models. Overall, the three-way interaction results shed further light on how and when ML assessment models deem GPT generated text quality to be better than that of human text, and how these effects are moderated by the model type and prompt genres.
Figure 7: Interaction Effect Between ML Models, Respondent Types and Prompt Types

#### 4.2.2 Results when Comparing Human, GPT-3.5, and GPT-4 Generated Text

As a robustness check, we wanted to see if our results for human versus GPT were consistent when using ChatGPT powered by GPT-4 in addition to GPT-3.5. Hence, we reran our ANOVA models with one notable difference: for respondent type, we included human, GPT-3.5, and GPT-4 as the three categories of text generators. The overall significance results for main effects, two-way, and three-way interactions appear in Table 6. Once again, _A,B,C_ correspond to the three dimensions of our analysis cube, namely prompt type, respondent type, and the ML assessment model, respectively. _D,E_ relate to the text length and the testbed source for the prompts (i.e., ASAP or CLC-FCE). The table depicts statistical significances for the main effects as well as two-way and three-way interactions pertaining to _A,B_ and \(C\). Overall, consistent with Table 5 appearing earlier, with the exception of the prompt type main effect, all other factors were significant (i.e., p-values < 0.05). Figure 8 shows the three two-way interaction plots for respondent-assessment (a), respondent-genres (b), and assessment-genres (c). The top two charts depict three series due to the inclusion of the three respondent categories: human, GPT-3.5, and GPT-4. Looking at the two-way interaction plot between assessment model and respondent type, depicted in chart (a), we observe a similar pattern to that depicted earlier. Feature and CNN/RNN-based assessment models score human-generated text higher than essays produced by both GPTs, whereas the transformer-based language models (BERT and RoBERTa) rate the GPT-generated text higher despite the GPT text being outside the training data used to fine-tune these two transformer-based assessment models. Interestingly, this effect is less pronounced for GPT-4 text as compared to essays generated by GPT-3.5. This could be due to greater differences in the underlying training data used by the BERT/RoBERTa models in comparison with GPT-4, relative to GPT-3.5, or differences in how ChatGPT using GPT-4 was trained/tuned relative to the version powered by GPT-3.5. It would be interesting to see if this trend continues with future work. Whether or how larger assessment models are more akin to GPT-3.5 or GPT-4 may affect quality scores for human versus LLM-generated text. Looking at charts (b) and (c) in Figure 8, the overall respondent-genre and assessment-genre effects are similar to those observed earlier in Section 4.2.1. For instance, in regards to respondent-genre, chart (b), the argumentative (ARG), commentary (COMM), letter (LETT), and suggestion (SUGG) genres of prompt types were scored higher for humans versus GPTs, the respondent groups had comparable scores on narrative (NARR) writing, and the two GPT models scored markedly higher on response (RESP) writing.
Similarly, in regards to the assessment-genre interaction effects, consistent with earlier results, response (RESP) writing attained the highest scores across models, whereas argumentative (ARG) text yielded the lowest scores.

\begin{table} \begin{tabular}{l r r r r r r} \hline **Source** & **SS** & **MS** & **NumDF** & **DenDF** & **F value** & **Pr(\(>\)F)** \\ \hline A & 0.27 & 0.05 & 5 & 2 & 3.3867 & 0.2435 \\ B & 1.03 & 0.52 & 2 & 101104 & 32.6008 & 7.018e-15*** \\ C & 60.80 & 12.16 & 5 & 102317 & 766.2843 & \(<\)2e-16*** \\ D & 488.21 & 488.21 & 1 & 102189 & 30766.2560 & \(<\)2e-16*** \\ E & 1.01 & 1.01 & 1 & 41 & 63.5332 & 7.107e-10*** \\ A \(\times\) B & 31.01 & 3.10 & 10 & 98035 & 195.4344 & \(<\)2e-16*** \\ A \(\times\) C & 8.34 & 0.33 & 25 & 102317 & 21.0190 & \(<\)2e-16*** \\ B \(\times\) C & 48.54 & 4.85 & 10 & 102317 & 305.9125 & \(<\)2e-16*** \\ A \(\times\) B \(\times\) C & 11.78 & 0.24 & 50 & 102317 & 14.8514 & \(<\)2e-16*** \\ \hline \end{tabular} \end{table} Table 6: Results for Statistical Model with Both GPT-3.5 and GPT-4 Included

Figure 8: Two-way Interaction Effect Plots when GPT-4 is Included

Finally, Figure 9 shows the three-way interaction plots for our three-dimensional analysis cube when including the three respondent types. Relative to the GPT-3.5 text, the GPT-4 generated essays score higher when assessed by BERT and RoBERTa for response prompts (RESP). BERT also scores the GPT-4 text higher for narrative, suggestion, and letter writing. For argumentative and commentary genres, BERT scored text from GPT-3.5 higher than that generated by GPT-4. In the next section, we use feature importance and attention analysis methods to delve deeper into linguistic differences between human and GPT text, and why the transformer model may be scoring GPT text higher.

Figure 9: Interaction Effect Between ML Models, Respondent Types (including GPT-4) and Prompt Types

## 5 Results - Content Analysis

Our third research question asked what linguistic categories are most different for human versus GPT generated text. As discussed in section 3.6, our content analysis involved two parts. In the first, we computed the expressive power of parallel representations. In the second, we compared the parallel representation weights with the attention weights from the BERT assessment model.

### Content Analysis Using Parallel Representations

The purpose of the parallel representation-based analysis of expressive power was to see how linguistically similar or different human versus GPT generated texts are, and across which linguistic categories. The idea is that if there are more features that can discern the respondent type (i.e., higher expressive power), this signifies greater differences between the two groups. We set our two class labels as human and GPT generated text, respectively. We used 1,537 essays generated by GPT-3.5 and 15,437 human essays from the FCE and ASAP corpora. In total, 21 parallel representations were employed (i.e., \(m=21\)). These spanned five categories: word and sense, topical, sentiment and affect, psychological and pragmatic, and syntax and style. Regarding thresholds, we used \(t_{w}=0.00001\) and \(t_{c}=0.95\). In order to compare the expressive power for human versus GPT with other between-human demographics, we also performed the same analysis within FCE for age (younger versus older authors) and race (authors from Asian versus non-Asian countries).
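As a rough illustration of the expressive-power idea described above, the sketch below counts, for each successive representation, the features whose information gain exceeds \(t_{w}\) and that are not redundant (absolute correlation above \(t_{c}\)) with features already credited. The exact weighting and redundancy rules of the original parallel-representation framework may differ, and the function and variable names here are ours.

```python
# Hedged sketch of a cumulative expressive-power curve over representations.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def expressive_power(representations, y, t_w=1e-5, t_c=0.95):
    """representations: dict {name: 2-D array (n_docs, n_features)};
    y: binary labels (human = 0, GPT = 1). Returns cumulative counts."""
    kept = []                 # feature columns already credited
    cumulative, scores = 0, {}
    for name, X in representations.items():
        gains = mutual_info_classif(X, y, discrete_features=False)
        for j in np.argsort(gains)[::-1]:      # strongest features first
            if gains[j] <= t_w:
                break
            col = X[:, j]
            redundant = any(abs(np.corrcoef(col, k)[0, 1]) > t_c for k in kept)
            if not redundant:
                kept.append(col)
                cumulative += 1
        scores[name] = cumulative              # cumulative e(r_j)-style score
    return scores
```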
Finally, as an additional comparison, we included the AskAPatient online forum used in several prior studies, where authors were grouped by gender (male versus female) and age (high versus low) [46]. The expressive power results for the five binary class comparisons appear in Figure 10, each as a separate line series. Starting with the word representation (\(m=1\)), assigned an expressive score of 1, going left to right, the chart depicts the cumulative \(e(r_{j})\) scores associated with each of the additional 20 representations. Looking at the figure, we can see that across word sense representations, all five lines appear similar. However, with the topic-oriented representations (second group from the left), we begin to observe differences. Namely, on the GloVe embedding-based lexicons, the human-GPT, Ask-age, and Ask-gender lines show a lift in expressive power, suggesting that the two groups in these three datasets differ in language usage across their topical composition (relative to the FCE-age and FCE-race datasets). This gap between the human-GPT and FCE age/race series further widens on the psychological and pragmatic lexicons such as LIWC and speech acts, and again within the syntax and style category (namely for POS tags), suggesting differences in discussion of psychological concepts and actions/intentions appearing in the text. Interestingly, whereas the chart shows an increasing linguistic difference between text generated by humans versus GPTs within the same domain of prompt-driven essays, the gap between the human-GPT differences and those observed across age or gender groups in the Ask testbeds is less pronounced. We can conclude that, relative to essays written by humans of different groups such as race and age, the GPT-generated texts do exhibit greater linguistic variation, quantified based on the information gain of tokens across parallel representations. One caveat to this conclusion is that we only have self-reported demographic data for FCE, but not ASAP. On the other hand, the text composition differences when compared against self-reported male/female and young/old authors in an online health discussion forum are much less pronounced. In the ensuing section, we use the attention weights from the BERT models used for assessment to shed further light on these differences between human and GPT text.

Figure 10: Expressive power to demonstrate differences in textual content generated by humans versus GPT

### Content Analysis Using Transformer Attention Weights

Following the approach depicted in Section 3.6, we computed the attention scores \(A_{i}\) for each of the word-representation tokens (i.e., \(r_{1}\)) with the highest weights, \(w(f_{ij})\). Given that our assessment models were trained using 5-fold cross validation, in order to ensure that the attention weights for tokens were only computed across instances that the model had previously not seen, we computed attention for tokens within a given human-generated text only using the training BERT model from when that text appeared in the test fold. Similarly, for the GPT-generated texts, consistent with the approach taken with our statistical analysis models (Section 3.5), because all texts were out-of-sample, the attention scores for tokens appearing in GPT text were averaged across the 5 training fold models. Figure 11 shows the results. Each bar represents a token. Token labels appear vertically along the x-axis of the bottom chart.
For each token, the top chart depicts token weights on the y-axis along with the occurrence frequency (i.e., how often it appears across the human/GPT essays), with the latter appearing both as the bar label value and color intensity. Tokens are arranged in ascending order from left to right based on their occurrence frequency in the data, which ranges from 3 to 10537. The bottom chart depicts the attention weights (y-axis) and breakdown of occurrences (number labels for each bar) across GPT-generated and human essays (black and gray bars, respectively). By focusing on the tokens with the highest weights in the word representation, we wanted to see how the BERT model trained on human essays attended to the tokens known to be most skewed in their occurrence for human versus GPT generated text (as measured by their \(w(f_{ij})\) weights). Looking at the bottom chart, we can see that, with a few exceptions, the attention scores from the BERT model fine-tuned on human essays are almost always higher when those tokens appear in human text. This suggests that because the model's contextualized positional embeddings have been fine-tuned on human essays, it attends more to these tokens when they appear in that text as opposed to when they appear in the out-of-sample GPT essays. Overall, the results from Figure 11 suggest that the BERT models are attending more to the human text tokens versus the GPT ones. We speculate that this heightened attention could be due to the familiarity in training data, suggesting that the BERT models are attending more to text from contexts they have been fine-tuned on. Next, we explored the tokens with the highest weights that only appeared within the GPT or human generated text, respectively. The results appear in Figure 12. The left chart depicts the highest \(w(f_{ij})\) tokens from GPT that never appear in human text (and vice versa for the right chart). For each token, the bar shows the average attention scores from BERT (x-axis), and the frequency occurrence of that token in the text (color shade and number appearing next to each bar). Looking at the tokens, we can observe a few interesting patterns. GPT demonstrates far greater use of proper noun names, including authors and characters that are the subject of the essays. Examples include "bronte," "gary," "helen," "bailey" and "hastings." The fact that some/most of these tokens receive attention in the BERT model - when they have not been seen during the fine-tuning process because they never appear in the human text - suggests that the model weights from the corpora that BERT was pre-trained on may be a factor. Admittedly, the positional embeddings and ability of transformer models to pick up on syntactic structure may also be at play - prior studies have noted the complexities of understanding the underlying mechanisms within transformer-based language models [69]. The tokens appearing exclusively in GPT-generated text also include several sophisticated literary concepts and devices, namely, references to "antagonists," "characterizations," and "intricacy," as well as themes of "forgiveness," "activism," and "manipulation." Conversely, amongst the tokens appearing exclusively in the human-generated text (right chart in Figure 12), there is far greater usage of colloquialisms and less formal verbiage such as "crappy" and "goofy." Moreover, the human text includes far greater usage of tokens with sentiment polarity such as "craziest," "ridiculously," and "horrendous."
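The per-token attention aggregation described above can be sketched as follows. This is a minimal illustration using the Hugging Face transformers API with the generic bert-base-uncased checkpoint as a stand-in for the fine-tuned assessment models; the paper's exact pooling of attention into the token scores \(A_{i}\) (e.g., which layers and heads are averaged) may differ.

```python
# Sketch: attention received by each word-piece token, averaged over layers,
# heads and query positions. The checkpoint name is a placeholder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def token_attention_scores(text):
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one tensor per layer, shape (1, heads, seq_len, seq_len)
    stacked = torch.stack(out.attentions)            # (layers, 1, heads, seq, seq)
    received = stacked.mean(dim=(0, 1, 2)).mean(dim=0)   # attention received per token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, received.tolist()))      # keep duplicates of repeated tokens

print(token_attention_scores("The antagonist in the novel is ridiculously goofy."))
```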
Figure 11: Token weights and BERT attention scores for overall highest weighted tokens

Figure 12: Token weights and BERT attention scores for highest weighted tokens only appearing in GPT (left) and human (right) text

Collectively, the differences in token occurrences between GPT and human text, as depicted in Figure 12, shed light on, and align with, the expressive power trend line presented earlier in Figure 10, in that the biggest differences between human and GPT text appear across the parts-of-speech (POS), sentiment and affect, and topical concept representations. One notable exception is misspellings, which were naturally more pervasive in human text but do not manifest in GPT-generated content.

## 6 Discussion and Concluding Remarks

This paper contributes to the nascent yet growing body of literature that explores the intended and unintended consequences of generative AI models. Emerging literature explores the effectiveness of ML generated content directly against a human gold standard [33, 85]. In contrast, we explore differences in how ML-based scoring models trained on human content assess the quality of content generated by humans versus GPTs. In particular, we empirically examine how LLMs capable of generating high-quality text may disrupt automated processes that already leverage ML/NLP to score text. Across our three research questions, we firstly show that transformer language model-based assessment methods for scoring text are becoming the state-of-the-art (e.g., BERT and RoBERTa in our study). This is consistent with other text classification problems where models and architectures using robust embeddings are outperforming standard feature ML and CNN/RNN based methods [97]. More importantly, related to the second research question, our statistical analysis shows that such transformer language model-based assessment methods rate text generated by LLMs such as GPTs higher than human-generated text, even when they are only trained on human data. For instance, BERT scored GPT-3.5 text about 15% higher than human text. Notably, we do not observe this trend when the assessment models are feature-based or CNN/RNN. Feature-based methods score human text 10-15% higher, and this gap is even more pronounced when assessing using CNN/RNN models (26-32% higher for human text). We explore this disparity for BERT-based models in the content analysis associated with our third research question, using parallel representations and attention weights to understand linguistic differences between human and GPT text, and how BERT-based models might be attending differently to GPT versus human text. Our parallel representation analysis shows that there are considerable differences in topics, sentiment, affect, parts-of-speech, and misspellings between GPT and human generated essays. Further, the analysis reveals that the expressive power of the differences is greater than that observed within human texts for cross-age and cross-race comparisons, but on par with differences across gender and age groups for user-generated content from an online discussion forum. Attention weight analysis suggests that BERT models assessing GPT text are attending to author/character name proper nouns and other tokens that they may have seen during pre-training (e.g., Wikipedia), but that are out-of-sample during fine-tuning on human essays. Our work presents an important foray into the interplay between text assessment and generative models in the era of LLMs.
Our main contributions are two-fold. First, we propose an analysis framework for environments involving automated generation and assessment artifacts, including ML-based scoring models, GPT-generated texts, and statistical models for parsimoniously evaluating such intersections. We intend to make the framework code, analysis models, generated data, and prompt design processes publicly available 3. Second, we use the problem context of AES to offer empirical insights, including how state-of-the-art text scoring methods, based on transformer language models, may score certain genres of GPT-generated content higher even when exclusively trained on human content, possibly due to familiarity. That is, overlap in language model pre-training data between BERT/RoBERTa and GPT-3.5 and GPT-4 could potentially impact how the former assess text generated by the latter. Although set in the context of AES, our framework and results have implications for many IR and text classification settings where automated scoring of text/documents is likely to be disrupted by generative AI Footnote 3: Code and data available at: github.com/nd-hal/automated-ML-scoring-versus-generation This work is not without its limitations. We did not manually label/score the GPT essays - this could shed light on the human-perceived quality of text generated by ChatGPT-type LLMs versus humans. However, in our experience, any manual assessment of GPT generated text could be prone to human-automation biases. On a related note, we did not include the GPT-based texts as part of our assessment model training. However, this was partly motivated by our research questions, which were interested in the impact of injecting generative model-based text into an automated assessment process traditionally devoid of such generated content. Another limitation could be that our interplay occurred in the context of automated essay scoring. However, this is an important problem that has received a fair amount of attention from the NLP community [91, 89, 79, 67]. Moreover, given the genres of text examined, including persuasion, commentary, response, summary, etc. the results are relevant in related areas such as search, content recommendation, and information retrieval more broadly where similar genres of text manifest in blogs, social media, news articles, online reviews, amongst other document modalities. Nonetheless, we believe our study makes important contributions. We are hopeful that future work can take our conceptual research design and analysis framework and extend it to other domains involving human-LLM assessment/generation intersections, such as fake news detection, search relevance assessment, and recommendation, just to name a few.
2303.17864
B-meson hadroproduction in the SACOT-$m_{\rm T}$ scheme
We apply the SACOT-$m_{\rm T}$ general-mass variable flavour number scheme (GM-VFNS) to the inclusive B-meson production in hadronic collisions at next-to-leading order in perturbative Quantum Chromodynamics. In the GM-VFNS approach one matches the fixed-order heavy-quark production cross sections, accurate at low transverse momentum ($p_{\rm T}$), with the zero-mass cross sections, accurate at high $p_{\rm T}$. The physics idea of the SACOT-$m_{\rm T}$ scheme is to do this by accounting for the finite momentum transfer required to create a heavy quark-antiquark pair throughout the calculation. We compare our results with the latest LHC data from proton-proton and proton-lead collisions finding a very good agreement within the estimated theoretical uncertainties. We discuss also scheme-related differences and their impact on the scale uncertainties.
Ilkka Helenius, Hannu Paukkunen
2023-03-31T07:58:26Z
http://arxiv.org/abs/2303.17864v2
**B-meson hadroproduction in the SACOT-\(m_{\rm T}\) scheme** ## Abstract **We apply the SACOT-\(m_{\rm T}\) general-mass variable flavour number scheme (GM-VFNS) to the inclusive B-meson production in hadronic collisions at next-to-leading order in perturbative Quantum Chromodynamics. In the GM-VFNS approach one matches the fixed-order heavy-quark production cross sections, accurate at low transverse momentum (\(p_{\rm T}\)), with the zero-mass cross sections, accurate at high \(p_{\rm T}\). The physics idea of the SACOT-\(m_{\rm T}\) scheme is to do this by accounting for the finite momentum transfer required to create a heavy quark-antiquark pair throughout the calculation. We compare our results with the latest LHC data from proton-proton and proton-lead collisions finding a very good agreement within the estimated theoretical uncertainties. We discuss also scheme-related differences and their impact on the scale uncertainties.** ###### Contents * 1 Introduction * 2 The SACOT-\(m_{\rm T}\) framework * 3 Results for proton-proton collisions * 4 Results for proton-nucleus collisions * 5 Conclusion and Outlook ## 1 Introduction The production of mesons containing a bottom quark - collectively called B mesons - in hadronic collisions provides a useful way to study various aspects of Quantum Chromodynamics (QCD). On one hand, thanks to the large bottom quark mass \(m_{\rm b}\approx 4...5\,\)GeV, the perturbative expansion in powers of the strong QCD coupling \(\alpha_{s}\) can be expected to converge relatively well and thereby provide an accurate description of the production mechanism [1, 2, 3, 4, 5]. In comparison to charm-quark production where the possible non-perturbative intrinsic charm-quark content of the nucleons [6, 7, 8] can stir the interpretation, the bottom-quark production can be seen to be a cleaner process to test the perturbative QCD, though an intrinsic bottom-quark component is not excluded [9, 10]. The B-meson production is sensitive especially to the gluon content of the colliding hadrons and can thus be used to provide information on their non-perturbative structure, the parton distribution functions (PDFs) [11, 12, 13, 14]. While the B-meson production is not used as a constraint in the current global fits of proton PDFs [15, 16, 17], it should be mentioned that e.g. in comparison to the jet production [18, 19] - a commonly used strong gluon constraint - no external corrections due to multi-parton interactions or hadronization need to be supplied but the entire process can be calculated within the collinear factorization. As a result, the B-meson production could provide a rather clean probe for gluon distributions relying solely on inclusive single-particle production. On the other hand, for observables like the Drell-Yan dilepton or direct W\({}^{\pm}\) production the weak decays of heavy-flavoured mesons also produce a significant background of charged leptons whose subtraction requires an accurate theoretical understanding of the heavy-quark production [20]. Analyzing B-meson production in proton-nucleus collisions could provide further constraints for nuclear PDFs and, in the context of heavy-ion collisions, the B-mesons can also be used as a probe of the produced strongly interacting matter [21] and the expected mass hierarchies. 
The cross sections for identified B-meson hadroproduction have been measured in several collision systems: proton-antiproton (p-\(\overline{\mathrm{p}}\)) [22, 23] collisions at Fermilab Tevatron, as well as in proton-proton (p-p) [24, 25, 26, 27, 28, 29, 30, 31, 32], proton-lead (p-Pb) [33, 34], and lead-lead (Pb-Pb) [21] collisions at the Large Hadron Collider (LHC). In many occasions the B-meson cannot be fully reconstructed but only the spectrum of specific decay particles like charged leptons or J/\(\psi\) mesons, are measured. In work presented here, we will concentrate exclusively on the reconstructed B mesons, but plan to return to the decay spectra in future publications. We will discuss the B-meson production mainly in the so-called general-mass variable-flavour-number scheme (GM-VFNS) [35]. The GM-VFNS provides a framework to complement fixed-order QCD calculations with a resummation of heavy-quark mass-dependent logarithms that arise from collinear splitting of partons to heavy quarks. The fixed-order calculations - known to leading order (LO) [1], next-to-LO (NLO) [2, 3], and next-to-NLO (NNLO) [4, 5] in strong coupling \(\alpha_{s}\) - are based purely on diagrams in which the heavy quarks are explicitly excited from massless partons. The resummed parts account for the possibility that the heavy quarks are produced through higher-order diagrams within the initial- and final-state radiation. Although formally suppressed by extra powers of \(\alpha_{s}\) these contributions arise from collinear configurations which are logarithmically enhanced at large values of transverse momenta (\(p_{\mathrm{T}}\)). The division between the explicit and shower-originating heavy-quark production channels is not unique which induces a scheme and scale dependence on the description. Historically, the first variant of GM-VFNS for heavy-flavour hadroproduction was the so-called FONLL (Fixed-Order Next-to-Leading Logarithm) scheme introduced in Ref. [36]. Later on the SACOT (Simplified Aivazis-Collins-Olness-Tung) scheme was presented in Refs. [37, 38] and has been later on applied e.g. in Refs. [39, 40]. In the SACOT scheme, part of the resummed contributions are described by massless partonic coefficient functions which induces an unphysical divergence towards \(p_{\mathrm{T}}\to 0\), and one cannot therefore generally extend the calculation down to zero \(p_{\mathrm{T}}\). In Refs. [41, 42, 43] the authors pointed out that this behaviour can be tamed by suitably tuning the factorization and fragmentation scales. In the FONLL scheme these divergent features are cured by multiplying the zero-mass contributions by a factor \(p_{\rm T}^{2}/(p_{\rm T}^{2}+c^{2}m^{2})\), where \(c=5\) by default and \(m\) is the heavy-quark mass, which serves to evade the unphysical behaviour while still respecting the principles of GM-VFNS. However, neither of the two is a particularly natural way to cure the divergent behaviour and the former also causes unphysical kinks to the \(p_{\rm T}\) spectrum of heavy-flavoured mesons. Indeed, the reason why the invariant heavy-quark cross section remains finite even at zero \(p_{\rm T}\) is in the mass of the heavy quark which, when properly accounted for, keeps the intermediate particles off-shell - there is always a finite momentum transfer between the colliding, massless initial-state partons. This is the underlying physics idea of the SACOT-\(m_{\rm T}\) scheme which was introduced in Ref. [44]. 
It is the counterpart of the SACOT-\(\chi\) scheme [45] often used in the context of deeply inelastic scattering. Very recently, preliminary documents of the so-called SACOT-MPS (Massive Phase Space) scheme have also appeared [46, 47], which seems to share partly same ideas as the SACOT-\(m_{\rm T}\) scheme applied in this work. A somewhat different but closely related approach to heavy-flavour hadroproduction is the one in which fixed-order calculations are matched with a parton shower (FO-PS) [48, 49, 50, 51]. This procedure also performs a similar resummation as done in GM-VFNS though it still, in general, misses part of the resummed contributions that are included in GM-VFNS [44], though it can be used to simulate exclusive final states as well. Also, while it is more natural to use 4-flavour PDFs in the context of FO-PS framework to describe \(b\overline{b}\) production (part of the logarithms resummed by the parton shower are included in the evolution of the \(b\)-quark PDFs) a consistent use of 5-flavour PDFs is a built-in feature of GM-VFNS making it well-suited for general-purpose PDF studies. In the present paper our aim is to apply the SACOT-\(m_{\rm T}\) scheme [44], originally devised in the context of D-meson (mesons containing a charm quark) production, to the case of B mesons. The differential cross section \({\rm d}\sigma/{\rm d}p_{\rm T}\) of both D- and B-mesons show a maximum at low \(p_{\rm T}\) but they occur at different values of \(p_{\rm T}\). How this is linked with the heavy-quark masses is an intrinsic feature of a given scheme and provides thus a well-defined way to study the reliability of different schemes. We also introduce an improved description of the fragmentation variable which evades some difficulties in the original setup. In what follows we will first introduce the formalism in Section 2, and then discuss the numerical results in Sections 3 and 4 for p-p and p-Pb collisions at the LHC, respectively. In Section 5 we summarize the paper discussing our future plans. ## 2 The SACOT-\(m_{\rm T}\) framework We will now recapitulate our SACOT-\(m_{\rm T}\) framework [44] for single-inclusive heavy-flavoured meson production in hadronic collisions. The process we study is, \[h_{1}(P_{1})+h_{2}(P_{2})\longrightarrow h_{3}(P_{3})+X\,,\] where \(h_{1}\) and \(h_{2}\) denote the colliding hadrons and \(h_{3}\) is the heavy-flavoured meson. The momenta of the hadrons are indicated by \(P_{i}\). We can write the invariant cross section as \[\frac{{\rm d}^{3}\sigma^{h_{1}+h_{2}\to h_{3}+X}}{{\rm d}^{3}P_{3}/P_{3}^{0}} =\sum_{ijk}\int_{z^{\rm min}}^{1}\frac{{\rm d}z}{z^{2}}\int_{x^{ \rm min}}^{1}{\rm d}x_{1}\int_{x^{\rm min}_{2}}^{1}{\rm d}x_{2}f_{i}^{h_{1}}(x_ {1},\mu_{\rm fact}^{2})\,f_{j}^{h_{2}}(x_{2},\mu_{\rm fact}^{2})\,D_{k\to h_{3} }(z,\mu_{\rm frag}^{2})\] \[\times J(\vec{p},\vec{P})\times\frac{{\rm d}^{3}\hat{\sigma}^{ij \to k+X}(\tau_{1},\tau_{2},\rho,\sqrt{s},\mu_{\rm ren}^{2},\mu_{\rm fact}^{2},\mu_{\rm frag}^{2})}{{\rm d}^{3}p_{3}/p_{3}^{0}} \tag{1}\] \[-{\rm subtractions}\,.\] Here, \(d\hat{\sigma}^{ij\to k+X}/d^{3}p_{3}\) are the inclusive partonic cross section for producing a parton \(k\) carrying a momentum \(p_{3}\) in collisions of partons \(i\) and \(j\) with momenta \(p_{1}=x_{1}P_{1}\) and \(p_{2}=x_{2}P_{2}\) in our scheme. 
The fragmentation of the produced parton \(k\) into a heavy-flavoured meson is described by the fragmention functions (FFs) \(D_{k\to h_{3}}(z,\mu_{\rm frag}^{2})\) which depend on the fragmentation scale \(\mu_{\rm frag}^{2}\). The fluxes of partons from the initial-state hadrons are described by the PDFs \(f_{i}(x,\mu_{\rm fact}^{2})\) and they depend on the factorization scale \(\mu_{\rm fact}^{2}\). The subtraction terms are required in order avoid the double counting between the same logarithmic terms that appear in partonic cross sections and PDFs/FFs, as will be discussed later on. The invariants \(\tau_{1}\), \(\tau_{2}\), and \(\rho\) are defined by \[\tau_{1}\equiv\frac{p_{1}\cdot p_{3}}{p_{1}\cdot p_{2}}=\frac{m_{\rm T}e^{-y}} {x_{2}\sqrt{s}}\,,\hskip 14.226378pt\tau_{2}\equiv\frac{p_{2}\cdot p_{3}}{p_{1} \cdot p_{2}}=\frac{m_{\rm T}e^{y}}{x_{1}\sqrt{s}}\,,\hskip 14.226378pt\rho \equiv\frac{m^{2}}{x_{1}x_{2}s}\,, \tag{2}\] where \(m_{\rm T}=\sqrt{p_{\rm T}^{2}+m^{2}}\) and \(y\) denote the transverse mass and rapidity of the parton \(k\). Here, \(p_{\rm T}\) is the partonic transverse momentum and \(m\) is the heavy-quark mass. The integration limits \(x_{1,2}^{\rm min}\) are \[x_{1}^{\rm min}=\frac{m_{\rm T}\,e^{y}}{\sqrt{s}-m_{\rm T}\,e^{-y}},\quad x_{2 }^{\rm min}=\frac{x_{1}m_{\rm T}\,e^{-y}}{x_{1}\sqrt{s}-m_{\rm T}\,e^{y}}\,. \tag{3}\] The transverse momentum \(P_{\rm T}\) and rapidity \(Y\) of the heavy-flavoured meson are related to the corresponding partonic quantities through the definition of the fragmentation variable \(z\), for which we now use \[z\equiv\frac{P_{3}\cdot(P_{1}-P_{2})}{p_{3}\cdot(P_{1}-P_{2})}\xrightarrow{ \text{c.m. frame}}\frac{P_{\rm T}}{p_{\rm T}}=\frac{|\vec{P}|}{|\vec{p}|}\,, \tag{4}\] where we have assumed that the fragmentation is collinear in the center-of-mass (c.m.) frame of the collision. This definition of \(z\) is associated with the Jacobian factor in Eq. (1), \[J(\vec{p},\vec{P})=\sqrt{\frac{\vec{P}_{3}^{2}+M^{2}}{\vec{P}_{3}^{2}}\frac{ \vec{p}_{3}^{2}}{\vec{p}_{3}^{2}+m^{2}}}\,, \tag{5}\] where \(M\) is the meson mass, and the integration limit \(z^{\rm min}\) is \[z^{\rm min}=\frac{|\vec{P}_{3}|}{\sqrt{s/4-m^{2}}}\,. \tag{6}\] We note that the definition of the fragmentation variable \(z\) in Eq. (4) is a little different than the definition our earlier work [44] where we defined the fragmentation variable as \(z^{\prime}\equiv P_{3}\cdot(P_{1}+P_{2})/p_{3}\cdot(P_{1}+P_{2})\). In the c.m. frame this corresponds to the fraction of the heavy-quark energy carried by the meson, \(z^{\prime}=E_{\rm meson}/E_{Q}\). The problem of this definition is best visible when \(Y=y=0\), i.e. \(z^{\prime}=M_{\rm T}/m_{\rm T}\). The fragmentation functions are zero for \(z^{\prime}\geq 1\), which means that the partonic \(p_{\rm T}\) has a lower limit \(p_{\rm T}^{2}\geq P_{\rm T}^{2}+M^{2}-m^{2}\geq M^{2}-m^{2}\). In other words, heavy quarks at sufficiently low transverse momenta will not form heavy-flavoured mesons at all. The definition of Eq. (4) evades this problem but also other choices are possible [41, 52]. An issue like this admittedly falls outside the predictive power of collinear factorization and can be categorized as modeling the higher-twist effects associated with the hadronization. 
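For concreteness, the kinematic relations of Eqs. (2)-(6) can be encoded directly. The sketch below (function names are illustrative, all dimensionful quantities in GeV) simply evaluates the massive invariants, the integration limits, and the Jacobian factor.

```python
# Sketch of the kinematic relations used in the SACOT-mT convolution.
import math

def invariants(pT, y, m, x1, x2, sqrt_s):
    mT = math.sqrt(pT**2 + m**2)                  # transverse mass of the heavy quark
    tau1 = mT * math.exp(-y) / (x2 * sqrt_s)      # Eq. (2)
    tau2 = mT * math.exp(+y) / (x1 * sqrt_s)
    rho = m**2 / (x1 * x2 * sqrt_s**2)
    return tau1, tau2, rho

def x_limits(pT, y, m, x1, sqrt_s):
    mT = math.sqrt(pT**2 + m**2)
    x1_min = mT * math.exp(+y) / (sqrt_s - mT * math.exp(-y))          # Eq. (3)
    x2_min = x1 * mT * math.exp(-y) / (x1 * sqrt_s - mT * math.exp(+y))
    return x1_min, x2_min

def jacobian_and_zmin(P3, M, p3, m, sqrt_s):
    """P3, p3: magnitudes of the meson and quark three-momenta in the c.m. frame."""
    J = math.sqrt((P3**2 + M**2) / P3**2 * p3**2 / (p3**2 + m**2))     # Eq. (5)
    z_min = P3 / math.sqrt(sqrt_s**2 / 4.0 - m**2)                      # Eq. (6)
    return J, z_min

# Example: a b quark with pT = 5 GeV at y = 0 in a 13 TeV collision.
print(invariants(5.0, 0.0, 4.92, 1e-3, 1e-3, 13000.0))
```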
We have checked that for the results presented in the present paper, the differences between the two above versions of the fragmentation variable, \(z\) and \(z^{\prime}\), remains at most \(\sim 10\%\) at small values of \(p_{\rm T}\) - well below the uncertainties originating e.g. from the scale choices - and vanish completely at larger values of \(p_{\rm T}\). The partonic cross sections \(d\hat{\sigma}^{ij\to k+X}\) in GM-VFNS are subject to a scheme dependence [35] to accomplish a description valid at any \(p_{\rm T}\). In the SACOT-\(m_{\rm T}\) scheme [44] the processes in which the heavy quarks are explicitly produced from massless flavours, \[gg\to Q+X\,,\ \ qg\to Q+X\,,\ \ qq\to Q+X\,,\] are evaluated with partonic cross sections carrying the full heavy-quark mass dependence [3], renormalized in the \(\overline{\rm MS}\) scheme (see Sect. 3 of Ref. [36]). We will refer to these channels as being the "direct" ones. These fixed-order NLO cross sections contain logarithmic terms \(\log\rho\) which originate from (i) collinear radiation of gluons off a final-state heavy quark, (ii) collinear splitting of final-state gluons into a heavy quark-antiquark pair, and (iii) collinear splitting of initial-state gluons into a pair of heavy quark and antiquark. These logarithms can be resummed into the scale dependence of the heavy-quark PDFs \(f_{Q}(x,\mu_{\rm fact}^{2})\) and parton-to-meson FFs, \(D_{k\to h3}(z,\mu_{\rm frag}^{2})\). The resummation then gives rise to the contributions (i) with heavy quarks in the initial state and (ii) in which the fragmentation is initiated by a light parton. We will refer to these channels as being the "non-direct" ones. In our scheme, these processes are evaluated with the zero-mass (ZM) \(\overline{\rm MS}\) expressions for the partonic cross sections \(d\hat{\sigma}^{ij\to k+X}(\tau_{1}^{0},\tau_{2}^{0})|_{\rm ZM}\)[53], where \[\tau_{1}^{0}=\frac{p_{\rm T}e^{-y}}{x_{2}\sqrt{s}}\,,\ \ \ \ \tau_{2}^{0}=\frac{p_{\rm T}e^{y}}{x_{1}\sqrt{s}}\,, \tag{7}\] but replacing the massless variables \(\tau_{1}^{0}\) and \(\tau_{2}^{0}\) by the massive invariants \(\tau_{1}\) and \(\tau_{2}\) defined in Eq. (2). In summary, **Direct**: \[\frac{\mathrm{d}^{3}\hat{\sigma}^{ij\to k+X}}{\mathrm{d}^{2}p_{\rm T} \mathrm{d}y}\bigg{|}_{\rm SACOT-}\underset{\rm T}{\equiv}\frac{\mathrm{d}^{3} \hat{\sigma}^{ij\to k+X}(\tau_{1},\tau_{2},\rho)}{\mathrm{d}^{2}p_{\rm T} \mathrm{d}y}\,,\qquad ij\to k+X\in\left\{\begin{array}{l}gg\to Q+X\\ qg\to Q+X\\ qq\to Q+X\end{array}\right. \tag{8}\] **Non-direct**: \[\frac{\mathrm{d}^{3}\hat{\sigma}^{ij\to k+X}}{\mathrm{d}^{2}p_{\rm T} \mathrm{d}y}\bigg{|}_{\rm SACOT-}\underset{\rm T}{\equiv}\frac{\mathrm{d}^{3} \hat{\sigma}^{ij\to k+X}(\tau_{1},\tau_{2})}{\mathrm{d}^{2}p_{\rm T} \mathrm{d}y}\bigg{|}_{\rm ZM}\,,\qquad ij\to k+X\notin\left\{\begin{array}{l} gg\to Q+X\\ qg\to Q+X\\ qq\to Q+X\end{array}\right. \tag{9}\] To motivate the latter choice we note that to (i) retain the Lorentz invariance, and (ii) recover the zero-mass \(\overline{\rm MS}\) result in the \(p_{\rm T}\to\infty\) limit, Eq. (9) is a rather natural choice. 
It implicitly accounts for the fact that even in apparently massless production channels like \(gg\to g(\to Q\overline{Q})+X\), the final-state parton will eventually split into a heavy quark-antiquark pair, such that the relevant variables to describe the underlying process are the massive invariants \(\tau_{1,2}\), which account for the finite virtualities of the intermediate partons, rather than the massless ones \(\tau_{1,2}^{0}\). This choice also ensures that the cross sections remain finite in the \(p_{\rm T}\to 0\) limit. The subtractions in Eq. (1) associated with the initial-state radiation are obtained by replacing the heavy-quark PDFs \(f_{Q}(x,\mu_{\rm fact}^{2})\) by \[f_{Q}(x,\mu_{\rm fact}^{2})\longrightarrow\left(\frac{\alpha_{s}}{2\pi}\right)\log\left(\frac{\mu_{\rm fact}^{2}}{m^{2}}\right)\int_{x}^{1}\frac{dz}{z}P_{qg}\left(\frac{x}{z}\right)f_{g}(z,\mu_{\rm fact}^{2}) \tag{10}\] \[P_{qg}(z)=\frac{1}{2}\Big{[}z^{2}+(1-z)^{2}\Big{]} \tag{11}\] in the \[Qg\to Q+X\,,\ \ Qq\to Q+X\,,\] channels, and keeping terms up to \(\alpha_{s}^{3}\). Similarly, the subtractions associated with the final-state radiation are obtained by replacing the FFs by \[D_{Q\to h_{3}}(x,\mu_{\rm frag}^{2})\longrightarrow\left(\frac{\alpha_{s}}{2\pi}\right)\int_{x}^{1}\frac{dz}{z}d_{QQ}\left(\frac{x}{z}\right)D_{Q\to h_{3}}(z,\mu_{\rm frag}^{2})\,, \tag{12}\] \[D_{g\to h_{3}}(x,\mu_{\rm frag}^{2})\longrightarrow\left(\frac{\alpha_{s}}{2\pi}\right)\log\left(\frac{\mu_{\rm frag}^{2}}{m^{2}}\right)\int_{x}^{1}\frac{dz}{z}P_{qg}\left(\frac{x}{z}\right)D_{Q\to h_{3}}(z,\mu_{\rm frag}^{2})\,, \tag{13}\] \[d_{QQ}(z)=C_{f}\left\{\frac{1+z^{2}}{1-z}\left[\log\left(\frac{\mu_{\rm frag}^{2}}{m^{2}}\right)-2\log(1-z)-1\right]\right\}_{+}\,, \tag{14}\] in the \[gg\to Q+X\,,\ \ qg\to Q+X\,,\ \ qq\to Q+X\,,\] \[gg\to g+X\,,\ \ qg\to g+X\,,\ \ qq\to g+X\,,\] channels, and keeping terms up to \(\alpha_{s}^{3}\). The non-logarithmic terms in \(d_{QQ}\) are associated with the definition of the \(\overline{\rm MS}\) FFs in the presence of a finite quark mass [54, 55]. In addition, the fully massive calculation used to evaluate the direct contributions [3] is renormalized in the so-called decoupling scheme [56] in which the scale dependence of \(\alpha_{s}\) excludes the contributions from heavy-quark loops. To translate the results in the decoupling scheme to the usual \(\overline{\rm MS}\) scheme, additional terms are supplied, see Sect. 3 of Ref. [36]. There are three independent scales involved in our calculation - the renormalization, factorization and fragmentation scales. These are taken to be \[\mu_{i}=c_{i}\sqrt{P_{\rm T}^{2}+m^{2}}\,, \tag{15}\] where \(m\) is the heavy-quark mass and our default choice is \(c_{i}=1\), as in Ref. [44]. To chart the dependence of our results on this choice we repeat the calculations by taking \(c_{i}=0.5,1,2\), with the restriction \[\frac{1}{2}\leq\frac{\mu_{\rm ren}}{\mu_{\rm fact}}\leq 2\,,\quad\frac{1}{2}\leq\frac{\mu_{\rm ren}}{\mu_{\rm frag}}\leq 2\,. \tag{16}\] In total there are then 17 different scale combinations whose envelope we take as the scale uncertainty. We note that the FFs become scale independent for \(\mu_{\rm frag}\leq m\) and only \(D_{Q\to h_{3}}\) is non-zero in this regime. The heavy-quark PDFs are zero for \(\mu_{\rm fact}\leq m\). Consistently, no initial-state subtraction terms are included when \(\mu_{\rm fact}\leq m\), and no final-state subtraction terms are included when \(\mu_{\rm frag}\leq m\).
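As a small consistency check of this prescription, the sketch below enumerates the scale-factor combinations \(c_{i}\in\{0.5,1,2\}\) allowed by Eq. (16) and confirms that 17 of them survive.

```python
# Enumerate the scale combinations whose envelope defines the scale uncertainty.
from itertools import product

factors = (0.5, 1.0, 2.0)
combos = [
    (c_ren, c_fact, c_frag)
    for c_ren, c_fact, c_frag in product(factors, repeat=3)
    if 0.5 <= c_ren / c_fact <= 2.0 and 0.5 <= c_ren / c_frag <= 2.0
]
print(len(combos))   # -> 17 allowed (mu_ren, mu_fact, mu_frag) combinations
```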
We take the B-meson FFs from the Kniehl-Kramer-Schienbein-Spiesberger analysis (KKSS08) [39] which fits the SLD [57], OPAL [58], and ALEPH [59] data on B-meson production in \(e^{+}e^{-}\) annihilation near the Z-boson pole, \(\sqrt{s}=M_{\rm Z}\). Recently, also FFs at NNLO accuracy have become available [60]. It should be noted that in the KKSS08 analysis the bottom mass was taken to be \(m_{\rm b,FF}=4.5\,\)GeV, which differs from the values employed in the PDFs we will use in our calculations, \(m_{\rm b}=4.92\,\)GeV for NNPDF4 [16], and \(m_{\rm b}=4.75\,\)GeV for MSHT20 [15]. Since the data in KKSS08 analysis are taken at \(\sqrt{s}=M_{\rm Z}\), the exact value of the bottom-quark mass used there cannot be very critical. On the other hand, the PDF fits utilize much more data at lower interaction scales and are thus arguably much more sensitive to the quark masses (i.e. changing the input masses changes the PDFs). We thus find it better justified to use the mass values from PDFs in our calculations. To ensure the correct behaviour at the threshold, the final-state subtraction terms as well as e.g. the gluon FFs should vanish at the scale \(\mu_{\rm frag}=m_{\rm b}\). To enforce this, we always use the scale \(\mu^{2}=c_{i}\sqrt{P_{\rm T}^{2}+m_{\rm b,FF}^{2}}\) when calling the FFs. In the future, to avoid making such compromises, it would be useful to have the B-meson FFs available with the exact mass values utilized in the global PDF fits. In most of our calculations we will use the NNPDF4 partons [16] which constitute the most recent set. To investigate the mass dependence, we will use a special version of MSHT20, MSHT20nlo_mbrange_nf5 [61] which provides PDF fits with different bottom-quark masses including also the one which matches with \(m_{\rm b,FF}\). In the case of p-Pb collisions, we will use the EPPS21 nuclear PDFs [62] (with CT18A baseline proton PDFs [17]) for which \(m_{\rm b}=4.75\,\mathrm{GeV}\), and the nNNPDF3.0 nuclear PDFs [63] for which \(m_{\rm b}=4.92\,\mathrm{GeV}\). Figure 1: The \(13\,\mathrm{TeV}\) B\({}^{\pm}\)-meson data of the LHCb collaboration [32] in the rapidity window \(2.5<Y<3.0\) compared with the SACOT-\(m_{\rm T}\) calculation. The plot shows separately the full calculation (black solid), the direct i.e. light-parton to heavy quark production channels (green dashed), subtraction terms (yellow dotted-dashed), non-direct production channels (blue dashed), and the zero-mass calculation (purple dotted-dashed). The filled bands correspond to the uncertainty from the scale variation. ## 3 Results for proton-proton collisions To highlight the key features of the SACOT-\(m_{\rm T}\) setup we present, in Figure 1, the B\({}^{\pm}\) cross sections at \(\sqrt{s}=13\,\mathrm{TeV}\) in the rapidity window \(2.5<Y<3.0\) together with the experimental data from the LHCb collaboration [32]. The full SACOT-\(m_{\rm T}\) calculation follows the data very well and, in particular, reproduces the turnover at \(P_{\rm T}\approx 3\,\mathrm{GeV}\). The scale uncertainty is shown as the green band which is large at small values of \(P_{\rm T}\) but reduces to 10% at highest considered values of \(P_{\rm T}\). To illustrate how the B-meson cross section in our scheme builds from various components, Figure 1 also shows separately the contributions of direct terms, subtraction terms, and the non-direct parts in which there are either bottom quark(s) in the initial state or in which the fragmentation proceeds from a light parton. 
At low \(P_{\rm T}\) the direct part clearly dominates and, by construction, is the only contribution at \(P_{\rm T}=0\,\mathrm{GeV}\). As \(P_{\rm T}\) increases the subtraction terms approximate rather well the full direct contribution and eventually the net contribution of these two becomes rather small. When this happens, the contributions from initial-state heavy quarks and light-parton fragmentation are the dominant ones. With our default choice of scales, this begins to happen already around \(P_{\rm T}\approx m_{\rm b}\). Arguably, the collinear logarithms \(\sim\log\left(p_{\rm T}^{2}/m^{2}\right)\) are not yet particularly large at such values of \(P_{\rm T}\), so their resummation should not be too big of an effect either.

Figure 2: Comparison of D\({}^{0}\)- (upper curves) and B\({}^{\pm}\)-meson (lower curves) production in the SACOT-\(m_{\rm T}\) scheme at \(\sqrt{s}=13\,\mathrm{TeV}\), \(2.5<Y<3.0\). Black solid curves correspond to the full calculation and the green dashed curves to the contributions from the direct (i.e. light parton to heavy quark) production channels. The orange dashed-dotted curves are the subtraction terms and the filled bands show the uncertainty from the scale variation. The data are from the LHCb collaboration [32, 64]. For clarity, the data and curves corresponding to the D\({}^{0}\) mesons have been multiplied by a factor of 10.

However, even if the resummation would not yet be a large effect, the non-direct contributions can be significant as e.g. the \(gg\to gg\) matrix element that enters the contribution from gluon fragmentation carries a large colour factor, which increases its importance even if the associated logarithm would not yet be particularly large. For \(P_{\rm T}\approx m_{\rm b}\) and higher, the full calculation is significantly above the direct part. On one hand this is due to the \(\alpha_{s}^{3}\) terms in the contributions with initial-state heavy quarks or light-parton fragmentation, which also partly catch the NNLO contributions to the fixed-order calculation which are now known to be important [5]. On the other hand, towards higher values of \(P_{\rm T}\) the resummation of the collinear logarithms also gradually becomes a more important effect. The scale variations result in a significant uncertainty band. Part of this large uncertainty is related to the fact that the scale choice also controls the relative importance of the direct, subtraction, and non-direct contributions. For example, with \(c_{i}=1/2\) only the direct part contributes up to \(P_{\rm T}=\sqrt{3}m_{\rm b}\approx 8.5\,{\rm GeV}\), whereas with our default choice of scales it is the non-direct part that clearly dominates at \(P_{\rm T}=8.5\,{\rm GeV}\).

Figure 3: The \(13\,{\rm TeV}\) B\({}^{\pm}\)-meson data of the LHCb collaboration [32] in the rapidity window \(2.5<Y<3.0\) compared with the SACOT-\(m_{\rm T}\) calculation with different bottom-quark masses \(m_{\rm b}\). The calculation uses the MSHT20nlo_mbrange_nf5 partons [61] which are available for \(m_{\rm b}=4.0\ldots 5.50\,{\rm GeV}\). The scale uncertainty band was evaluated with \(m_{\rm b}=4.75\,{\rm GeV}\).

The result of a fully zero-mass calculation, but still adopting our default choice of scales, is shown in Figure 1 as well. We see that the zero-mass calculation agrees rather well with the full SACOT-\(m_{\rm T}\) result already at \(P_{\rm T}\gtrsim 2m_{\rm b}\), though the residual mass effects die out rather slowly in \(P_{\rm T}\).
Towards lower values of \(P_{\rm T}\) the NLO zero-mass cross section not only diverges but goes also negative due the spurious behaviour of the zero-mass coefficient functions. The observations made above are reminiscent of those we found earlier for D mesons [44] but the effects of heavy-quark mass simply persist up to higher values of \(P_{\rm T}\). This is illustrated in Figure 2 where we plot the \({\rm D}^{0}\) results in the same figure. For the D-meson data, the turnover happens at lower \(P_{\rm T}\) in comparison to the B-meson case. This behaviour is also well reproduced by our default scale choice - a larger quark mass more strongly "screens" the partonic propagators due to larger virtuality and shifts the turnover to larger \(P_{\rm T}\). One can also clearly see that - in our scheme and the default choice of scales - the subtraction terms approximate well the contributions from the direct production channels for D mesons immediately above zero \(P_{\rm T}\), whereas for B mesons the cancellation between the subtraction terms and the direct production channels is shifted to larger \(P_{\rm T}\). In comparison to the D-meson results, and perhaps a little surprisingly, the scale uncertainty remains larger for B mesons up to higher \(P_{\rm T}\) although the associated QCD scales are larger. The reason is that for the B-meson production the interplay between between various components (direct, non-direct, subtraction) remains non-trivial up to higher values of \(P_{\rm T}\) and the dependence of this interplay on the scale choices results in a larger scale uncertainty. In the case of D mesons, the non-direct components quickly dominate the cross section with all considered scale choices. Notice that here we have limited the scales from below by the charm mass to make sure that they stay above the initial scales of the PDF analyses. The dependence of our calculations on the adopted set of PDFs with different bottom-quark masses is illustrated in Figure 3. Instead of NNPDF4.0, we have used here the MSHT20nlo_mbrange_nf5 partons [61]. In this latter analysis, the authors repeated the MSHT20 [15] global PDF fit seven times varying the bottom-quark mass in the range \(m_{\rm b}=4.0\ldots 5.50\,{\rm GeV}\). By using these PDF sets, we can thus study the bottom-quark dependence of our calculation with the proper behaviour of PDFs (i.e. vanishing bottom-quark) at the threshold \(\mu_{\rm fact}=m_{\rm b}\). The largest differences appear at low \(P_{\rm T}\) where the bottom-quark thus plays the most significant role. We see that adopting a smaller bottom-quark mass leads to an increased cross section as the "mass screening" in the propagators decreases. Decreasing the bottom-quark mass can be also seen to slightly shift the maximum of the \(P_{\rm T}\) spectrum towards lower values of \(P_{\rm T}\). The mass dependence is still clearly inferior to the scale dependence of our results i.e. within the scale uncertainties all the shown PDFs agree with the LHCb data (note, however, that variations in the mass also affect the scale choices). Figures 4 and 5 show the comparisons with the LHCb 7 TeV and 13 TeV [32] data using NNPDF4.0 PDFs. In both cases the predictions agree very well with the data throughout the wide rapidity range. The uncertainties from NNPDF4.0 are small (not much larger than the line width) in contrast to the scale uncertainties and are therefore not shown. We consider also the cross section ratios between the collision energies of 13 TeV and 7 TeV. 
The LHCb paper [32] does not contain these ratios separately for different rapidity bins, and we have therefore formed the ratios ourselves from the tabulated cross sections. The statistical and systematical uncertainties have been added in quadrature apart from the 3.9% systematic uncertainty on the \(B^{\pm}\to J/\psi K^{\pm}\) branching fraction (the decay mode measured by the LHCb), which has been canceled out. The results are shown in Figure 6. The uncertainties due to the scale choices are vastly smaller in these ratios in comparison to the absolute cross sections. The systematics of the data are well reproduced by the calculation. Despite the smaller scale uncertainties, they are still larger than the PDF-originating uncertainties, at least for NNPDF4.0 which we use here. Finally, Figure 7 presents the ATLAS 7 TeV data [29] and the CMS midrapidity data at \(\sqrt{s}=5\,\mathrm{TeV}\) [21], \(7\,\mathrm{TeV}\) [25], and \(13\,\mathrm{TeV}\) [30]. These data do not reach to the low-\(P_{\mathrm{T}}\) region where most of the bottom-quark mass effects reside but instead extend to higher values of \(P_{\mathrm{T}}\) and provide therefore a complementary validation of our computational setup. The dependence of experimental cross sections on the c.m. energy, \(P_{\mathrm{T}}\) and rapidity is again well reproduced by the calculation.

Figure 5: The \(13\,\mathrm{TeV}\) \(\mathrm{B}^{\pm}\)-meson data of the LHCb collaboration [32] compared with the SACOT-\(m_{\mathrm{T}}\) calculation. Each panel corresponds to a different rapidity window. The green solid curves show the results of our central scale choice \(\mu_{i}=\sqrt{P_{\mathrm{T}}^{2}+m_{\mathrm{b}}^{2}}\), and the light-green filled bands correspond to the uncertainty due to the scale variations.

Finally, we wish to illustrate the differences between our SACOT-\(m_{\mathrm{T}}\) scheme and other approaches. To this end, Figure 8 presents a comparison in which we have divided the FONLL [11, 36, 65] and fixed-order NLO calculations with the SACOT-\(m_{\mathrm{T}}\) predictions. The FONLL and fixed-order NLO predictions have been taken from the web interface of Ref. [66] selecting the NNPDF3.0 proton PDFs [67].

Figure 6: Ratios between the 13 TeV and 7 TeV \(\mathrm{B}^{\pm}\)-meson data of the LHCb collaboration [32] compared with the SACOT-\(m_{\mathrm{T}}\) calculation. Each panel corresponds to a different rapidity window. The green solid curves show the results of our central scale choice \(\mu_{i}=\sqrt{P_{\mathrm{T}}^{2}+m_{\mathrm{b}}^{2}}\), and the light-green filled bands correspond to the uncertainty due to the scale choice.

The coloured bands show the uncertainties due to the scale variations which, in the case of FONLL and the fixed-order calculation, include only variations of the factorization and renormalization scales with 5 different combinations.
Figure 7: The 7 TeV B\({}^{\pm}\)-meson data of the ATLAS collaboration [29] (upper panels), and the \(5\dots 13\) TeV B\({}^{\pm}\)-meson data of the CMS collaboration [21, 25, 30] (lowest panel) compared with the SACOT-\(m_{\rm T}\) calculation. Each panel corresponds to a different rapidity window. The green solid curves show the results of our central scale choice \(\mu_{i}=\sqrt{P_{\rm T}^{2}+m_{\rm b}^{2}}\), and the light-green filled bands correspond to the uncertainty due to the scale choice.

The FONLL cross section for heavy-quark production is, schematically, of the form \[d\sigma_{q}^{\text{FONLL}}=d\sigma^{\text{fixed order}}+\frac{p_{\text{T}}^{2}}{p_{\text{T}}^{2}+c^{2}m_{q}^{2}}\big{(}d\sigma^{\text{resummed}}-\text{subtractions}\big{)}\,, \tag{17}\] where the default choice \(c=5\) has been applied, and which is still folded with a scale-independent fragmentation function to obtain the spectrum of heavy-flavoured mesons. While the fixed-order part includes only those contributions in which the heavy quarks are explicitly produced, the resummed part performs the same resummation of collinear logarithms as the SACOT-\(m_{\text{T}}\) scheme. The subtraction terms ensure that no double counting takes place. The principal difference with respect to the SACOT-\(m_{\text{T}}\) scheme is that the resummed part uses pure zero-mass coefficient functions which diverge towards zero \(p_{\text{T}}\). The factor \(p_{\text{T}}^{2}/(p_{\text{T}}^{2}+c^{2}m_{q}^{2})\) is there to tame the divergence. The constant \(c\) controls how quickly the resummation is allowed to kick in as a function of \(p_{\text{T}}\). The default FONLL predictions, however, do not involve an uncertainty due to the variations of the constant \(c\). At low \(p_{\text{T}}\) the FONLL predictions match the fixed-order calculations and show a clearly smaller scale uncertainty in comparison to the SACOT-\(m_{\text{T}}\) scheme. This is due to the fact that FONLL suppresses the contributions from the resummed part (which comes with a large scale uncertainty at low \(P_{\text{T}}\)) by choosing a large enough \(c\). This is of course well justified in the sense that at low \(P_{\text{T}}\) the collinear logarithms are not yet large and thus their resummation cannot be a big effect either. However, we recall that by including the \(\mathcal{O}(\alpha_{s}^{3})\) terms in the resummed cross sections, they also effectively contain contributions from the fixed-order NNLO calculations which can be significant even if the resummation of the associated logarithms is not yet crucial.

Figure 8: Ratios of FONLL (red band) and fixed-order NLO (blue band) cross sections with respect to the SACOT-\(m_{\text{T}}\) calculation (green band) in the rapidity window \(2.5<Y<3.0\) at \(\sqrt{s}=13\,\text{TeV}\). The FONLL and fixed-order results use the NNPDF3.0 proton PDFs. The widths of the bands correspond to the scale uncertainties of each calculation.

Moving towards somewhat higher values of \(P_{\rm T}\), the scale uncertainty of the SACOT-\(m_{\rm T}\) scheme quickly diminishes and eventually becomes smaller than that of FONLL, starting from \(P_{\rm T}\approx 3m_{\rm b}\) or so. This indicates that the resummation begins to have an effect at such values of \(P_{\rm T}\), but the chosen value of \(c\) in FONLL still keeps the fixed-order contribution (with a larger scale uncertainty) significant. 
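To visualize how the constant \(c\) in Eq. (17) delays the onset of the resummed contribution, the sketch below evaluates the matching factor \(p_{\mathrm{T}}^{2}/(p_{\mathrm{T}}^{2}+c^{2}m_{q}^{2})\) for the bottom quark with the default \(c=5\); the value \(m_{\mathrm{b}}=4.75\,\mathrm{GeV}\) is assumed here purely for illustration:

```python
def fonll_matching_factor(pt, c=5.0, m_q=4.75):
    """Weight of the resummed part in Eq. (17): p_T^2 / (p_T^2 + c^2 m_q^2)."""
    return pt**2 / (pt**2 + (c * m_q)**2)

# The factor reaches 1/2 only at p_T = c*m_q (roughly 24 GeV for bottom),
# so the resummed part is strongly suppressed in the low-p_T region.
for pt in (2.0, 5.0, 10.0, 25.0, 50.0, 100.0):
    print(f"p_T = {pt:6.1f} GeV -> factor = {fonll_matching_factor(pt):.3f}")
```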
At the high-\(P_{\rm T}\) end both FONLL and SACOT-\(m_{\rm T}\) display a scale uncertainty which is approximately the same for both and clearly smaller than the scale uncertainty of the fixed-order predictions.

## 4 Results for proton-nucleus collisions

The D-meson production in p-Pb collisions [68] has been used as a constraint in the EPPS21 [62] and nNNPDF3.0 [63] fits of nuclear PDFs. The theoretical framework in the EPPS21 analysis was the one discussed here, SACOT-\(m_{\rm T}\), while the nNNPDF3.0 analysis used a fixed-order POWHEG calculation [48, 49, 50] matched to the PYTHIA [69] parton shower. The differences between the two approaches were discussed in Ref. [44]. Heavy-flavour observables have also been studied in a recent variant of the nCTEQ15 analysis [70], but using considerable simplifications of the partonic matrix elements and kinematics. Specifically, it is the nuclear modification \[R_{\rm pPb}=\frac{d^{2}\sigma^{\rm p\text{-}Pb}/dYdP_{\rm T}}{d^{2}\sigma^{\rm p\text{-}p}/dYdP_{\rm T}}\,, \tag{18}\] for D-meson production that enters the EPPS21 and nNNPDF3.0 analyses. In such a ratio, most of the scale uncertainties in the SACOT-\(m_{\rm T}\) scheme were observed to cancel between the numerator and denominator, though some dependence persists, particularly at low \(P_{\rm T}\) [13]. To be on the safe side, EPPS21 imposed a cut \(P_{\rm T}>3\,\mathrm{GeV}\). In the POWHEG+PYTHIA approach the scale uncertainties in \(R_{\rm pPb}\) at high \(P_{\rm T}\) were observed to be much larger than in the SACOT-\(m_{\rm T}\) scheme [63]. The nNNPDF3.0 analysis nevertheless included the D-meson data without any restrictions in \(P_{\rm T}\), excluding, however, the p-Pb data at backward rapidities (\(Y<0\)). In both cases, the inclusion of the LHCb data [68] led to a significant reduction of the nuclear-PDF uncertainties at small \(x\). In this section we will now use these D-meson-constrained nuclear PDFs to predict the nuclear modification ratios for B-mesons and see whether the predictions agree with the recent LHCb data [34]. Before comparing with the data we study the relative size of the PDF and scale uncertainties in the B-meson \(R_{\rm pPb}\). This is done in Figure 9, in which the relative scale and 90% PDF uncertainties for \(R_{\rm pPb}\) are shown. For EPPS21, the PDF uncertainty is calculated according to the Hessian prescription, see Sect. 4.3 of Ref. [62], whereas the 90% nNNPDF3.0 uncertainty is calculated by rejecting the predictions of those replicas that constitute the 10% most extreme predictions, see Sect. 7.2 of Ref. [63]. In both cases, the correlations between the nuclear and proton PDFs are accounted for. The full uncertainty band combines the PDF and scale uncertainties in quadrature. The scale uncertainties are the largest at low values of \(P_{\rm T}\) and they are very similar between EPPS21 and nNNPDF3.0. In the case of EPPS21 the PDF uncertainties are always clearly larger than those induced by the scale variations. The nNNPDF3.0 PDF uncertainties are, however, systematically smaller than those of EPPS21 and in places the scale uncertainty competes with and even exceeds the PDF uncertainty. The fact that the nNNPDF3.0 uncertainties are generally smaller than those of EPPS21 is presumably mostly due to the methodological differences between these two PDF analyses [71]. Figure 10 shows how our calculations using the EPPS21 and nNNPDF3.0 nuclear PDFs compare against the LHCb \(\mathrm{B}^{\pm}\)-meson data at \(8.16\,\mathrm{TeV}\) [34]. 
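As a concrete illustration of Eq. (18) and of how the full uncertainty band is assembled, the sketch below forms \(R_{\mathrm{pPb}}\) for a single \((Y,P_{\mathrm{T}})\) bin and combines symmetric relative PDF and scale uncertainties in quadrature; the numbers are placeholders, not our EPPS21 or nNNPDF3.0 results:

```python
import math

def r_ppb(dsigma_ppb, dsigma_pp):
    """Nuclear modification factor of Eq. (18) for one (Y, P_T) bin."""
    return dsigma_ppb / dsigma_pp

def combined_band(central, pdf_err, scale_err):
    """Combine relative PDF and scale uncertainties in quadrature (a
    simplified, symmetric version of how the full band is built)."""
    rel = math.hypot(pdf_err, scale_err)
    return central * (1.0 - rel), central * (1.0 + rel)

# placeholder cross sections (nb/GeV) and relative uncertainties
central = r_ppb(dsigma_ppb=7.2, dsigma_pp=8.5)
print(central, combined_band(central, pdf_err=0.06, scale_err=0.03))
```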
In the backward direction \(-3.5<Y<-2.5\) (\(Y\) referring to the rapidity of the meson in the nucleon-nucleon center-of-mass frame) one probes predominantly the large-\(x\) part of the nuclear PDFs where there is an enhancement (antishadowing) in comparison to the proton PDFs. In the forward direction \(2.5<Y<3.5\) it is the small-\(x\) regime of the nuclear PDFs that matters the most, where the nuclear PDFs are suppressed (shadowing) in comparison to the proton PDFs. The LHCb data are broadly consistent with these expectations and quantitatively agree with both EPPS21 and nNNPDF3.0. In particular, the data in the forward direction are more precise than the EPPS21 predictions and could possibly give some additional constraints in a global analysis of nuclear PDFs, though the statistical weight of these B-meson data will be small in a global \(\chi^{2}\) analysis. The lower panel still shows the forward-to-backward ratio \[R_{\mathrm{FB}}=\frac{d^{2}\sigma^{\mathrm{p-Pb}}(Y>0)/dYdP_{\mathrm{T}}}{d^{2}\sigma^{\mathrm{p-Pb}}(Y<0)/dYdP_{\mathrm{T}}}\,, \tag{19}\] which requires no p-p baseline measurement. We also show the \(\mathrm{B}^{0}\) measurement - our calculation is identical for \(\mathrm{B}^{\pm}\) and \(\mathrm{B}^{0}\) (the KKSS08 FFs are the same for these two species). Note that the data in the forward and backward directions come from separate LHC runs with different beam configurations, so the luminosity uncertainties do not cancel. Our predictions are found to agree with the data also here. The data perhaps hint towards a stronger \(P_{\rm T}\) dependence, but a more precise measurement is still required to confirm this in a statistically significant way, as notable fluctuations are seen in the LHCb data for \(R_{\rm pPb}\) in this \(P_{\rm T}\) region.

Figure 9: The scale (dotted) and 90% PDF (dashed) uncertainties of the B-meson nuclear modification factors in p-Pb collisions. The upper panels correspond to the EPPS21 PDFs [62] and the lower panels to the nNNPDF3.0 [63] PDFs.

## 5 Conclusion and Outlook

In summary, we have extended the NLO SACOT-\(m_{\rm T}\) scheme, originally introduced in the context of D-meson production, to the case of B-meson production at the LHC. In the original version we had defined a fragmentation variable that could lead to a pathological behaviour in certain corners of the phase space - a better version introduced in the present paper evades this problem. We contrasted our calculations against the proton-proton data from the LHCb, ATLAS and CMS collaborations, finding a very good agreement within theoretical uncertainties originating from the variations of the renormalization/factorization/fragmentation scales and the bottom-quark mass. Notably, the shift in the position of the peak value in the \(P_{\rm T}\) spectra when increasing the heavy-quark mass is naturally reproduced with our default setup. We found a good agreement also with the data in the high-\(P_{\rm T}\) region where the scale variations play a smaller role.

Figure 10: The nuclear modification factors (upper panels) and the forward-to-backward ratio (lower panel) for B mesons. The coloured bands correspond to the EPPS21 [62] (blue) and nNNPDF3.0 [63] (purple) nuclear-PDF uncertainties. The data are from Ref. [34].

To get some more insight on different GM-VFNS schemes, we compared our results to the FONLL approach 
and concluded that the somewhat different evolution of scale uncertainties as a function of \(P_{\rm T}\) can be attributed to a different regulation of massless coefficient functions which also controls the relative contribution of direct and non-direct production channels. While the scale uncertainties can be large in the case of absolute cross sections, they are strongly suppressed e.g. in ratios of cross sections between different c.m. energies or ratios between different collision systems, which are then much more sensitive to the underlying proton and/or nuclear structure. In particular, we considered the nuclear modification \(R_{\rm pPb}\) and the forward-to-backward ratio \(R_{\rm FB}\) by using the EPPS21 and nNNPDF3.0 nuclear PDFs. The predictions agree very well with the data from the LHCb collaboration, lending further support for the universality of nuclear PDFs. Having now tested the SACOT-\(m_{\rm T}\) scheme in the case of inclusive D- and B-meson production, we plan to extend our framework also to include the decays of these open heavy flavours. In many cases the decay particles - e.g. the J/\(\psi\) spectrum from B mesons - can be measured with a significantly greater accuracy than the fully reconstructed D or B mesons. This would then open e.g. the possibility to include the corresponding \(R_{\rm pPb}\) data to the global fits of nuclear PDFs without resorting to simplifying approximations made in other works and provide more constraints for small-\(x\) gluon shadowing in heavy nuclei. In addition, now that the fixed-order NNLO calculations for \(b\overline{b}\) production are/will soon be publicly available, it begins to be possible to increase the accuracy of GM-VFNS in hadroproduction to include higher-order perturbative contributions. ## Acknowledgements Our work has been supported by the Academy of Finland, projects 308301 and 331545, and was funded as a part of the Center of Excellence in Quark Matter of the Academy of Finland, project 346326. The reported work is associated with the European Research Council project ERC-2018-ADG-835105 YoctoLHC. The computing resources have been brought to us by the Finnish IT Center for Science (CSC), under the project jyy2580.
2309.13481
Offline to Online Learning for Personalized Bandwidth Estimation
In this work, we tackle the problem of bandwidth estimation (BWE) for real-time communication systems through expert personalization. While expert heuristic-based methods have been widely adopted, tailoring these methods for each and every end user environment is cumbersome due to the level of domain expertise and manual effort required to adjust the carefully tuned heuristic parameters. Thus, we propose Merlin, a data-driven solution to BWE that harnesses expert demonstrations from prior heuristic-based methods to extract an expert BWE policy. The extracted policy can then be finetuned to end user network conditions to improve user quality of experience (QoE). In real-world videoconferencing calls, Merlin matches our expert's policy with no statistically significant movements in terms of objective QoE metrics. Additionally, we show that personalizing Merlin's control policy is possible through a small number of online data-driven parameter updates.
Aashish Gottipati, Sami Khairy, Gabriel Mittag, Vishak Gopal, Ross Cutler
2023-09-23T21:39:51Z
http://arxiv.org/abs/2309.13481v2
# Real-time Bandwidth Estimation from Offline Expert Demonstrations ###### Abstract In this work, we tackle the problem of bandwidth estimation (BWE) for real-time communication systems; however, in contrast to previous works, we leverage the vast efforts of prior heuristic-based BWE methods and synergize these approaches with deep learning-based techniques. Our work addresses challenges in generalizing to unseen network dynamics and extracting rich representations from prior experience, two key challenges in integrating data-driven bandwidth estimators into real-time systems. To that end, we propose Merlin, the first purely offline, data-driven solution to BWE that harnesses prior heuristic-based methods to extract an expert BWE policy. Through a series of experiments, we demonstrate that Merlin surpasses state-of-the-art heuristic-based and deep learning-based bandwidth estimators in terms of objective quality of experience metrics, while generalizing beyond the offline world to in-the-wild network deployments where Merlin achieves a 42.85% and 12.8% reduction in packet loss and delay, respectively, when compared against WebRTC in inter-continental videoconferencing calls. We hope that Merlin's offline-oriented design fosters new strategies for real-time network control. ## 1 Introduction Estimating the optimal rate of information flow is essential to ensuring congestion-free network communication. The bottleneck link- the link with the least available bandwidth-dictates the rate of information flow across the network. Estimating the bottleneck link refers to the problem of bandwidth estimation (BWE)- a challenging and active area of networking research. BWE is fundamental to real-time communication (RTC) systems and lies at the heart of network systems. Without accurate bandwidth estimates, seamless network communication becomes nearly impossible. The challenge of BWE emerges from the complex and dynamic nature of network environments [52]. First, network flows change over time as devices come and go and users change applications, resulting in a non-stationary environment with the bottleneck link varying over time. Second, the bottleneck link often lies beyond the first hop and cannot be probed instantaneously. Third, network environments are partially observable. That is, many outside factors such as cross-traffic impact the bottleneck link and cannot be directly controlled. To chip away at these challenges, early bandwidth estimators in RTC relied mainly on Real-time Transport Protocol (RTP) [46], which probes the network and aggregates receive-side network statistics. By periodically probing the network via RTP, the effects of non-stationarity and partial observability can be mitigated when estimating the capacity of the bottleneck link. The simplicity of RTP enables portability; however, it limits the quality of estimates produced [28], leading to the widespread adoption of more sophisticated heuristic-based methods [7]. Based on aggregated network statistics, rule-based estimators such as WebRTC [4] leverage statistical models to estimate the available bandwidth. While these methods have been widely adopted, increasing network heterogeneity and complexity necessitates more sophisticated methods. For example, RTC applications such as videoconferencing require high bandwidth and low latency while passive internet-of-things (IoT) monitoring systems require low bandwidth. Both of these flows, while disparate in their requirements, compete for resources in the network core. 
Scaling to millions or even billions of flows, resources quickly become scarce while flow interactions perturb the network. To better serve users and meet growing resource demands, we require fine-grain network control, i.e., instantaneous adaption to network changes. However, prior rule-based methods tend to follow longer-term network trends to promote smooth estimates [4]. Additionally, heuristic-based models were hand-crafted based on extensive domain knowledge, making them difficult to adapt to the ever-changing network landscape. Lastly, even with domain knowledge, network complexity is quickly outstripping human intuition, e.g., configuring a simple TCP session has dozens of free variables [9]. To enable future network applications, we require methods that react to instantaneous network changes, cope with the growing complexity of networks, and are easy to update. Within recent years, deep learning-based models have demonstrated impressive ability for real-time adaption under complex domains, while enabling ease of updates through enhanced input data and fine-tuned objective functions [39]. Although these properties are desirable, real networks tend to be extremely diverse, making data-driven adoption difficult. In contrast to other deep learning methods, reinforcement learning (RL) seeks to learn a policy. The learned policy incorporates environment dynamics, enabling the learned agent to grasp not only which control actions are positive but which are negative as well. RL utilizes exploration to search over control policies, enabling the agent to try unconventional strategies and discover robust network control policies [24]. Even so, RL agents are conventionally trained from a blank slate in online environments, neglecting the vast amount of knowledge encoded in previous heuristic-based methods, and requiring a large number of training samples to converge to an acceptable policy. In the case of videoconferencing, the large sample complexity translates to hundreds of thousands of videoconferencing calls and extremely long convergence times [13]. Second, and most crucially, in videoconferencing we seek to learn a policy to maximize user quality of experience (QoE); however, defining a reward for subjective user experience with objective network metrics is difficult and remains an open research problem. Furthermore, without a well-defined reward function, agents may exploit the reward definition and maximize the expected return without learning the desired task- a phenomenon known as reward hacking [19]. To that end, we desire a method for real-time BWE that exhibits the benefits of deep RL but leverages the vast efforts of prior domain experts; thus, we turn towards offline imitation learning (IL) methods. IL differs from RL in that it seeks to learn a policy from a known expert, i.e., given a set of offline demonstrations (expert states and actions), extract a policy that best fits the expert. IL builds upon the vast efforts of previous domain knowledge encoded within heuristic-based experts and enables many of the benefits of RL. However, there is no free lunch. Imitating an expert for real-world BWE suffers from two distinct challenges. First, extracting a policy from a finite set of expert demonstrations does not necessarily result in the true generalizable, expert policy. Without extracting the true policy, the agent is likely to introduce compounding errors under unseen dynamics, severely degrading user QoE [60]. 
Secondly, handcrafted expert feature representations may not necessarily translate directly to data-driven methods [14]. Consequently, we propose Merlin, the first purely offline, data-driven solution for BWE that harnesses the encoded domain knowledge of prior methods to extract an expert BWE policy. Merlin is trained to imitate an expert Unscented Kalman Filter (UKF) model strictly from offline, simulated demonstrations via behavioral cloning (BC), a method that reduces policy learning to supervised learning. **We emphasize that no network interactions are required to train Merlin and data is collected once before training.** Furthermore, as expert demonstrations are sampled strictly from simulation, **we require no specialized hardware or testbed environments to generate data for our agent**, democratizing access to learning-based network control. We rigorously evaluate Merlin in simulated, testbed, and wild environments. The highlights of our evaluations are as follows. We find that our IL model outperforms the state-of-the-art heuristic-based methods as well as the state-of-the-art RL-based methods in terms of objective QoE metrics within our emulated testbed. Furthermore, we demonstrate that our imitator is robust to domain shifts and is capable of mimicking our expert UKF model with high confidence in emulated environments. We further support our claims with in-the-wild deployments, where Merlin achieves a 42.85% and 12.8% reduction in packet loss and delay, respectively, while preserving a higher receive rate in comparison to WebRTC in inter-continental videoconferencing calls. Lastly, we observe that the reported receive rate and media type features are critical to extracting a control policy. Increasing the number of demonstrations appears to aid in mitigating domain shift but was not directly reflected in IL loss metrics. Leveraging temporal correlations via recurrent models tends to perform better than non-recurrent methods. Finally, when expert demos are abundant, BC tends to outperform more sophisticated IL methods. In summary, our contributions are as follows:

1. We demonstrate a new method for BWE that utilizes IL to extract expert policies from purely offline data, leveraging the extensive domain expertise encoded in prior hand-crafted network heuristics.
2. We rigorously evaluate Merlin in simulated, emulated, and real-world conditions to study the generalization of cloned policies. Our analysis provides insights into achieving robustness to distribution shift, a key challenge in adapting data-driven models.
3. We conduct multiple ablation studies to uncover IL best practices for network control. Our guidelines on features, demonstrations, and architectures provide a recipe for future research at the intersection of machine learning and networking.
4. We discuss the broader potential of learned policies to transition toward data-driven control for non-stationary networked systems. IL shows promise for flexibly modeling complex network dynamics offline.

Overall, we position this work as advancing the integration of machine learning techniques into networking systems.

## 2 Related Work

Congestion control (CC) solutions broadly seek to promote packet flow, reducing network congestion while encouraging maximum bandwidth utilization. These approaches often employ BWE to measure the available link capacity and set the send rate accordingly. 
Classical techniques for BWE have traditionally relied on various forms of packet probing techniques [21]; however, these methods are fundamentally limited by probing overhead which may itself contribute to congestion. More modern approaches employed sophisticated statistical techniques to estimate bandwidth [27, 56]. Most notably, implementations of WebRTC [4] utilize a Kalman Filter for BWE and have become the de facto standard for RTC. While widespread, heuristic-based methods tend to be conservative, matching long-term trends rather than tracking instantaneous network perturbations [7, 13]. Other methods have sought to take a broader approach to CC for RTC systems. For example, enforcing a close coupling between video codecs and the transport layer has been shown to reduce network congestion by jointly adapting the encoding rate of video frames and the transport layer send rate [16, 69]. More recent endeavors have shifted to machine learning and deep learning techniques to estimate bandwidth by exploiting the structured signal present in network data [55, 59, 47, 25]. While traditional deep learning techniques tend to perform well for static tasks such as classification, they struggle to adapt to more dynamic control tasks such as CC, which requires learned dynamics to inform congestion decisions [1]. Incorporating network dynamics enables learning richer BWE and CC policies. Accordingly, recent works have sought to apply reinforcement learning to CC [5, 18, 22, 33, 40, 43, 48, 57, 58, 67]. Mao et al. [38] learn a policy to dynamically tune the transmit rate of video packets and deploy their learned model on Facebook's web-based video streaming platform; however, their area of focus is confined to video-on-demand and not RTC systems. On the other hand, the first RL-based bandwidth predictor for RTC systems, R3Net [13], relied exclusively on deep RL to produce bandwidth estimates, neglecting prior domain expertise. Other novel RL-based approaches to CC exploit cross-layer features to inform policy decisions [30, 34, 35, 36, 37, 66]; however, enforcing a close-coupling between layers restricts architecture design and reduces modularity. In contrast, recent works have sought to leverage expert algorithms for model training. Eagle [11] adopts an expert-oriented approach, seeking to match BBR [6] via online RL. DuGu [23] utilizes an online IL approach to mimic a custom CC oracle for short-term video uploads. Zhou et al. propose Concerto [68], a BC-based method that leverages oracle estimates to select the best video bitrate from a discrete set of encodings; however, we emphasize we focus on learning to estimate the available network resources from offline domain-expertise collected from prior heuristic-based methods. Later works such as Gemini [61], HRCC [54], HybridRTS [65], BoB [3], SAFR [62], OnRl [64], and Libra [10] build non-standalone estimators directly on top of heuristic-based methods or utilize prior methods as fall-back mechanisms to mitigate tail-end predictions of learned agents. Along similar lines, Zhang et al. explore fusing cloned experts with online RL-based models for video bitrate prediction [63]. In contrast to these works, we propose Merlin a standalone, data-driven approach to BWE that does not rely on auxiliary estimators such as Google's Congestion Control (GCC) algorithm for BWE. Lastly, the most similar work to ours is Sage [60]. Sage builds upon the vast efforts of prior methods for learning a purely, data-driven TCP-based CC algorithm via offline RL. 
In contrast to Sage, we emphasize that we tackle the problem of BWE for RTC systems rather than CC for TCP-based applications; hence, the dynamics of our problem are dissimilar, e.g., TCP-based systems maintain reliability while RTC systems exchange reliability for reduced latency. Furthermore, no specialized testbed equipment nor even emulated interfaces are required for model training. Merlin is trained completely from offline simulated experience, enabling others to readily build on our method. In summary, Merlin differs in fundamental ways from prior works. First, we do not utilize RL or hybrid methods and rely on IL to learn a standalone BWE policy. Second, as depicted in Figure 1, we train purely offline without ever interacting with a network during training, generalizing from offline simulated experience to both emulated and wild environments. Lastly, Merlin is designed specifically for RTC systems, prioritizing latency over reliability.

Figure 1: Learning from Offline Demonstrations.

## 3 Bandwidth Estimates vs. User Experience

To illustrate the importance of BWE, we conducted a preliminary study on the impact of bandwidth estimates on user QoE during RTC videoconferencing. We benchmark three bandwidth estimators: our expert UKF model, an overshooter, and an undershooter. We conduct approximately 100 live video-conferencing calls on real networks and report the video mean opinion score (MOS), a gold standard metric for video QoE that ranges from one to five with one corresponding to low QoE and five mapping to high QoE [41]. We report a subset of our findings from network environments with stable 1 Mbps links in Figure 2. In Figure 2(a), the bandwidth estimator overshoots the 1 Mbps limit, which causes the video MOS to severely degrade, oscillating between 0.0 and 2.0 (a MOS of 0 indicates an error, i.e., no video frames are received). The poor bandwidth estimates severely degrade objective QoE and cause the sender to flood the network with packets, resulting in increased queuing delays and packet loss. In contrast to overshooting, in Figure 2(c), the bandwidth estimator severely underestimates the 1 Mbps link. Although not as harsh as overshooting, the video MOS degrades due to a lack of high-quality video frames, resulting in a stable MOS of 2.0. Specifically, the video codec leverages the bandwidth estimates to choose an encoding rate for each frame. By underestimating the available bandwidth, the video frames are encoded at a reduced bitrate to meet bandwidth requirements, delivering lower-quality video streams to the receiver. However, since the network is not congested, packets are still able to flow freely and arrive at the receiver without being lost, leading to better user experience and a higher MOS than in the overshoot case. Lastly, we observe the best video MOS when the bandwidth estimator tracks the bandwidth closely, as demonstrated by our UKF expert in Figure 2(b). By providing accurate resource estimates, packets are able to flow freely without being dropped; additionally, video frames can be encoded at higher bitrates due to the increase in estimated network resources. The increased video encoding rate translates to higher-quality video streams being delivered to the receiver, which results in a stable MOS of 3.0. In comparison to undershooting and overshooting, tracking the available bandwidth closely leads to a significant boost in objective QoE. 
We report the results on simple stable cases to illustrate the impact of bandwidth estimates; however, BWE becomes more challenging in live environments due to the partial observability and non-stationary nature of real networks [52]. Thus, in this work, we seek to tackle complex environments. We conduct several video-conferencing experiments on live inter-continental links in section 5, and demonstrate that we can preserve user QoE through high-quality BWE via purely offline IL.

## 4 Merlin

**Design Goals.** Conventional bandwidth estimators provide smooth estimates over long horizons, making instantaneous network adaption difficult. In contrast, new RL-based estimators react promptly to network perturbations and are "easy" to update but often exhibit high sample complexity and require a well-defined reward function to guide the network control policy. Thus, we desire a method for real-time BWE that exhibits the benefits of deep RL but leverages the vast efforts of prior domain experts to bypass reward definitions; thus, we turn towards offline IL. Specifically, we seek to leverage BC to extract an expert BWE policy from offline expert demonstrations for real-time BWE. **Overview.** For our work, we seek to mimic our expert UKF model, a rule-based model constructed from extensive domain expertise. UKF, like WebRTC, adopts a delay-based approach; that is, based on observed network delays, UKF smoothly adapts its bandwidth estimates. More concretely, UKF utilizes an internal dynamics model to represent the current network state and a set of static functions to adapt its bandwidth estimate. In contrast to WebRTC, UKF was designed specifically for videoconferencing. The estimates produced by UKF do not follow an additive increase multiplicative decrease (AIMD) scheme, which leads to smoother bandwidth estimates in comparison to WebRTC's "sawtooth" behavior. UKF has previously been deployed on Microsoft Teams; hence, it is a battle-tested expert for real-time bandwidth estimation. Additionally, as extensive domain expertise is required to adjust UKF, it is the perfect candidate for our work. Thus, given a set of collected UKF demonstrations \(\Xi\), UKF states \(S\), and UKF actions \(\pi^{*}(s)\), we seek to learn the expert policy \(\pi^{*}\) in the following manner: \[\pi^{*}=\operatorname*{arg\,min}_{\pi}\sum_{\xi\in\Xi}\sum_{s\in S}L(\pi(s),\pi^{*}(s)) \tag{1}\] where \(\pi\) corresponds to the policy of our imitator. By reframing policy learning in the context of supervised learning, BC enables agents to learn a control policy while benefiting from the stability and convergence properties of supervised learning. Despite these benefits, BC suffers from the problem of compounding error [53]. Supervised learning relies on the i.i.d. assumption which may not hold during long-horizon tasks. Trajectories contain a sequence of states and actions that often depend temporally, breaking the i.i.d. assumption [45]. As a result, when a BC model arrives at an unseen state, the newly executed action may be misaligned with the true objective, diverging slightly from the expert trajectory. The dependence between states causes the error to compound as the BC agent moves along its trajectory, diverging more and more from the expert.

Figure 2: Quality of Bandwidth Estimates vs. User Experience.

Compounding error has been shown to limit the robustness of BC models as the learned agents are incapable of bridging to new unseen environments [60]. 
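To make the imitation objective of Eq. (1) concrete, the following is a minimal behavioral-cloning sketch in PyTorch. The two-layer policy head and the batch shapes are placeholders for illustration (the actual Merlin model is the LSTM described later), and the bandwidth-to-action mapping anticipates the log transform introduced below as Eq. (2):

```python
import torch
import torch.nn as nn

B_MIN, B_MAX = 0.01, 8.0  # Mbps; the 10 Kbps - 8 Mbps range used by Merlin

def bandwidth_to_action(b_mbps: torch.Tensor) -> torch.Tensor:
    """Log-scaled mapping of a bandwidth (Mbps) into the [0, 1] action space."""
    return (torch.log(b_mbps) - torch.log(torch.tensor(B_MIN))) / (
        torch.log(torch.tensor(B_MAX)) - torch.log(torch.tensor(B_MIN)))

# Placeholder policy network; hidden width chosen arbitrarily for the sketch.
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def bc_step(states: torch.Tensor, expert_bandwidth_mbps: torch.Tensor) -> float:
    """One behavioral-cloning step: regress the policy's action onto the
    expert (UKF) action, i.e. the inner term of Eq. (1) with an MSE loss L."""
    target = bandwidth_to_action(expert_bandwidth_mbps).unsqueeze(-1)
    loss = loss_fn(policy(states), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy batch: 256 state vectors of length 64 with random expert estimates
print(bc_step(torch.randn(256, 64), torch.rand(256) * 7.99 + 0.01))
```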
To tackle real-world applications, IL methods must also overcome the distribution shift between offline demonstrations and target environments. **Addressing Compounding Error.** To combat compounding error and improve generalization, we utilize a large number of expert demonstrations. We utilize OpenNetLab's gym environment to collect expert trajectories. Specifically, we collect 100k expert demonstrations from simulation, randomly varying the call parameters to improve data diversity. By sampling the expert from a diverse set of circumstances, Merlin is able to better observe the expert from a variety of states, which enhances state space coverage and mitigates compounding error. Furthermore, to mitigate the effects of compounding error that may arise due to the sheer size of the output range, we restrict bandwidth estimates to 10 Kbps and 8 Mbps, a range that supports audio-only calls to high-definition video-conferencing calls with screen sharing. We additionally limit our action space \(\hat{b}_{log}\) to a real number between 0 and 1 and employ a log transform to project Merlin's output action into bps, \[\hat{b}_{log}=\frac{\log(\hat{b})-\log(b_{min})}{\log(b_{max})-\log(b_{min})} \tag{2}\] where \(\hat{b}\) represents the estimated bandwidth, \(b_{min}\) corresponds to the minimum bandwidth, \(b_{max}\) is the maximum bandwidth, and all bandwidth values are in Mbps. Limiting the action space to a number between 0 and 1 reduces the complexity of our action space and helps our model learn a more robust policy. The impact of these design decisions is reflected in the results detailed in section 5. **Extracting BWE Signals in Partially Observable Environments.** To deal with partial observability, state-of-the-art heuristic-based methods such as WebRTC combine queuing theory with raw network metrics to model one-way delay gradients for BWE [7]. In contrast, current RL methods start from a blank slate and learn to extract complex BWE signals from raw network metrics. Both methods, while performant, neglect the benefits of each other; that is, heuristic-based methods are fundamentally limited by the capacity of domain experts and cold-start RL overlooks prior efforts. Accordingly, Merlin is designed to learn these domain-specific representations implicitly through expert supervision via IL. Similarly, Merlin relies on its data-driven architecture to extrapolate more complex signals; specifically, we incorporate a Long Short Term Memory (LSTM) unit to maintain the history of previous network behavior. The LSTM acts as a buffer for experience, and, over the duration of a videoconferencing call, Merlin builds up its internal state representation to learn temporally-dependent features that capture the non-stationary nature of the network environment. The robustness to partial observability is validated in section 5. In addition to incorporating expert supervision and learned feature representations, we conducted an exhaustive feature ablation study detailed in section 5 to arrive at the best performing state representation (detailed in Appendix A). Most notably, we experimented with including the five previous bandwidth estimates which were sampled at 60 ms granularity. However, as we utilize offline expert trajectories for training, these estimates actually correspond to our expert's previous predictions. It was observed that these features hindered Merlin's ability to extract the expert policy. 
We hypothesize that the previous estimates led to Merlin placing more weight on these previous estimates; effectively, "cheating" by reusing UKF estimates for bandwidth prediction. Thus, when generalizing to new environments, Merlin would perform poorly in comparison to our expert UKF model. Pruning these previous estimates greatly enhanced Merlin's robustness to domain shift. **Tackling Domain Shift.** As previously mentioned, prior heuristic methods were constructed by domain experts; hence, updating these models is non-trivial. The current process for updating these methods relies on a time-consuming process that entails collecting network statistics, manually hand-engineering representations, and redeploying these heuristics, repeating until an acceptable measure of success is achieved. As network heterogeneity increases, specializing these models for individuals and adapting model parameters to each new environment quickly becomes infeasible. On the other hand, RL agents can be readily fine-tuned with new observations; however, estimates tend to be noisier, reacting to instantaneous network perturbations. In contrast, Merlin utilizes IL for expert supervision to mitigate overly aggressive reactions to network perturbations, effectively regularizing bandwidth estimates. In combination with expert supervision, Merlin's state construction utilizes both short-term and long-horizon features to promote smooth bandwidth estimates. As a first step, IL enables learning a policy for smooth expert bandwidth estimates, while the data-driven design facilitates a personalized experience by fine-tuning on new observations. Merlin's resilience to domain shift is empirically validated in section 5 by generalizing from offline simulated observations to real, inter-continental videoconferencing calls, an environment where state observations deviate significantly from offline experience. Merlin's architecture and policy network are described next.

Figure 3: Merlin's Network Architecture.

**Architecture Details.** We choose to utilize an LSTM for our policy network to exploit the temporal correlations present in network traffic and mitigate compounding error. We benchmark against non-recurrent architectures in section 5. Our findings demonstrate that the recurrent structure of the LSTM enhances network control performance by exploiting the temporal dependencies between network control actions. Merlin accepts a 64-length tensor of normalized state observations. As depicted in Figure 3, Merlin first encodes state observations with its LSTM and then leverages two fully-connected layers to decode state observations into output actions. The first fully-connected layer is followed by a ReLU activation function while the final output layer utilizes a sigmoid activation function, limiting the model output to a real number between 0 and 1. It is important to note that we utilize a small, lightweight architecture for client-side deployment and real-time execution (\(\approx\) 60 ms granularity). **Training Procedure.** As illustrated in Figure 5, we train Merlin from offline UKF demonstrations. First, we deploy our expert UKF in a fork of OpenNetLab's gym environment [12]. The gym leverages the WebRTC stack and NS3 [44] to simulate videoconferencing calls to promote data-driven BWE solutions. Network behavior is simulated according to the provided trace workloads and network parameters. 
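Before turning to the data-collection details, a minimal PyTorch rendering of the policy network described under Architecture Details is sketched below; the LSTM hidden size and the width of the first fully-connected layer are not stated in the text, so the values used here are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class MerlinPolicy(nn.Module):
    """LSTM encoder followed by two fully-connected layers (ReLU, then a
    sigmoid output in [0, 1]), mirroring the architecture described above.
    hidden_size and fc_width are illustrative assumptions."""

    def __init__(self, state_dim: int = 64, hidden_size: int = 128, fc_width: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=state_dim, hidden_size=hidden_size, batch_first=True)
        self.fc1 = nn.Linear(hidden_size, fc_width)
        self.out = nn.Linear(fc_width, 1)

    def forward(self, states, hidden=None):
        # states: (batch, time, 64) sequence of normalized observations
        encoded, hidden = self.lstm(states, hidden)
        x = torch.relu(self.fc1(encoded))
        return torch.sigmoid(self.out(x)), hidden  # action in [0, 1] per step

# Optimization setup matching the training configuration given in the text:
# Adam with lr = 0.001 and an MSE loss against the expert actions.
model = MerlinPolicy()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

# toy batch: 256 calls, 50 timesteps each
actions, _ = model(torch.randn(256, 50, 64))
print(actions.shape)  # torch.Size([256, 50, 1])
```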
Our calls are generated from a diverse set of production traces, encompassing low bandwidth, high bandwidth, fluctuating bandwidth, burst loss, and LTE workloads. During generation, we randomly vary call parameters such as the proportion of video packets to audio packets, queuing delays, and the video start times. At each step in the simulation, transport layer information such as the send time, arrival time, and payload size is reported. The gym environment provides tools for calculating relevant transport metrics such as the delay, loss ratio, and receiving rate from the exposed packet level information. We record the observed packet level information and UKF bandwidth predictions from 100k unique calls. We apply the inverse of equation 2 to project UKF's bandwidth estimate into action space for training. It is important to note that gym packets do not contain real video and audio payloads; hence, there is a gap between real and simulated videoconferencing calls. The sim-to-real gap limits the breadth of expert demonstrations; however, the programmatic nature and customizability of call parameters enable us to collect expert demonstrations from a diverse set of circumstances that may not be realizable under real-world conditions. Although our expert is not observed in real-world environments, we emphasize that the diverse set of observations enables our model to bridge the gap from simulated networks to in-the-wild deployments. Before training, an offline dataset of state observations was constructed by grouping packets received in 60 ms windows and aggregating feature-level information accordingly. Features were reconstructed based on the state representation detailed in Appendix A. The sequence of states and actions from one call was treated as a single training sample. To train Merlin, a batch size of 256 was utilized, i.e., 256 calls were ingested per training step. The mean squared error (MSE) between Merlin's actions and UKF's expert actions was minimized over the course of 1000 training epochs. Different loss functions such as mean-absolute-error (MAE) were empirically tested, indicating that the MSE loss objective produced the best imitation policy. To update model parameters and dynamically tune the learning rate, Adam optimization was employed with an initial learning rate of 0.001. After each training epoch, utilizing randomly generated videoconferencing workloads and OpenNetLab's gym, the MSE validation performance between Merlin and UKF was reported (see Section 5 for more details on workload generation). **Most importantly, Merlin never interacted with a network nor was a single packet transmitted during train time. Furthermore, training data was collected once- prior to training.** **Implementation.** We implement Merlin's LSTM architecture in Pytorch. We export Merlin to ONNX format and utilize ONNX runtime for deployment. ONNX provides a basic model wrapper that enables compatibility across operating systems and applications. Merlin can be deployed directly into the Teams media stack via ONNX runtime for receiver-side estimation. By extracting an expert policy from offline experience, we succeed in building upon prior heuristic-based estimators for BWE. That is, we utilize the encoded domain knowledge to learn a generalizable, data-driven bandwidth estimator without ever transmitting a single network packet. ## 5 Evaluation In this section, we seek to answer the following questions: 1. 
Is offline, simulated experience alone sufficient to produce expert bandwidth estimates? 2. How robust is the learned BWE policy to domain shift? 3. Which features are most important to mimicking an expert estimator? 4. How do different IL architectures and techniques impact bandwidth estimates? 5. How does the quantity and quality of demonstrations affect BWE performance? Figure 4: System overview of Merlin. ### Methods **Environment Assumptions and Parameters.** For our evaluations, we sample state observations at 60 ms granularity, enabling real-time BWE. Bandwidth estimates are clipped to be between 10 Kbps and 8 Mbps, which supports a broad range of RTC applications, ranging from audio-only calls to high-definition, 30 frames per second videoconferencing with interactive screen sharing. Estimates below 10 Kbps are below industry recommendations for audio-only calls, while estimates above 8 Mbps offer no added benefits. Lastly, we restrict our evaluations to peer-to-peer, audio and video calls (see Figure 6) and leave group calls for future work. **Benchmarks.** We benchmark against two different state-of-the-art bandwidth estimators. (1) WebRTC [4], the de facto standard for RTC applications that utilizes GCC for BWE. GCC leverages a Kalman filter to tune bandwidth estimates based on the estimated end-to-end one-way delay variation [7], employing an AIMD scheme. WebRTC is the most widely utilized communication stack for RTC applications. (2) R3Net v1.4, a variant of the online RL model proposed by Fang et al. in [13]. R3Net v1.4 was previously benchmarked against HRCC [54] and Gemini [61], the winners of MMSys '21 Bandwidth Estimation Challenge, and shown to outperform both models, achieving state-of-the-art performance for RL-based BWE. In addition to baseline comparisons, we evaluate Merlin's imitation quality in relation to our expert UKF, a custom, hand-engineered statistical model previously deployed in production on Microsoft Teams. **Metrics.** For our simulated evaluations, we track the MSE between Merlin and UKF in action space as our sole key performance indicator. A small action MSE indicates that the actions produced by our BC model closely match those produced by UKF, while a larger difference indicates a deviation between the imitator and the expert. While calls are simulated, the simulated packets do not carry any meaningful payload, so we do not compute gold-standard metrics such as the video MOS in this environment. In contrast to our simulated environments, we track the following metrics in our testbed and wild environments. (1) Video MOS. The MOS values range from 1 to 5 with 1 indicating poor QoE and 5 indicating exceptional QoE. Video MOS is the gold standard for QoE and is computed based on user feedback. In our work, we utilize a vision-based model to produce a video MOS estimate [41]. The estimates were shown to exhibit 99% correlation with user visual experience. (2) Audio MOS. Similar to the video MOS, the audio MOS also ranges from 1 to 5 with 1 indicating low audio QoE and 5 indicating high QoE. Audio MOS is the gold standard for audio-based QoE and is computed based on user feedback. In our work, we utilize a signal-based model to produce an audio MOS estimate. The estimates were internally shown to correlate significantly with user audio experience. (3) Receiving rate. The receiving rate is reported in Kbps and correlates with user experience. 
While a higher receiving rate is preferred, delay and packet loss must be taken into account as a high receiving rate can correlate with network congestion. (4) Packet loss rate. The percentage of lost packets observed. A lower loss rate indicates that packets are freely flowing through the network while a high loss rate indicates network congestion and degradation in user QoE. (5) Delay. The delay metric is reported in ms, with a lower delay indicating higher QoE and a higher delay indicating network congestion. We choose to track the delay mean as opposed to the delay MOS, as a notable shift in delay is required to move the delay MOS score significantly, e.g., 3 ms increase in delay corresponds to an observed 0.001 increase in delay MOS (a higher delay MOS is worse). Furthermore, it is important to note that while multiple network metrics correlate to user experience, the relationship is complex. Thus, to understand the overall impact on user experience, we must analyze these metrics collectively rather than individually. **Simulated Evaluation.** We evaluate the performance of Merlin against UKF using traces generated from production parameters within a fork of OpenNetLab [12]. We randomly generate traces for evaluation from a diverse set of network workloads containing low bandwidth, high bandwidth, fluctuating bandwidth, burst loss, and LTE. Call parameters such as the proportion of video packets to audio packets, queue delay, and the video start time are randomly sampled at runtime. Our evaluation consists of 480 distinct simulated calls. We run 1000 validation runs, which corresponds to nearly \(480,000\) simulated calls. We report the best achieved performance. Figure 5: From Offline Demos to Agent Deployment, Training Merlin via Imitation Learning. It is important to note that in real videoconferencing calls, video packets are not received at the start of the call. This is because audio packets tend to flow first, which leads to a sharp change in bandwidth once video packets are received. Our randomly generated traces capture the variation in delay of video streams. This shift in bandwidth leads to more challenging traces for BWE. **Testbed Evaluation.** We benchmark the performance of Merlin against WebRTC, UKF, and R3Net v1.4 using production traces over emulated networks within our testbed environment. We utilize 10 different production traces which cover low bandwidth, high bandwidth, fluctuating bandwidth, and burst loss settings. Our testbed consists of two lab machines connected over a network link. The network link is emulated to match production settings. Since we are transmitting over real networks, other factors such as cross-traffic and queuing delays influence the results of our evaluation. To mitigate noise, we conduct hundreds of evaluation calls and utilize a Welch t-test to determine whether our results are statistically significant. Our evaluation consists of over 100 emulated calls per model (\(\approx 400\) calls in total at 10 per trace). We report the averaged metrics across each network profile. In relation to UKF, we seek to accept the null hypothesis, that is, there is no difference between the performance of the imitator and the expert within our emulated environment. In contrast, we seek to outperform existing methods such as WebRTC and R3Net v1.4. **In the Wild Evaluation.** We measure the performance of Merlin against UKF and WebRTC over real networks in the wild. Our setup consists of 20 nodes distributed across North America, Asia, and Europe. 
For each evaluation call, we randomly sample 10 pairs of nodes. We then conduct calls with UKF, WebRTC, and Merlin. We conducted our experiments during the day and at night over the course of a week. Similar to our emulated evaluation, we conducted hundreds of evaluation runs and utilized a Welch t-test to determine whether our results were statistically significant. Our evaluation consists of over 700 in-the-wild calls per model (\(\approx 2100\) calls in total). We report the averaged metrics across all runs. In relation to UKF, we seek to accept the null hypothesis, that is, there is no difference between the performance of the imitator and the expert within real deployments. In contrast, we seek to improve upon WebRTC. **Ablation Studies.** We experiment with different learning parameters and evaluate our imitator against UKF using randomly generated workloads within our simulated environment. The setup is identical to our simulated evaluations. We report the best achieved performance across each parameter setting. We experiment with different architectures, IL methods, input features, demonstration numbers, and types of demonstrations. For an in-depth review of the IL methods tested, we direct readers to Appendix E. **Key Findings.** Through our evaluations, we demonstrate the following findings: 1. Offline experience is sufficient to produce expert bandwidth estimates. Merlin outperforms the state-of-the-art bandwidth estimators in terms of video MOS, audio MOS, and the average receiving rate. We show that the changes in video MOS and receiving rate are statistically significant. Merlin also shows no statistical movement in comparison to UKF on real networks. 2. Our learned BC policy is robust to domain shift. We train on offline experience from simulated UKF calls and generalize to both emulated and wild networks. We further show that these results are statistically significant by showing no movement in terms of video MOS and audio MOS. 3. The most important features for mimicking an estimator and producing expert bandwidth estimates are the receiving rate and the media type features. Counter-intuitively, we find that the average packet loss rate and loss ratio have little effect on the learned predictor. The best subset of features contains all five input feature categories. 4. Rather than using demonstrations drawn from our target environment, we find that the richness and diversity of demonstrations contribute more to the performance of the imitator; hence, using a diverse set of simulated data is sufficient for policy extraction. ### Simulated Audio and Video Calls We first assess the ability of Merlin to imitate our expert UKF bandwidth estimator. Merlin achieves an MSE difference of 0.0012 in action space in comparison to UKF over 480 randomly generated traces (see LSTM-BC in Figure 10(b)). While opaque, these results indicate that Merlin can closely mimic our expert UKF estimator, which is evident in our qualitative assessment in Figure 7. Most notably, in Figure 7(a), we see that Merlin inherits the same quirks as our expert. UKF takes a more conservative approach to BWE to produce smoother estimates over the duration of the videoconferencing call; as a result, UKF avoids abrupt changes that are prominent in the fluctuating case. Furthermore, our expert produces estimates based on the observed packets, and since video packets tend to not flow immediately, our expert severely undershoots the true bandwidth at the beginning of the high bandwidth call in Figure 7(b). 
Since audio packets only require 10 Kbps of bandwidth, both the expert and imitator severely undershoot the true bandwidth at the beginning of the call; however, as soon as bandwidth-hungry video packets begin to flow across the network, both the imitator and UKF exhibit the same behavior of smoothly ramping up to the bandwidth limit. Both quantitatively and qualitatively, we demonstrate that our imitator is capable of mimicking our expert from purely offline experience within simulation. Our simulation mainly serves as a validation check that our imitator works as expected. We discuss more rigorous evaluations in the coming evaluation sections. ### Testbed Videoconferencing We compare the performance of our learned imitator against three benchmark models: WebRTC, UKF, and R3Net v1.4 over controlled networks with emulated links. We note that cross-traffic and other noise contributors are present within the testbed environment. We test each estimator across a diverse set of production network traces and aggregate statistics across \(\approx 100\) calls for each method. Our results are summarized in Table 1. In terms of video MOS scores, Merlin outperforms both WebRTC and R3Net v1.4, two state-of-the-art methods. Merlin achieves a 3% improvement over WebRTC and a 0.3% improvement against R3Net v1.4 in terms of video MOS. The movement in video MOS is statistically significant. As for audio MOS, Merlin beats WebRTC and R3Net v1.4 by 0.4% and 1.7% respectively. While Merlin attains a modest improvement over state-of-the-art methods, we emphasize that Merlin is trained completely from offline experience while R3Net v1.4 required millions of online network interactions to converge and WebRTC involved extensive domain knowledge and hand-engineered rules to attain comparable performance. It is important to note that WebRTC, while the standard for RTC, is designed to be a general purpose RTC stack as opposed to specializing in videoconferencing. Due to its general purpose nature, UKF's specialized estimates outperform WebRTC; hence, as the imitator, Merlin in turn outperforms WebRTC. In contrast, the performance gap between Merlin and R3Net v1.4 is likely due to limitations in generalizability. For example, while heuristic-based models like UKF were designed with domain expertise to ensure domain adaption, R3Net v1.4 was trained stochastically to maximize its reward, which may not fully map to user subjective experience across network settings. The evaluations further indicate that Merlin regresses in terms of the packet loss rate and delay mean against R3Net v1.4 and WebRTC respectively; however, Merlin attains a higher receiving rate which translates to more packets flowing across the network, triggering an increase in both the loss rate and delay mean. First, when the estimator produces higher estimates, a greater receiving rate can be attained. These estimates are fed to the codec and the codec itself makes executive decisions such as assigning more parity packets to the link to mitigate packet loss through forward error correction (FEC). Thus, with a higher receiving rate, we can expect the loss rate and congestion to increase which degrades the delay metric. However, we emphasize that the 3 ms delay regression against WebRTC is negligible as it translates to a delay MOS increase of \(\approx 0.001\). The movement in receiving rate and delay are statistically significant. 
Despite these regressions, Merlin enhances both video MOS and audio MOS, which are two established objective QoE metrics, when compared with WebRTC and R3Net v1.4. Lastly, we compare against UKF on our testbed environment to study how well Merlin imitates our expert. Quantitatively, in terms of video MOS, audio MOS, and the receiving rate, we observe no statistically significant movement across our production traces (see Table 1), which indicates that these metrics are statistically consistent with being drawn from the same distribution. However, Merlin does incur a degradation in terms of both the loss rate and delay in comparison to UKF. While close, the small deviation in loss rate and delay may indicate that Merlin fails to entirely mimic our expert. One possible explanation for this regression is the domain shift between our simulated environment and the testbed environment. Since Merlin is trained entirely from offline simulated data, the delay metrics observed at train time may not fully match our target environment. This is because networks fall under the umbrella of partially observable environments, as links outside the subnet cannot be probed directly. As a result, many of these factors cannot be accurately simulated within our simulated environment, leading to a divergence in delay metrics. One possible resolution would be to incorporate real-world data into our offline dataset, enabling Merlin to observe the shift in delay computation. Despite this regression, in terms of our gold standard MOS metrics, Merlin outperforms the state-of-the-art and shows no statistical movement in comparison to the expert, demonstrating that Merlin is capable of generalizing to new environments from simulated, offline expert observations. Qualitative results are presented in Appendix C.

### Videoconferencing in the Wild

Figure 6: Deploying estimators on videoconferencing clients.

We compare Merlin against WebRTC and UKF over real network links. We deploy and evaluate each estimator over real networks with links spanning multiple continents. We aggregate statistics from \(\approx 2100\) calls. Our results are summarized in Table 2 and Figure 9. The results demonstrate that Merlin is capable of generalizing from simulated to real-world environments, performing competitively with our expert UKF and WebRTC. We observe competitive audio MOS performance against both UKF and WebRTC, differing by 0.2% and 0.1% respectively. In terms of video MOS scores, Merlin regresses by 2.9% against UKF; however, Merlin outperforms WebRTC by 3.75%. Similarly, Merlin leads to a 0.4% boost against WebRTC and a 13% reduction in comparison to UKF in terms of the observed receiving rate. We note that Merlin achieves a higher median video MOS and receiving rate than UKF; however, UKF's video MOS and receiving rate metrics exhibit higher variance compared to Merlin (see Appendix D). Although Merlin regresses, we emphasize the two following aspects. One, Merlin was trained purely from offline, simulated experience with zero network interactions and is able to outperform WebRTC and compete with UKF. Two, while we have not solved the problem of domain shift completely, Merlin's data-driven design enables the potential to reduce the performance gap by incorporating more data and finetuning our objective function. In contrast, the evaluations further indicate that Merlin improves in terms of the packet loss rate and delay against both UKF and WebRTC.
The difference in observed metrics translates to a 19% and 42.85% loss rate improvement against UKF and WebRTC respectively. In terms of observed delay, Merlin achieves a 4.9% and 12.8% gain over UKF and WebRTC respectively. Similar to the previous evaluation, we emphasize that UKF achieves a higher receiving rate, leading to more packets entering the network and potentially increasing the packet loss and delay; however, in comparison to WebRTC, Merlin achieves a higher receiving rate while reducing both packet loss and delay which translates to improved QoE. We hypothesize that this performance regression is a direct result of WebRTC's general purpose nature, resulting in regressions against more specialized methods (e.g., Merlin) for RTC applications such as videoconferencing. Lastly, we show how well Merlin's extracted policy generalizes to a real-world environment. Specifically, we observe no statistically significant movement of Merlin against UKF within a real environment, indicating that Merlin is retaining the expert policy obtained from simulated experience even within environments that differ significantly from our simulation. Overall, Merlin produces competitive bandwidth estimates with our expert UKF model, while outperforming WebRTC across various settings. Our evaluations demonstrate that Merlin is capable of generalizing to new environments from simulated, offline expert observations. ### Guidance for Bandwidth Estimation **Impact of Features.** We seek to study the impact of different features on learned BWE; specifically, handcrafted features with domain knowledge may impede our learned estimators. We group features into five categories and ablate on these groups: receiving rate, loss ratio, average number of lost packets, queuing delay, and media type features. The media type features report the probability mass of video packets, audio packets, and screen sharing packets over the last three time steps. We then exhaustively retrain Merlin on each feature subset and report the performance on generated production traces within our simulated environment. We report our findings in Figure 9(a). Our experiments indicate that the two most impactful features are the receiving rate and the media type features. Surprisingly, we find that the receiving rate feature alone is sufficient to mimic UKF with an action space MSE of \(\approx 0.0028\). The impact of the receiving rate is expected, but the magnitude is unexpected. The receiving rate corresponds to the rate of information that the receiving endpoint is capable of receiving; hence, we would expect the receiving rate to correlate heavily with the throughput achieved at the bottleneck link. Similarly, the media type feature alone enables Merlin to achieve approximate action space errors of 0.0037 in relation to UKF; in contrast, while the queuing delay feature bolsters our learned estimator, the queuing delay, loss ratio, and average loss features individually are insufficient to mimic our expert (see Appendix B). The ablation results further indicate that without either the receiving rate or media type features, Merlin is unable to learn a competitive BWE policy. One explanation is that both receive rate and media type features encode the observed audio and video receive rates. The receive rate feature reflects the combination of both audio and video receive rates, while the media type features provide estimates of the relative proportion of audio and video packets received over a given window. 
Depending on the workload, the proportion skew heavily influences bandwidth estimates. Since audio requires only 10 Kbps of resources, the flow of audio packets remains relatively constant; hence, when the video packet to audio packet proportion is high, link resources are correspondingly abundant to support the flow of high-quality video, which is further corroborated by the impact of the video start times detailed in section 5.2. In combination, both the audio receive rate and video receive rate can be derived from these two features, which together help improve the quality of bandwidth estimates.

Figure 7: Imitating UKF in Simulation.

While loss metrics provide the agent with auxiliary information, by themselves they are noisy. We hypothesize that this discrepancy in performance stems directly from partial observability. For example, while packet loss may result from network congestion and exhausted link resources, loss may additionally arise from physical factors, e.g., fading in wireless networks and link-level disconnections, both of which are indistinguishable from the agent's perspective and impact loss rates regardless of the bottleneck's available resources. Lastly, we find that the best performing subset of features contains all five feature groups (see Appendix B for more details). It appears that the receiving rate, packet type, and delay-based features serve as core features and are crucial towards real-time BWE, while loss-based features appear to bolster performance but operate as auxiliary information.

**Learning Methods and Architecture.** We evaluate the performance of different IL methods and architectures for BWE within our gym environment. We compare three different IL approaches: BC, Implicit Behavioral Cloning (IBC) [15], and Generative Adversarial Imitation Learning (GAIL) [20], and two policy architectures for BC: a multi-layer perceptron (MLP) and an LSTM. We report our findings in Figure 10(b). We implement IBC and GAIL with MLP-based policy networks. We adopt similar hyperparameters for each model and maintain the same gym validation parameters across trials. We train GAIL with 16000 expert state-action samples per training epoch. Our evaluations indicate that BC with an LSTM policy network outperforms all other benchmarked methods, achieving an action MSE error of 0.0016 (see Figure 10(b)). The next best performing approach is BC with an MLP-based policy network, followed by IBC and then GAIL. We hypothesize that IBC and GAIL are more effective than BC in environments with limited expert demonstrations due to their joint representation of states and actions; however, in our case, we have an abundance of rich expert demonstrations and are effectively able to better cover our state space, mitigating the likelihood that our imitator arrives at a previously unseen state, which in turn reduces the effect of compounding error. Lastly, we ablate on the size of Merlin's LSTM; however, we observe little performance difference between model sizes (see Appendix B). Given that Merlin produces a single output from 64 inputs, the problem is relatively small; hence, we would not expect model size to severely influence performance. In essence, when expert demonstrations are readily available, BC provides favorable results. Furthermore, exploiting temporal correlations present in network control tasks with recurrent networks appears to enhance policy performance.
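As a minimal illustration of the best-performing configuration above (behavioral cloning with an LSTM policy), the following PyTorch sketch shows one supervised training step on expert state-action sequences. The 64-dimensional observation and single bandwidth output follow the text; the hidden size, sequence length, batch size, and learning rate are assumptions rather than the authors' settings.

```python
# Minimal behavioral-cloning sketch with an LSTM policy (illustrative, not the
# authors' implementation).
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    def __init__(self, obs_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one (normalized) bandwidth estimate per step

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(obs_seq)        # (batch, time, hidden)
        return self.head(out).squeeze(-1)  # (batch, time)

policy = LSTMPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One supervised step on a mock batch of expert (state, action) sequences;
# in practice the targets are the UKF expert's estimates from the offline dataset.
obs = torch.randn(32, 100, 64)
expert_actions = torch.randn(32, 100)
loss = loss_fn(policy(obs), expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("BC action-space MSE on this batch:", loss.item())
```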
| Model | Video MOS | Audio MOS | Receiving Rate | Loss Rate | Delay Mean |
|---|---|---|---|---|---|
| Merlin | **2.9150 \(\pm\) 0.1538** | **2.8598 \(\pm\) 0.0821** | **849.9764 \(\pm\) 104.4793** | 0.0353 \(\pm\) 0.0144 | 30.8500 \(\pm\) 4.3147 |
| UKF | 2.9068 \(\pm\) 0.2004 | 2.8401 \(\pm\) 0.0795 | 829.7375 \(\pm\) 127.6 | **0.0328 \(\pm\) 0.0105** | 28.0160 \(\pm\) 3.5423 |
| WebRTC | 2.8519 \(\pm\) 0.1217 | 2.8452 \(\pm\) 0.0667 | 775.3979 \(\pm\) 60.2634 | 0.0375 \(\pm\) 0.0053 | **26.1996 \(\pm\) 2.5462** |
| R3Net v1.4 | 2.9050 \(\pm\) 0.2438 | 2.8094 \(\pm\) 0.0897 | 847.9143 \(\pm\) 148.3244 | **0.0328 \(\pm\) 0.0163** | 42.2350 \(\pm\) 7.4667 |

Table 1: Benchmarking on Emulated Links. Statistical significance with \(p<0.05\) between Merlin, WebRTC, and R3Net v1.4.

| Model | Video MOS | Audio MOS | Receiving Rate | Loss Rate | Delay Mean |
|---|---|---|---|---|---|
| Merlin | 4.069 \(\pm\) 0.6204 | 4.811 \(\pm\) 0.1344 | 1910.521 \(\pm\) 705.7378 | **0.0021 \(\pm\) 0.0136** | **6.856 \(\pm\) 26.7283** |
| UKF | **4.190 \(\pm\) 0.3236** | **4.824 \(\pm\) 0.1218** | **2159.809 \(\pm\) 311.1544** | 0.0025 \(\pm\) 0.0177 | 7.191 \(\pm\) 15.5186 |
| WebRTC | 3.919 \(\pm\) 0.4873 | 4.817 \(\pm\) 0.1674 | 1901.152 \(\pm\) 435.089 | 0.0030 \(\pm\) 0.0183 | 7.730 \(\pm\) 28.2775 |

Table 2: Benchmarking on Wild Networks.

Figure 8: Video MOS vs. Audio MOS performance across emulated network environments.

Figure 9: Video MOS vs. Audio MOS on Wild Networks.

**Quantity and Quality of Demonstrations.** We explore the impact of the quantity and quality of demonstrations on our learned estimator. Specifically, we compare with expert demonstrations drawn from two environments: our simulation and our testbed environment. We retrain Merlin on four different datasets: 1.5k emulated trajectories, 10k emulated trajectories, 10k simulated trajectories, and 100k simulated trajectories. The emulated trajectories are intentionally limited; that is, we do not randomize call parameters and leverage parameters drawn directly from production traces. By limiting the breadth of demonstrations from the target distribution, the relation between demonstration diversity and data quality can be explored. Emulated data is of higher quality as it reduces the domain shift between offline and online samples. Furthermore, we evaluate the implications of DAGGER [45] on our 10k gym dataset, doubling the dataset size to 20k demonstrations by the end of the training run. We report our findings in Figure 9(c). First, comparing the performance of Merlin on 10k emulated trajectories and 10k simulated trajectories, we observe greater gym performance with our simulated trajectories than with our emulated demonstrations. Merlin achieves an action MSE error of 0.0016 with the simulated dataset in comparison to 0.0055 action MSE error with the emulated demonstrations. Furthermore, qualitatively, we observe that utilizing 10k simulated demonstrations is superior to training on 10k emulated demonstrations even when evaluating on our emulated platform (see Appendix C).
We hypothesize that this robustness to domain shift is due to the reduction of compounding error; specifically, since our simulated dataset contains a richer set of trajectories, i.e., trajectories unlikely to be encountered in real deployments, Merlin is able to observe our expert across a more diverse set of circumstances, which bolsters performance. Second, comparing the performance of 10k gym samples, 20k gym samples collected with DAGGER, and 100k gym samples, we observe little added benefit within our gym environment. Our results indicate a modest improvement of 0.0003 in terms of action MSE with 80k extra demonstrations compared to DAGGER-enhanced training; however, when benchmarking each method on our testbed, we observe that the model trained with 100k demonstrations performs the best overall. While we observe that 10k demonstrations are sufficient to learn a BWE policy, increasing the number of expert observations appears to improve generalization. Furthermore, data diversity appears to impact imitation performance more than demonstration quality. Thus, to learn network control policies via offline expert demonstrations, we find that providing the agent with a large number of diverse demonstrations is key to ensuring robustness against domain shift, which is corroborated in [60].

Figure 10: Ablation Study Results.

## 6 Conclusion

In our work, we tackle key challenges in adopting AI-based system optimization and control such as domain shift. Through our evaluations, we demonstrate Merlin as a data-driven solution that builds upon prior network control heuristics. We provide preliminary results demonstrating the promise of offline learning for learning real-time network control policies. Although Merlin learns a robust BWE policy and outperforms state-of-the-art rule-based and learning-based methods, Merlin is not the end for data-driven BWE. For example, we evaluate only audio and video calls in peer-to-peer setups; however, many videoconferencing calls consist of multiple endpoints communicating over group video calls, which involve a single centralized server. As flows are concentrated at a single node, the complexity of BWE increases as competing receiver flows may impede one another. Furthermore, warm-starting RL agents with policies extracted from BC has been shown to produce strong results [36], which may translate to improved network control policies. We hope that Merlin's offline-oriented design fosters new strategies for real-time network control.

## Acknowledgements

We would like to thank Lili Qiu for her highly valuable feedback. We thank Scott Inglis and Ezra Ameri for their help.
2310.20128
Deconfined quantum critical point lost in pressurized SrCu2(BO3)2
In the field of correlated electron materials, the relation between the resonating spin singlet and antiferromagnetic states has long been an attractive topic for understanding of the interesting macroscopic quantum phenomena, such as the ones emerging from magnetic frustrated materials, antiferromagnets and high-temperature superconductors. SrCu2(BO3)2 is a well-known quantum magnet, and it is theoretically expected to be the candidate of correlated electron material for clarifying the existence of a pressure-induced deconfined quantum critical point (DQCP), featured by a continuous quantum phase transition, between the plaquette-singlet (PS) valence bond solid phase and the antiferromagnetic (AF) phase. However, the real nature of the transition is yet to be identified experimentally due to the technical challenge. Here we show the experimental results for the first time, through the state-of-the-art high-pressure heat capacity measurement, that the PS-AF phase transition of the pressurized SrCu2(BO3)2 at zero field is clearly a first-order one. Our result clarifies the more than two-decade long debates about this key issue, and resonates nicely with the recent quantum entanglement understanding that the theoretically predicted DQCPs in representative lattice models are actually a first-order transition. Intriguingly, we also find that the transition temperatures of the PS and AF phase meet at the same pressure-temperature point, which signifies a bi-critical point as those observed in Fe-based superconductor and heavy-fermion compound, and constitutes the first experimental discovery of the pressure-induced bi-critical point in frustrated magnets. Our results provide fresh information for understanding the evolution among different spin states of correlated electron materials under pressure.
Jing Guo, Pengyu Wang, Cheng Huang, Bin-Bin Chen, Wenshan Hong, Shu Cai, Jinyu Zhao, Jinyu Han, Xintian Chen, Yazhou Zhou, Shiliang Li, Qi Wu, Zi Yang Meng, Liling Sun
2023-10-31T02:31:31Z
http://arxiv.org/abs/2310.20128v1
# Deconfined quantum critical point lost in pressurized SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\)

###### Abstract

In the field of correlated electron materials, the relation between the resonating spin singlet and antiferromagnetic states has long been an attractive topic for understanding interesting macroscopic quantum phenomena, such as the ones emerging from magnetically frustrated materials, antiferromagnets and high-\(T_{\rm c}\) superconductors [1]. SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) is a well-known quantum magnet, and it is theoretically expected to be a candidate correlated electron material for clarifying the existence of a pressure-induced deconfined quantum critical point (DQCP) [2; 3], featured by a continuous quantum phase transition, between the plaquette-singlet (PS) valence bond solid phase and the antiferromagnetic (AF) phase. However, the real nature of the transition is yet to be identified experimentally due to the technical challenge. Here we show for the first time, through state-of-the-art high-pressure heat capacity measurements, that the PS-AF phase transition of pressurized SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) at zero field is clearly a first-order one. Our result clarifies the more-than-two-decade-long debates about this key issue, and resonates nicely with the recent quantum entanglement understanding that the theoretically predicted DQCPs in representative lattice models are actually first-order transitions [4; 5; 6; 7; 8]. Intriguingly, we also find that the transition temperatures of the PS and AF phases meet at the same pressure-temperature point, which signifies a bi-critical point like those observed in an Fe-based superconductor and a heavy-fermion compound, and constitutes the first experimental discovery of a pressure-induced bi-critical point in frustrated magnets. Our results provide fresh information for understanding the evolution among different spin states of correlated electron materials under pressure.

## I Introduction

The deconfined quantum critical point (DQCP) is a concept to describe a continuous phase transition between two spontaneous symmetry-breaking phases in a correlated electron material at zero temperature [9; 10]. It is characterized by the absence of confinement - the elementary excitations carry fractionalized quantum numbers and interact via an emergent gauge field - in contrast to what is predicted for conventional phase transitions [11; 12; 13]. The concept of the DQCP has received widespread attention because it gives rise to exotic states of matter which, along with the Berezinskii-Kosterlitz-Thouless (BKT) transition [14; 15], anyon condensation [16; 17] and the states found in topological insulators and high-temperature superconductors, challenge the conventional understanding of matter within the Landau-Ginzburg-Wilson (LGW) paradigm, where symmetries and their spontaneous breaking are the dominant factors [11; 12]. The concepts of fractionalization and emergent gauge fields in the DQCP also have potential applications in quantum computing and quantum information. In the past two decades, investigations of the DQCP have been an active subject. Enormous efforts have been made to explore various theoretical models and experimental systems to realize the DQCP and its associated consequences [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30].
However, a key question about whether the proposed DQCP models really host a continuous phase transition corresponding to a true conformal field theory (CFT) remains controversial and is still under intensive debate. Previous results suggest that the DQCP is realized in some 2D quantum spin or interacting Dirac fermion lattice models [18; 30] and the transition between the two different spontaneous symmetry-breaking phases indeed appears to be continuous, but the finite-size scaling is not consistent with the regular scaling ansatz [21; 23; 31], and the scaling dimensions extracted from the numerical simulations are incompatible with later evaluations based on conformal bootstrap [32]. The controversy has been revealed even more clearly very recently by the finding that the DQCP "fails" a series of general standards, from the quantum entanglement perspective, that all CFTs are expected to meet [4; 5; 6; 7; 33; 34], making the nature of the DQCP more enigmatic at present after two decades of debate. At the experimental frontier, material realizations and detections of the DQCP are equally active. In quantum magnets, it was found that the layered material SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) could be a better candidate for realizing the DQCP due to its unique crystal structure [2; 35; 36; 37; 38; 39; 40; 41; 42]. As shown in Fig. 1 (a) and (b), Cu\({}^{2+}\) ions (in the 3d\({}^{9}\) configuration) in monolayer SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) form a dimerized singlet (DS) state at ambient pressure, which couples the Cu\({}^{2+}\) with a strong intradimer antiferromagnetic coupling \(J^{{}^{\prime}}\) and a weaker interdimer coupling \(J\). The 2D network of the Cu\({}^{2+}\) ions resembles the famous Shastry-Sutherland (SS) lattice [43], as shown in Fig. 1 (b). Application of pressure (\(P\)) can change the lattice constants of the material and consequently alter the ratio of \(J\) over \(J^{{}^{\prime}}\). Previous works established the empirical relations \(J^{{}^{\prime}}(P)=(75-8.3P)\) K and \(J(P)=(46.7-3.7P)\) K, with \(P\) in GPa, based on the data of high-pressure heat capacity measurements [2; 3]. By applying hydrostatic pressure above 1.8 GPa, SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) enters a plaquette-singlet VBS phase (Fig. 1 (c)). Upon further increasing the pressure to above 3 GPa, the PS phase transforms into an AF phase [3] (Fig. 1 (d)). However, due to the technical limitations of practical high-pressure measurements, the truly important information about the nature of the PS to AF transition remains unknown. Recent experimental works show that application of a magnetic field perpendicular to the ab-plane can trigger transitions of the SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) material [35; 36], and the material exhibits a proximate critical yet first-order transition from the PS phase towards the AF phase [36] as the magnetic field is increased. Moreover, it was further hypothesized that this weakly first-order transition could eventually connect to a true DQCP or even a quantum spin liquid (QSL) phase close to zero field along the pressure axis [36]. To clarify experimentally this key puzzling issue of whether the PS-AF transition is a beyond-LGW DQCP or an LGW-allowed first-order transition (see Fig.
1 (e)) and to shed light on the overall ground-state phase diagram of the spin-1/2 SS model [44; 45; 46; 47; 48; 49], and further to clarify whether the material SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) could host an intermediate quantum spin liquid phase between the AF and PS phases [50; 51; 52], in this work we perform high-pressure heat capacity measurements using a state-of-the-art technique, which allows us to measure the sample in a hydrostatic pressure environment for pressures continuously tuned to above 3 GPa and temperatures down to 0.4 K (the details can be found in the Supplementary Information, SI).

## II Temperature dependence of heat capacity at different pressures

As shown in Fig. 2, the plots of the heat capacity divided by temperature (\(C/T\)) versus temperature display a hump below 1.8 GPa as the temperature is reduced (Fig. 2 (a) and (b)), the maximum of which is associated with the formation temperature (\(T_{\text{DS}}\)) of the dimer singlet phase [37; 38; 39; 41; 42]. Upon increasing the pressure to 2.1 GPa, two peaks appear at low temperature. The high-temperature one is related to the onset transition temperature from the paramagnetic (PM) phase to the PS liquid (PSL) phase (\(T_{\text{PSL}}\)), and the low-temperature peak is associated with the transition temperature of the PSL-PS transition (\(T_{\text{PS}}\)). These two phases are present in the range of 2.1-2.6 GPa (Fig. 2 (c)-(e)). At a pressure of about 2.7 GPa, an AF phase appears at a temperature slightly lower than \(T_{\text{PS}}\) (Fig. 2 (f)). It appears that the three phases, AF, PS and PSL, are compatible (Fig. 2 (f)). Such a feature can be observed until 2.75 GPa (Fig. 2 (g)). At 2.9 GPa and above, the PS phase no longer exists (the reason will be discussed below) but the AF phase prevails (Fig. 2 (h)). A hump feature is also observed at temperatures higher than \(T_{\text{PSL}}\); we define the temperature at the maximum of the hump as the onset temperature of the AF liquid state (\(T_{\text{AFL}}\)), below which the spins start to establish the effective AF interactions but are not yet ordered [3]. The true AF long-range order is established below the \(T_{\rm AF}\) peak, as shown in Fig. 2 (h) for 2.9 GPa. We repeated the experiments with a new sample cut from a different batch and observed reproducible results (see SI).

## III Establishment of the complete pressure-temperature phase diagram

We summarize our experimental results in the phase diagram (Fig. 3). All the data shown in the main panel are from the heat-capacity measurements, in which the solid and half-filled markers are the data obtained from two samples measured separately in this study, while the open markers are the results from our previous study [3]. All data measured on the different samples are well consistent with each other.

Figure 1: **SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) structure, Shastry-Sutherland lattice and the related ground-state phase diagrams.** (a) Single layer of SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\), where the intradimer Cu\({}^{2+}\)-Cu\({}^{2+}\) interaction is labeled by \(J^{{}^{\prime}}\) while the interdimer interaction is labeled by \(J\). (b) Dimerized singlet (DS) state formed by the Cu\({}^{2+}\) ions in SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\), in which \(J^{{}^{\prime}}\) is stronger than \(J\) (in the Shastry-Sutherland (SS) lattice), at ambient pressure. (c), (d) The emerging PS and AF states as \(J/J^{{}^{\prime}}\) increases.
Note that in the PS phase only half of the plaquettes, either the yellow&beige or the teal&cyan filled polygons, form singlets, which breaks the symmetry of the SS lattice. In the AF phase, the on-site spin rotational symmetry is broken. (e) Phase diagram of the SS model from ED and DMRG calculations [44; 38], where the transitions are all first-order.

There are three regimes in the phase diagram: below 1.8 GPa, the ground state of the SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) sample holds the DS phase (the left regime); at a pressure of about 1.8 GPa, the system enters the PS phase (the middle regime); upon further compression to about 2.7 GPa, the AF phase sets in and stabilizes up to 4 GPa. To understand the details of the PS-AF phase transition, we zoom in on the phase diagram near the boundary of these two phases (see the inset of the main panel). A coexistence of the PS and AF phases is observed in a very narrow pressure range. To find the precise pressure point for the PS-AF phase transition, we extrapolate the plots of temperature versus pressure for the two phases near the boundary (see the red and green dashed lines), which gives rise to an intersection located at 2.78 GPa. This intersection, marked by a red star, is defined as the critical pressure for the PS-AF phase transition. It is apparent that at pressures above 2.78 GPa the PS phase completely transforms into the AF phase. Our results reveal that the transition from the PS to the AF phase is a first-order transition, which gives the decisive answer that the lattice-translation-symmetry-breaking PS phase evolves into the spin-rotational-symmetry-breaking AF phase. Such a scenario is supported by the plots of heat capacity versus temperature measured at 2.7 GPa and 2.75 GPa (Fig. 2 (f) and (g)), where the \(C/T\)(\(T\)) peak at \(T_{\rm AF}\) becomes higher than that at \(T_{\rm PS}\) as the pressure approaches the AF regime. These changes are in concordance with the well-known behavior of a first-order transition. It is worth noting that the observed narrow overlap of the PS and AF phases should come from the inhomogeneity of the pressure, because SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) is a pressure-sensitive material [53; 54; 55; 56]. A slight difference in the pressure environment (the pressure at the center and the edges of the sample) may influence the detected critical points of the two phase transitions.

Figure 2: **The results of the temperature dependence of the high-pressure heat capacity.** (a) and (b) are the plots of \(C/T\) versus \(T\) measured at 0.9 and 1.4 GPa, which show the typical characteristic of the dimer singlet (DS) phase. The maximum of the hump, \(T_{DS}\), signifies the formation temperature of the DS phase. (c)-(e) display the two pressure-induced transitions upon cooling measured at 2.1, 2.5 and 2.6 GPa. The high-temperature peak is related to the plaquette singlet liquid (PSL), while the low-temperature peak is associated with the plaquette singlet (PS) phase. (f)-(g) are the results obtained at 2.7 and 2.75 GPa, illustrating the emergence of the antiferromagnetic (AF) phase, close to the first-order transition between the PS and AF phases. The insets show the transitions at low temperatures for a better view. (h) presents the results of \(C/T\) versus \(T\) measured at 2.9 GPa, which exhibit an AF ground state below \(T_{\rm AF}\) and a crossover, at the temperature \(T_{\rm AFL}\), from an antiferromagnetic liquid phase to the paramagnetic phase at high temperature.
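For orientation, the empirical couplings quoted in the introduction can be evaluated at the transition pressures discussed above; the short sketch below assumes, purely for illustration, that the linear relations \(J^{{}^{\prime}}(P)\) and \(J(P)\) remain valid up to about 2.8 GPa.

```python
# Evaluate the empirical couplings from the introduction, J'(P) = (75 - 8.3 P) K and
# J(P) = (46.7 - 3.7 P) K, at the transition pressures discussed in the text.
# Extending the linear relations up to ~2.8 GPa is an assumption for illustration.
def j_prime(p_gpa: float) -> float:
    return 75.0 - 8.3 * p_gpa   # K

def j(p_gpa: float) -> float:
    return 46.7 - 3.7 * p_gpa   # K

for p in (0.0, 1.8, 2.78):      # ambient, DS-PS onset, PS-AF first-order transition
    print(f"P = {p:4.2f} GPa: J' = {j_prime(p):5.1f} K, "
          f"J = {j(p):5.1f} K, J/J' = {j(p) / j_prime(p):.3f}")
```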
Figure 3: **The complete \(P\)-\(T\) phase diagram of SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\).** The acronyms PM, DS, PSL, PS, AFL, and AF stand for the paramagnetic, dimer singlet, plaquette singlet liquid, plaquette singlet, antiferromagnetic liquid, and antiferromagnetic phases, respectively. \(P_{1st}\) represents the pressure of the first-order transition. The solid and half-filled markers are the data obtained from two samples measured separately in this study, while the hollow ones are the data obtained from our previous study [3], all of which are in good agreement. The inset zooms in on the transition near the boundary of the PS and AF phases. The red star is the critical pressure point of 2.78 GPa, which is determined by an extrapolation of the onset temperatures of the PS and AF phases (see the red and green dashed lines). The solid lines which go through the data points of \(T_{\rm DS}\), \(T_{\rm PSL}\) and \(T_{\rm AFL}\) (the crossover scale of the AF correlations, denoted by the green triangles) are from an ED calculation on the SS model with the functions \(J^{{}^{\prime}}(P)\) and \(J(P)\) mentioned above [3].

Fortunately, this 'by-product' provides us with a way to determine the critical pressure/temperature point for the PS-AF transition: the two phase boundaries meet at nearly the same pressure/temperature point, which is defined as the bi-critical point in phase transition theory. The observation of the bi-critical transition between the PSL-PS and AFL-AF phases in SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) is reminiscent of what has been observed in the Fe-based superconductor Ca\({}_{0.73}\)La\({}_{0.27}\)FeAs\({}_{2}\)[8], in which the sample also undergoes a first-order transition from the AF phase to a superconducting (SC) phase, accompanied by PM-AF and PM-SC transitions meeting at a bi-critical point. A similar observation has also been made in the heavy-fermion compound YbAgGe [57].

## IV Pressure dependent gap of the DS, PS and AF phases

Since the gap value extracted from a low-temperature fit to \(C/T\) can further diagnose the nature of the phases and the phase transitions, we fit the low-temperature data based on the following form [3]: \(C/T=a_{0}+a_{1}T^{2}+(a_{2}/T^{3})\mathrm{exp}(-\Delta/T)\), where \(\Delta\) is the activation gap and \(a_{0}\), \(a_{1}\), and \(a_{2}\) are fitting parameters. Figure 4 shows the gap extracted by fitting \(C(T)/T\) to an exponential form plus terms accounting for the heater, wires and phonons, together with the data reported previously [3; 2; 35]. Our results reveal that both the DS and PS phases are gapped, in good agreement with both the experimental and theoretical results [3; 37; 38; 39; 41; 42; 2]. We also observe a large reduction of the gap at the boundary of the PS-AF phases. The clear drops of the gap at the boundaries of the DS-PS and PS-AF phase transitions further confirm that these transitions are first-order ones. By the same method, we also fit the low-temperature data measured at pressures larger than 2.9 GPa, and find that the plots of \(C/T\) versus \(T\) cannot resolve an activation gap but rather show a power-law decay with temperature. This is the hallmark of the existence of the AF phase, based on its gapless Goldstone modes [58]. As a result, we propose that SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) transforms completely to an AF phase via a first-order transition at 2.78 GPa.
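The low-temperature fitting form quoted above can be illustrated with a short script; this is a sketch with synthetic data and illustrative parameter values, not the authors' analysis code.

```python
# Sketch of the low-temperature fit C/T = a0 + a1*T^2 + (a2/T^3)*exp(-Delta/T)
# used to extract the activation gap Delta. Synthetic data and starting values
# are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def c_over_t(T, a0, a1, a2, delta):
    return a0 + a1 * T**2 + (a2 / T**3) * np.exp(-delta / T)

T = np.linspace(0.4, 4.0, 80)                   # K, roughly the measured range
rng = np.random.default_rng(1)
data = c_over_t(T, 0.01, 0.002, 5.0, 6.0) * (1 + 0.01 * rng.normal(size=T.size))

popt, pcov = curve_fit(c_over_t, T, data, p0=[0.01, 0.001, 1.0, 5.0])
print(f"Delta = {popt[3]:.2f} K +/- {np.sqrt(pcov[3, 3]):.2f} K")
```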
## V Discussion

The observation of the pressure-induced first-order transition between the plaquette singlet phase and the antiferromagnetic phase in SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) delivers a clear message that the DQCP is lost in this pressurized material. In the recent theoretical developments on several important DQCP lattice models, both quantum spin models [5; 6; 7; 34] and interacting Dirac fermion models with Kane-Mele and plaquette interactions [4; 33], one consistently finds that the seemingly continuous transitions therein, whether from VBS to AF phases or from quantum spin Hall (QSH) to SC phases, where both sides are spontaneous symmetry-breaking phases and the transitions are tuned by a single parameter, are not compatible, as continuous transitions with a CFT description, with more fundamental "first principle" tests. These tests include the conformal bootstrap bounds on critical exponents for emergent continuous symmetries and the positivity requirement of the entanglement entropy [4; 59; 6; 7; 32; 33; 34]. The messages from these analyses imply that the DQCPs in their present lattice model realizations are either first-order transitions or some other more complicated scenarios, such as multicritical points and complex fixed points, for which at the moment one does not have a controlled theoretical framework to calculate their precise properties [24; 31; 60]. Our experimental results in this study therefore come as a great relief, in that the PS-AF transition in pressurized SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) at zero field and in the ground state is a first-order one, which puts the final piece of the puzzle onto the phase diagram and clarifies the two-decades-long debates on DQCPs in SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) and related models. Our results also resonate nicely with the recent quantum entanglement understanding that the proposed DQCPs in 2D quantum spin or interacting Dirac fermion models cannot be described by CFT [4; 5; 6; 7; 33; 34]. Another interesting observation in this study is that the \(C/T\) data measured at very low temperature (roughly the \(a_{0}\) in the gap fitting function) exhibit an enhanced value at temperatures lower than the bi-critical point (see Fig. S3 in the SI). This might suggest that there exist enhanced fluctuations in the vicinity of the bi-critical point and imply that, if one could further suppress the bi-critical point by competing interactions or geometric frustration, a DQCP or even the speculated quantum spin liquid state separating the PS and AF phases [50; 51; 52] could emerge along such hypothesized tuning axes. Similar behavior has been observed at the bi-critical point of the heavy-fermion compound YbAgGe [57] with the specific heat, Grüneisen parameter and the magnetocaloric effect; it will be of interest to perform the latter measurements here.

Figure 4: **Pressure dependence of the gap extracted from low-temperature fits to \(C/T\).** The red squares and purple diamonds are the data obtained in this work and the remaining symbols are the data from Refs. [2; 3]. The three insets are the plots of \(C/T\) versus temperature measured at three typical pressures, with the fitting details at low temperature.
It is also interesting to note that the phenomenon observed in this study shares similarities with that seen in the pressurized Fe-based superconductor Ca\({}_{0.73}\)La\({}_{0.27}\)FeAs\({}_{2}\)[8], where a first-order transition between the AF and SC phases is observed and, in that case, the two finite-temperature phase boundaries (of the PM-AF and PM-SC transitions) also meet at a bi-critical point. It is certainly of significance to carry out a comparison of the two different materials, SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\) and Ca\({}_{0.73}\)La\({}_{0.27}\)FeAs\({}_{2}\), by distinguishing their common features and peculiarities, and to find clues for understanding the superconductivity in the proximity of an AF phase. It is expected that our results will provide a valuable experimental foundation and theoretical inspiration for eventually extending the paradigm of quantum phase transitions beyond Landau-Ginzburg-Wilson.

## Methods

The high-pressure heat capacity of the samples is derived from alternating-current (ac) calorimetry. In this technique, a small temperature oscillation (\(\Delta T\)) generated by a heater glued to one side of a sample is converted to an ac voltage signal that can be detected by a chromel-AuFe (0.07%) thermocouple fixed on the opposite side. The sample is loaded into a Teflon capsule filled with a mixture of glycerin-water (3:2), which can maintain the sample in a hydrostatic pressure environment. The ac calorimetry method adapted to high pressures is described in Refs. [61; 62]. Pressure is determined from the pressure dependence of \(T_{c}\) of Pb [63], which is placed together with the sample in the capsule. The detailed high-pressure heat capacity measurement platform is discussed in the SI.

## Data availability

The data of this work are available upon reasonable request.

## References

* [1] P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. **78**, 17 (2006). * [2] M. Zayed, C. Rüegg, J. Larrea J, A. Läuchli, C. Panagopoulos, S. Saxena, N. Ellerby, D. McMorrow, T. Strässle, S. Klotz, G. Hamel, R. A. Sadykov, V. Pomjakushin, M. Boehm, M. Jiménez-Ruiz, A. Schneidewind, E. Pomjakushina, M. Stingaciu, K. Conder, and H. M. Rønnow, Nature Physics **13**, 962 (2017). * [3] J. Guo, G. Sun, B. Zhao, L. Wang, W. Hong, V. A. Sidorov, N. Ma, Q. Wu, S. Li, Z. Y. Meng, A. W. Sandvik, and L. Sun, Phys. Rev. Lett. **124**, 206602 (2020). * [4] Y. Da Liao, G. Pan, W. Jiang, Y. Qi, and Z. Y. Meng, arXiv preprint arXiv:2302.11742 (2023). * [5] Y.-C. Wang, N. Ma, M. Cheng, and Z. Y. Meng, SciPost Physics **13**, 123 (2022). * [6] J. Zhao, Y.-C. Wang, Z. Yan, M. Cheng, and Z. Y. Meng, Phys. Rev. Lett. **128**, 010601 (2022). * [7] M. Song, J. Zhao, L. Janssen, M. M. Scherer, and Z. Y. Meng, arXiv preprint arXiv:2307.02547 (2023). * [8] Y. Zhou, S. Jiang, Q. Wu, V. A. Sidorov, J. Guo, W. Yi, S. Zhang, Z. Wang, H. Wang, S. Cai, K. Yang, S. Jiang, A. Li, N. Ni, G. Zhang, L. Sun, and Z. Zhao, Science Bulletin **62**, 857 (2017). * [9] T. Senthil, A. Vishwanath, L. Balents, S. Sachdev, and M. P. Fisher, Science **303**, 1490 (2004). * [10] T. Senthil, L. Balents, S. Sachdev, A. Vishwanath, and M. P. A. Fisher, Phys. Rev. B **70**, 144407 (2004). * [11] L. Landau, E. Lifshitz, and E. M. Pitaevskii, Statistical physics (1999). * [12] K. G. Wilson, Rev. Mod. Phys. **55**, 583 (1983). * [13] S. Sachdev, _Quantum Phase Transitions_ (Cambridge University Press, 2001). * [14] V. Berezinskii, Sov. Phys. JETP **32**, 493 (1971). * [15] J. M. Kosterlitz and D. J. Thouless, Journal of Physics C: Solid State Physics **6**, 1181 (1973).
* [16] M. Barkeshli and X.-G. Wen, Phys. Rev. Lett. **105**, 216804 (2010). * [17] X.-G. Wen, Rev. Mod. Phys. **89**, 041004 (2017). * [18] A. W. Sandvik, Phys. Rev. Lett. **98**, 227202 (2007). * [19] J. Lou, A. W. Sandvik, and N. Kawashima, Phys. Rev. B **80**, 180414 (2009). * [20] M. S. Block, R. G. Melko, and R. K. Kaul, Phys. Rev. Lett. **111**, 137202 (2013). * [21] A. Nahum, J. T. Chalker, P. Serna, M. Ortun'o, and A. M. Somoza, Phys. Rev. X **5**, 041048 (2015). * [22] A. Nahum, P. Serna, J. T. Chalker, M. Ortuno, and A. M. Somoza, Phys. Rev. Lett. **115**, 267203 (2015). * [23] H. Shao, W. Guo, and A. W. Sandvik, Science **352**, 213 (2016). * [24] C. Wang, A. Nahum, M. A. Metlitski, C. Xu, and T. Senthil, Phys. Rev. X **7**, 031601 (2017). * [25] Y. Q. Qin, Y.-Y. He, Y.-Z. You, Z.-Y. Lu, A. Sen, A. W. Sandvik, C. Xu, and Z. Y. Meng, Phys. Rev. X **7**, 031052 (2017). * [26] N. Ma, G.-Y. Sun, Y.-Z. You, C. Xu, A. Vishwanath, A. W. Sandvik, and Z. Y. Meng, Phys. Rev. B **98**, 174421 (2018). * [27] N. Ma, Y.-Z. You, and Z. Y. Meng, Phys. Rev. Lett. **122**, 175701 (2019). * [28] B. Zhao, J. Takahashi, and A. W. Sandvik, Phys. Rev. Lett. **125**, 257204 (2020). * [29] Y. Da Liao, X. Y. Xu, Z. Y. Meng, and Y. Qi, Phys. Rev. B **106**, 075111 (2022). * [30] Y. Liu, Z. Wang, T. Sato, M. Hohenadler, C. Wang, W. Guo, and F. F. Assaad, Nature Communications **10**, 2658 (2019). * [31] A. Nahum, Phys. Rev. B **102**, 201116 (2020). * [32] D. Poland, S. Rychkov, and A. Vichi, Rev. Mod. Phys. **91**, 015002 (2019). * [33] Z. H. Liu, W. Jiang, B.-B. Chen, J. Rong, M. Cheng, K. Sun, Z. Y. Meng, and F. F. Assaad, Phys. Rev. Lett. **130**, 266501 (2023). * [34] Z. H. Liu, Y. Da Liao, G. Pan, M. Song, J. Zhao, W. Jiang, C.-M. Jian, Y.-Z. You, F. F. Assaad, Z. Y. Meng, and C. Xu, arXiv e-prints, arXiv:2308.07380 (2023), arXiv:2308.07380 [cond-mat.str-el]. * [35] J. L. Jim'nez, S. Crone, E. Fogh, M. E. Zayed, R. Lortz, E. Pomjakushina, K. Conder, A. M. La'uchli, L. Weber, S. Wessel, A. Honecker, B. Normand, C. Ru'egg, P. Corboz, H. M. Ronnow, and F. Mila, Nature **592**, 370 (2021). * [36] Y. Cui, L. Liu, H. Lin, K.-H. Wu, W. Hong, X. Liu, C. Li, Z. Hu, N. Xi, S. Li, R. Yu, A. W. Sandvik, and W. Yu, Science **380**, 1179 (2023). * (37) H. Kageyama, H. Suzuki, M. Nohara, K. Onizuka, H. Takagi, and Y. Ueda, Physica B: Condensed Matter **281-282**, 667 (2000). * (38) A. Koga and N. Kawakami, Phys. Rev. Lett. **84**, 4461 (2000). * (39) H. Kageyama, K. Yoshimura, R. Stern, N. V. Mushnikov, K. Onizuka, M. Kato, K. Kosuge, C. P. Slichter, T. Goto, and Y. Ueda, Phys. Rev. Lett. **82**, 3168 (1999). * (40) R. W. Smith and D. A. Keszler, Journal of Solid State Chemistry **93**, 430 (1991). * (41) T. Waki, K. Arai, M. Takigawa, Y. Saiga, Y. Uwatoko, H. Kageyama, and Y. Ueda, Journal of the Physical Society of Japan **76**, 073710 (2007).. * (42) M. Takigawa, T. Waki, M. Horvatic, and C. Berthier, Journal of the Physical Society of Japan **79**, 011005 (2010). * (43) B. S. Shastry and B. Sutherland, Physica B+ C **108**, 1069 (1981). * (44) P. Corboz and F. Mila, Phys. Rev. B **87**, 115144 (2013). * (45) M. Albrecht and F. Mila, Europhysics Letters **34**, 145 (1996). * (46) S. Miyahara and K. Ueda, Phys. Rev. Lett. **82**, 3701 (1999). * (47) E. Mu'ller-Hartmann, R. R. P. Singh, C. Knetter, and G. S. Uhrig, Phys. Rev. Lett. **84**, 1808 (2000). * (48) C. Boos, S. P. G. Crone, I. A. Niesen, P. Corboz, K. P. Schmidt, and F. Mila, Phys. Rev. B **100**, 140413 (2019). * (49) N. Xi, H. Chen, Z. Y. Xie, and R. 
Yu, Phys. Rev. B **107**, L220408 (2023). * (50) L. Wang and A. W. Sandvik, Phys. Rev. Lett. **121**, 107202 (2018). * (51) W.-Y. Liu, S.-S. Gong, Y.-B. Li, D. Poilblanc, W.-Q. Chen, and Z.-C. Gu, Science Bulletin **67**, 1034 (2022). * (52) J. Yang, A. W. Sandvik, and L. Wang, Phys. Rev. B **105**, L060409 (2022). * (53) P. Wang, C. Liu, R. Yang, S. Cai, T. Xie, J. Guo, J. Zhao, J. Han, S. Long, Y. Zhou, Y. Li, X. Li, H. Luo, S. Li, Q. Wu, X. Qiu, T. Xiang, and L. Sun, Phys. Rev. B **108**, 054415 (2023). * (54) Y. Zhou, Q. Wu, P. F. Rosa, R. Yu, J. Guo, W. Yi, S. Zhang, Z. Wang, H. Wang, S. Cai, _et al._, Science bulletin **62**, 1439 (2017). * (55) J. Paglione and R. L. Greene, Nature physics **6**, 645 (2010). * (56) C. Yang, J. Guo, S. Cai, Y. Zhou, V. A. Sidorov, C. Huang, S. Long, Y. Shi, Q. Chen, S. Tan, Q. Wu, P. Coleman, T. Xiang, and L. Sun, Phys. Rev. B **106**, 024503 (2022). * (57) Y. Tokiwa, M. Garst, P. Gegenwart, S. L. Bud'ko, and P. C. Canfield, Phys. Rev. Lett. **111**, 116401 (2013). * (58) J. Goldstone, A. Salam, and S. Weinberg, Phys. Rev. **127**, 965 (1962). * (59) Y. Nakayama and T. Ohtsuki, Phys. Rev. Lett. **117**, 131601 (2016). * (60) R. Ma and C. Wang, Phys. Rev. B **102**, 020407 (2020). * (61) A. Eichler and W. Gey, Review of Scientific Instruments **50**, 1445 (1979). * (62) V. Sidorov, J. Thompson, and Z. Fisk, Journal of Physics: Condensed Matter **22**, 406002 (2010). * (63) A. Eiling and J. Schilling, Journal of Physics F: Metal Physics **11**, 623 (1981). ## Acknowledgements This work was supported by the National Key Research and Development Program of China (Grant No. 2021YFA1401800 and 2022YFA1403900, No. 2022YFA1403400, No. 2021YFA1400400), the NSF of China (Grant Numbers Grants No. U2032214, 12122414, 12104487 and 12004419), and the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB25000000, No. XDB33000000), the K. C. Wong Education Foundation (GJTD-2020-01). J. G. and S.C. are grateful for supports from the Youth Innovation Promo- tion Association of the CAS (2019008) and the China Postdoc- toral Science Foundation (E0BK111). C. H., B.-B.C and Z. Y. M. acknowledge the support from the Research Grants Council (RGC) of Hong Kong Special Administrative Region (SAR) of China (Projects Nos. 17301420, 17301721, AoE/P-701/20, 17309822 and HKU C7037-22G), the ANR/RGC Joint Research Scheme sponsored by the RGC of Hong Kong SAR of China and French National Research Agency (Project No. A HKU703/22) and the HKU Seed Funding for Strategic Interdisciplinary Research "Many-body paradigm in quantum moire' material research". Corresponding authors Correspondence to Zi Yang Meng ([email protected]) or Liling Sun ([email protected]).
2309.10188
Atmospheric Retrieval of L Dwarfs: Benchmarking Results and Characterizing the Young Planetary Mass Companion HD 106906 b in the Near-Infrared
We present model constraints on the atmospheric structure of HD 106906 b, a planetary-mass companion orbiting at a ~700 AU projected separation around a 15 Myr-old stellar binary, using the APOLLO retrieval code on spectral data spanning 1.1-2.5 $\mu$m. C/O ratios can provide evidence for companion formation pathways, as such pathways are ambiguous both at wide separations and at star-to-companion mass ratios in the overlap between the distributions of planets and brown dwarfs. We benchmark our code against an existing retrieval of the field L dwarf 2M2224-0158, returning a C/O ratio consistent with previous fits to the same JHKs data, but disagreeing in the thermal structure, cloud properties, and atmospheric scale height. For HD 106906 b, we retrieve C/O $=0.53^{+0.15}_{-0.25}$, consistent with the C/O ratios expected for HD 106906's stellar association and therefore consistent with a stellar-like formation for the companion. We find abundances of H$_2$O and CO near chemical equilibrium values for a solar metallicity, but a surface gravity lower than expected, as well as a thermal profile with sharp transitions in the temperature gradient. Despite high signal-to-noise and spectral resolution, more accurate constraints necessitate data across a broader wavelength range. This work serves as preparation for subsequent retrievals in the era of JWST, as JWST's spectral range provides a promising opportunity to resolve difficulties in fitting low-gravity L dwarfs, and also underscores the need for simultaneous comparative retrievals on L dwarf companions with multiple retrieval codes.
Arthur D. Adams, Michael R. Meyer, Alex R. Howe, Ben Burningham, Sebastian Daemgen, Jonathan Fortney, Mike Line, Mark Marley, Sascha P. Quanz, Kamen Todorov
2023-09-18T22:33:54Z
http://arxiv.org/abs/2309.10188v1
Atmospheric Retrieval of L Dwarfs: Benchmarking Results and Characterizing the Young Planetary Mass Companion HD 106906 b in the Near-Infrared ###### Abstract We present model constraints on the atmospheric structure of HD 106906 b, a planetary-mass companion orbiting at a \(\sim\)700 AU projected separation around a 15 Myr-old stellar binary, using the APOLLO retrieval code on spectral data spanning 1.1-2.5 \(\mu\)m. C/O ratios can provide evidence for companion formation pathways, as such pathways are ambiguous both at wide separations and at star-to-companion mass ratios in the overlap between the distributions of planets and brown dwarfs. We benchmark our code against an existing retrieval of the field L dwarf 2M2224-0158, returning a C/O ratio consistent with previous fits to the same \(JHK_{\rm s}\) data, but disagreeing in the thermal structure, cloud properties, and atmospheric scale height. For HD 106906 b, we retrieve C/O = \(0.53^{+0.15}_{-0.25}\), consistent with the C/O ratios expected for HD 106906's stellar association and therefore consistent with a stellar-like formation for the companion. We find abundances of H\({}_{2}\)O and CO near chemical equilibrium values for a solar metallicity, but a surface gravity lower than expected, as well as a thermal profile with sharp transitions in the temperature gradient. Despite high signal-to-noise and spectral resolution, more accurate constraints necessitate data across a broader wavelength range. This work serves as preparation for subsequent retrievals in the era of _JWST_, as _JWST_'s spectral range provides a promising opportunity to resolve difficulties in fitting low-gravity L dwarfs, and also underscores the need for simultaneous comparative retrievals on L dwarf companions with multiple retrieval codes. Atmospheric science (116), Exoplanet astronomy (486), Exoplanet atmospheres (487), Exoplanet formation (492), Exoplanet structure (495), L dwarfs (894), Planetary atmospheres (1244), Exoplanet atmospheric composition (2021), Extrasolar gaseous planets (2172)

## 1 Introduction Mass is typically taken as the discriminator between planets and brown dwarfs, based on the minimum of \(\sim\)13 Jupiter masses needed for sustained deuterium fusion (Spiegel et al., 2011). While one can use mass alone to define the classes of planets and brown dwarfs, there is an alternate definition based on the formation pathway of an object, as more "star"-like or "planet"-like (see e.g. Janson et al., 2012; Pepe et al., 2014; Currie et al., 2014, 2020; Schlaufman, 2018). These definitions may produce similar categories of planetary and brown dwarf companions as with the mass definition. However, 13 \(M_{\rm J}\) is not known to be a strict upper limit to forming companions as planets (i.e. that the companion forms within a circumstellar disk surrounding a young star) -- nor is 13 \(M_{\rm J}\) a strict minimum below which objects may not collapse from a molecular cloud.
In exoplanet and brown dwarf demographics, there is a local minimum in the observed distributions of companions' masses as ratios to their hosts' masses, as seen in radial velocity and astrometry (Sahlmann et al., 2010), direct imaging (Reggiani et al., 2016; Vigan et al., 2017; Nielsen et al., 2019), and microlensing (Shvartzvald et al., 2016; Suzuki et al., 2016). That is, using the mass definition of planets and brown dwarfs, it is both difficult to form planets at masses as large as \(\sim\)1% of their hosts, and also difficult to form brown dwarfs with mass ratios that small. It is companions in this region that serve as the more ambiguous cases when using formation history as the criterion for distinguishing planets and brown dwarfs. How can we tell the formation pathway for individual companions? The chemical composition of a companion reflects its formation pathway, especially in the carbon-to-oxygen (C/O) ratio of its envelope relative to those measured in its host star1. Planetary formation pathways are themselves generally divided into core accretion versus gravitational instability (see e.g. Forgan and Rice, 2013; Forgan et al., 2015, 2018; Kratter and Lodato, 2016). Core accretion allows a planet's C/O ratio to diverge from its host's C/O based on where and when companions accrete their material in the disk. The H\({}_{2}\)O, CO, and CO\({}_{2}\) ice lines determine the relative fraction of C and O contained in gases versus solids as a function of distance from the host star (e.g. Mousis et al., 2009; Öberg et al., 2011; Madhusudhan et al., 2011). Since disk chemistry also evolves in time, planet compositions will reflect the chemical evolution in the disk over their development (e.g. Booth et al., 2017; Madhusudhan et al., 2017), both in the disk midplane (Eistrup et al., 2016, 2018) and vertically (Cridland et al., 2020). For planets formed via gravitational instability the formation mechanism is different, but there is still ample opportunity for the atmosphere to evolve its chemistry away from a stellar-like C/O ratio (in this case, because the protoplanetary fragment has time to stratify its C and O compounds between the core and envelope; see e.g. Ilee et al., 2017). Footnote 1: The metallicity can also provide important evidence of planet-like formation, especially for core accretion. See for example the review in Madhusudhan (2019). While a non-stellar C/O ratio will certainly be reflected in the companion's observable emission spectrum, the presence of clouds in the companion photosphere requires a careful modeling approach. Many young (\(\lesssim 100\) Myr) companions in the target mass ratio range fall in the L and T spectral types (Kirkpatrick, 2005). In the warmer L dwarfs (1300 K\(\lesssim T_{\rm eff}\lesssim\)2000 K), a variety of cloud species become important opacity sources (see e.g. Morley et al., 2012; Marley et al., 2013; Helling and Casewell, 2014; Helling, 2021); silicates such as enstatite (MgSiO\({}_{3}\)) and forsterite (Mg\({}_{2}\)SiO\({}_{4}\)), iron (Fe), aluminum oxides (e.g. Al\({}_{2}\)O\({}_{3}\)), and quartz (SiO\({}_{2}\)) can all contribute significantly in column density to L dwarf photospheres (Helling and Woitke, 2006; Gao et al., 2020; Woitke et al., 2020; Burningham et al., 2021). One important limitation in using observed gas abundances alone to constrain a C/O ratio is that oxygen-rich cloud species can condense out a significant amount of the oxygen budget at the pressures where they reside.
This biases the gas-derived C/O constrained from an emission spectrum, as it will be carbon-rich relative to the cumulative envelope C/O of the companion at the time of formation. We discuss our results in the context of this assumption in the discussion (SS7). Fitting brown dwarf spectra has traditionally relied on interpolations using grids of forward models that rely on specific input physics (e.g. Burrows and Liebert, 1993; Allard et al., 1996; Marley et al., 1996; Tsuji et al., 1996; Saumon et al., 2000; Geballe et al., 2001; Hubeny and Burrows, 2007; Saumon and Marley, 2008; Yamamura et al., 2010; Patience et al., 2012). There have been numerous analyses of field brown dwarfs that use such libraries of model spectra to constrain global properties such as effective temperature, metallicity, age, surface gravity, luminosity, mass, and in some cases cloud layers (e.g. Allers et al., 2007; Cushing et al., 2007; Cruz et al., 2009; Stephens et al., 2009; Rice et al., 2010; Allers and Liu, 2013; Bonnefoy et al., 2014; Martin et al., 2017). The link between theory and observation for sub-stellar atmospheres has been evolving for quite some time, as reviewed in works such as Burrows et al. (2001); Marley et al. (2013); Marley and Robinson (2015), which has motivated a second approach to spectral fitting, namely atmospheric retrieval.2 Retrievals opt to generate forward models in parallel with a parameter estimation technique such as Markov-Chain Monte Carlo (MCMC) or Nested Sampling, rather than interpolate from a pre-computed grid. Such an approach is computationally intensive and typically requires one to make simplifying assumptions in the parametrization that may or may not be physically consistent. However, the potential benefit is a more direct and precise constraint on key physical parameters, which may be warranted if the spectra are sensitive to small changes in the parameters, such as those of the temperature-pressure (T-P) profile, molecular abundances, or significant cloud opacities. We now have the results of a growing number of L dwarf retrievals to guide us in interpreting our data. Burningham et al. (2017) retrieve atmospheric properties from the near-infrared spectra of 2 mid-L field dwarfs using the Brewster retrieval code. This is then expanded into the mid-infrared in Burningham et al. (2021), constraining multiple cloud species including enstatite (MgSiO\({}_{3}\)), quartz (SiO\({}_{2}\)), and iron (Fe). Gonzales et al. (2020) apply Brewster to a L+T sub-dwarf binary and provide evidence for their co-formation as well as evidence for clouds in the primary. Peretti et al. (2019) use the retrieval code HELIOS-R on a combination of thermal infrared photometry and \(R\sim 30\)\(J\)-band data and place their retrieved chemical composition in context with both astrometric and radial velocity measurements. Molliere et al. (2020) employ the petitRADTRANS code to fit the near-infrared spectrum of the directly imaged planet HR 8799 e, finding an apparent degeneracy between solutions with significant cloud opacity, and those with less cloudy atmospheres but with much shallower temperature gradients. This reflects a theoretical prediction from Tremblin et al. (2015, 2016) that the red \(J\)-\(H\) and \(J\)-\(K\) colors of many L dwarfs may just as readily be explained by a chemo-convective instability that produces vertical temperature gradients shallower than would be expected in thermo-chemical equilibrium. Nowak et al. 
(2020) produce and compare retrievals from both the ExoREM and petitRADTRANS codes on an \(R\sim 500\) \(K\)-band spectrum of \(\beta\) Pictoris b, finding excellent agreement between the retrieved C/O ratios of the two codes. Wang et al. (2022) performed a retrieval on \(K\) band data of HR 7672 B using the highest resolution spectrum employed in an L dwarf retrieval to date, at \(R\sim 35000\); they were able to precisely constrain the H\({}_{2}\)O and CO abundances, finding a C/O ratio consistent with the primary. Lueber et al. (2022) present a systematic retrieval of brown dwarfs across the L and T spectral types at an average resolution \(R\sim 100\) across the near-infrared, but do not find any consistency or trends in the retrieved cloud properties for the L dwarfs. However, when considering mid-infrared (here, 5-14 \(\mu\)m) spectra at similar resolution, Suarez and Metchev (2022) find silicate features emerge starting at a spectral type of approximately L2, continuing through the mid-Ls, with the variability of the brown dwarf correlating positively with the presence and strength of silicate absorption. In this work we will present our own retrieval efforts on a widely separated companion classified as an early L dwarf, and will use an additional retrieval on a previously-studied field L dwarf to compare the results of our code with those of a different retrieval code on an object in a similar spectral class. We discuss the available data on the HD 106906 system and its companion in §2 and describe the components of our atmospheric forward model and retrieval code in §3. We test the ability of our code to converge on consistent results by using synthetic data in §4, benchmark the code by modeling a field L dwarf that has previously been retrieved with a different code (§5), and finally show our results for the L dwarf companion HD 106906 b in §6. Finally, we discuss the interpretation and limitations in our retrieval in §7 and summarize our key findings in §8.

## 2 The HD 106906 system

\begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Name} & Value & Reference \\ \hline Distance (pc) & \(103.3\pm 0.4\) & Brown et al. (2021) \\ Projected separation (\(\arcsec\)) & \(7.11\pm 0.03\) & Bailey et al. (2014) \\ Projected separation (AU) & \(734\pm 4\) & Calculated from above. \\ Age (Myr) & \(15\pm 3\)1 & Pecaut et al. (2012) \\ \(M_{\star}/M_{\odot}\) (binary, total) & \(2.58\pm 0.04\) & Lagrange et al. (2016) \\ \(M_{\rm comp}/M_{\rm J}\) & \(11\pm 2\) & Bailey et al. (2014) \\ \(\log_{10}(L/L_{\odot})\) & \(-3.65\pm 0.08\) & Daemgen et al. (2017) \\ \(T_{\rm eff}\) (K) & \(1820\pm 240\) & Daemgen et al. (2017) \\ Spectral Type & L1.5 \(\pm 1.0\) & Daemgen et al. (2017) \\ \hline \end{tabular} \end{table} Table 1: Fundamental properties for the HD 106906 system and companion HD 106906 b.

### System Properties

The HD 106906 system consists of a pair of nearly identical-mass young F-type stars (combined mass \(2.6M_{\odot}\)) orbiting each other at \(0.36\pm 0.002\) AU (Lagrange et al., 2016; Rodet et al., 2017; De Rosa and Kalas, 2019). Its membership in the Lower Centaurus Crux (LCC) association (Gagne et al., 2018) places the system's age at \(15\pm 3\) Myr (Pecaut et al., 2012; Pecaut and Mamajek, 2016).
A companion, HD 106906 b, has an estimated mass of \(11\pm 2M_{\rm J}\) from fits to evolutionary \begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Name} & Value & Reference \\ \hline \hline \multicolumn{1}{c}{\(J\) (1.10–1.35 \(\mu\)m)} & & \\ \hline Flux Density (\(10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)\(\mu\)m\({}^{-1}\)) & \(2.86^{+0.91}_{-0.69}\) & Bailey et al. (2014) \\ Magnitude (2MASS) & \(17.6\pm 0.3\) & \\ Magnitude (STMAG equivalent) & \(20.3\pm 0.3\) & Calculated.b \\ Resolution & \(\approx 200\) & Daemgen et al. (2017) \\ S/N per pixel & \(\approx 20\) & \\ \hline \multicolumn{1}{c}{\(H\) (1.45–1.81 \(\mu\)m)} & & \\ \hline Flux Density (\(10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)\(\mu\)m\({}^{-1}\)) & \(3.68^{+1.17}_{-0.89}\) & Estimated.a \\ Magnitude (2MASS) & \(16.2\pm 0.3\) & Calculated.b \\ Magnitude (STMAG equivalent) & \(20.0\pm 0.3\) & Calculated.b \\ Resolution & \(\approx 3000\) & Daemgen et al. (2017) \\ S/N per pixel & \(\approx 20\)–50 & \\ \hline \multicolumn{1}{c}{\(Ks\) (1.94–2.46 \(\mu\)m)} & & \\ \hline Flux Density (\(10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)\(\mu\)m\({}^{-1}\)) & \(2.81^{+0.16}_{-0.15}\) & Bailey et al. (2014) \\ Magnitude (2MASS) & \(15.46\pm 0.06\) & \\ Magnitude (STMAG equivalent) & \(20.28\pm 0.06\) & Calculated.b \\ Resolution & \(\approx 4000\) & Daemgen et al. (2017) \\ S/N per pixel & \(\approx 20\)–40 & \\ \hline HST/WFC3/F127M (centered at 1.274 \(\mu\)m) & & \\ \hline Magnitude (STMAG) & \(19.41\pm 0.01\) & \\ Flux Density (\(10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)\(\mu\)m\({}^{-1}\)) & \(6.28\pm 0.08\) & Zhou et al. (2020b) \\ \hline HST/WFC3/F139M (centered at 1.384 \(\mu\)m) & & \\ \hline Magnitude (STMAG) & \(19.97\pm 0.01\) & \\ Flux Density (\(10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)\(\mu\)m\({}^{-1}\)) & \(3.74\pm 0.05\) & Zhou et al. (2020b) \\ \hline HST/WFC3/F153M (centered at 1.532 \(\mu\)m) & & \\ \hline Magnitude (STMAG) & \(19.79\pm 0.01\) & \\ Flux Density (\(10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)\(\mu\)m\({}^{-1}\)) & \(4.39\pm 0.04\) & Zhou et al. (2020b) \\ \hline \end{tabular} \end{table} Table 2: Photometric and spectral properties for the companion HD 106906 b. models and sits at a projected angular separation of \(7\arcsec.11\pm 0\arcsec.03\)(Bailey et al., 2014). At a distance of \(103.3\pm 0.4\) pc (Brown et al., 2021), this places HD 106906 b at a projected physical separation of \(734\pm 4\) AU (Zhou et al., 2020). Given its mass and orbit, HD 106906 b straddles the line between planets and brown dwarfs. Its exact formation pathway remains uncertain, as its mass ratio relative to -- and remarkably wide separation from -- its binary hosts pose challenges for all possible scenarios. To date, no studies of the HD 106906 system have provided substantial evidence for one formation pathway over another for the companion. On the planet formation side, there are several efforts to understand the dynamical history of the HD 106906 system, given the misalignment of the companion with the observed debris disk (Bailey et al., 2014; Kalas et al., 2015; Lagrange et al., 2016; Bryan et al., 2021). A core accretion pathway would require HD 106906 b to have formed interior to \(\lesssim\)100 AU, which then requires a mechanism to evolve its orbit to the current projected separation in excess of 700 AU. Wu et al. 
(2016) highlight the possibility that HD 106906 b could have been a planet scattered outward by its binary host, though the binary-planet scattering time scale is thought to be longer than the age of the system (Jilkova and Zwart, 2015). This hypothesis has been tested with efforts to constrain its orbital motion (Rodet et al., 2019; Nguyen et al., 2020). Nguyen et al. (2020) posit that HD 106906 b's orbit could have been excited in both orbital eccentricity and inclination from an unstable resonance with the binary.3 However, this explanation is unlikely for HD 106906 in particular given the low density (\(<0.11\) stars per cubic parsec) of the LCC, which makes it unlikely that the companion's current position is the result of fly-bys scattering an initially closer-in orbit. The most recent study of the dynamical origin of this system as of the writing of this article is Moore et al. (2023), who provide an argument via numerical simulations that HD 106906 b could have been captured into the system as a planetary-mass free-floating object. They estimate that the probability of this scenario occurring within the last 5 Myr is \(\sim 10^{-6}\), which, while still low, is an order of magnitude more likely than in previous estimates. Footnote 3: This mechanism is of great interest in understanding the formation and evolution of the purported “Planet Nine” in our own Solar System, and may serve as a general mechanism for explaining the observation of planetary-mass companions at orbital separations \(\gtrsim 100\) AU. Swastik et al. (2021) demonstrate that the occurrence rate of companions at \(\sim\)10-1000 AU shows a negative correlation with host metallicity - as opposed to the positive correlation seen in close-in gas giants - for masses greater than about 4 Jupiter masses. This suggests that the formation histories of both the most massive planets and brown dwarfs may be dominated by gravitational instability, as the theory of formation by instabilities in the disk predicts a negative correlation with host metallicity (see e.g. Helled and Schubert, 2009). Bryan et al. (2021) find that the spin axis of HD 106906 b, its orbital plane, and the plane of HD 106906's circumstellar disk are all mutually misaligned. They conclude that formation via gravitational instability is a plausible mechanism, as it is most consistent with misalignment across all 3 vectors. This scenario points to a C/O ratio consistent with the hosts, as this could occur either with gravito-turbulent instability or fragmentation of a self-gravitating turbulent cloud.

### Data

Photometry for the hosts and companion spans the optical (F606W from Kalas et al., 2015 and \(z^{\prime}\) from Wu et al., 2016) through the thermal infrared (\(L^{\prime}\), see Bailey et al., 2014). Two sources of photometry exist within the \(JHK_{\rm s}\) wavelength range; see Table 2. The first is from Bailey et al. (2014), who published \(J\) and \(K_{\rm s}\) magnitudes from the Magellan Adaptive Optics (MagAO) Clio2 instrument. The second is from Zhou et al. (2020), who observed the HD 106906 system in the F127M, F139M, and F153M bands of the Wide Field Camera 3 (WFC3) of the Hubble Space Telescope (HST). The F127M bandpass overlaps with the \(J\) bandpass, as does F153M's bandpass with that of \(H\), thus providing an independent (though not precisely congruent) comparison with our estimated \(H\) magnitude. F139M's bandpass falls almost entirely within the gap between the \(J\) and \(H\) band data.
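Where Table 2 lists STMAG-equivalent magnitudes as "Calculated," the conversion from a band-averaged flux density is a one-line application of the STMAG zero point; the following is a minimal sketch, assuming the standard definition STMAG \(=-2.5\log_{10}F_{\lambda}-21.1\) with \(F_{\lambda}\) in erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\):

```python
# Minimal sketch: convert a band-averaged flux density to an STMAG-equivalent
# magnitude, as in the "Calculated" entries of Table 2. Assumes the standard
# STMAG zero point (-21.1 for F_lambda in erg s^-1 cm^-2 AA^-1).
import math

def stmag_from_flux_density(f_lambda_per_micron):
    """f_lambda_per_micron: flux density in erg s^-1 cm^-2 micron^-1."""
    f_lambda_per_angstrom = f_lambda_per_micron / 1.0e4  # 1 micron = 10^4 AA
    return -2.5 * math.log10(f_lambda_per_angstrom) - 21.1

# J and Ks flux densities from Bailey et al. (2014), in erg s^-1 cm^-2 micron^-1
print(stmag_from_flux_density(2.86e-13))  # ~20.3  (Table 2: 20.3 +/- 0.3)
print(stmag_from_flux_density(2.81e-13))  # ~20.28 (Table 2: 20.28 +/- 0.06)
```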
The highest resolution spectrum of HD 106906 b comes from Daemgen et al. (2017), who present data obtained with the SINFONI integral field spectrograph on the Very Large Telescope (VLT). The data consist of 3 spectra in \(J\), \(H\), and \(K_{\rm s}\), discontiguous with each other, with resolutions \(\approx 2000\)-\(4000\), at a S/N ratio \(\sim 20\)-\(50\) (see Table 2). They derive an effective temperature of \(1820\pm 240\) K and a spectral type of L\(1.5\pm 1.0\) based on comparisons with classifications in Allers and Liu (2013) and Bonnefoy et al. (2014). Daemgen et al. (2017) also classify the gravity-sensitive features as most consistent with a very low gravity (consistent with the "\(\gamma\)" class, as defined and used in e.g. Kirkpatrick, 2005; Cruz et al., 2009; Allers and Liu, 2013; Faherty et al., 2016). There are a few factors that affect the uncertainties in the reduced spectra. Firstly, because the observations in Daemgen et al. (2017) lack a reliable \(H\) band magnitude, we must make an estimate for the \(H\) magnitude. We choose to calculate \(J-H\) and \(H-K\) colors from a selection of low-gravity L dwarfs, published in Table 3 of Faherty et al. (2013). From these we take a weighted average with HD 106906 b's known \(J\) and \(K_{\rm s}\) magnitudes (Bailey et al., 2014) to obtain an estimate \(H=16.2\pm 0.2\). Secondly, Daemgen et al. (2017) identify regions, mostly at the edges of each spectroscopic band, that suffer large overall telluric absorption, as well as isolated wavelength ranges within each band (though concentrated in the H band) that may suffer from systematic uncertainties from the removal of telluric hydrogen in the data reduction. The reduced \(JHK_{\rm s}\) spectrum is normalized to the flux density at a specific reference wavelength in each band. A striking disagreement arises when comparing the flux densities inferred from the \(J\) versus that of the F127M photometry: the flux density derived from HST photometry is roughly twice as bright. To calculate this we take the portion of our spectrum within the F127M filter, and calculate how bright this object would be given the \(J\) magnitude, since the vast majority of the F127M band lies within the \(J\) band. From this calculation (done using the pysymphot package, see STScI Development Team, 2013) we expect to see a F127M flux density of \(\approx 3.15\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\)\(\mu\)m\({}^{-1}\), versus the \(\approx 6.28\times 10^{-13}\) as actually was observed with HST and reported in Zhou et al. (2020). L dwarfs are known to be variable from photometric monitoring, with the notable case of VHS 1256 b with a \(\sim 20\%\) variability across 1.1-1.7 \(\mu\)m with a period of \(\approx 21\)-24 hours (Bowler et al., 2020), and variability at the \(\sim 6\%\) level when extending to 5 \(\mu\)m (Zhou et al., 2020). This prevailing hypothesis for this variability is rotation (see also e.g. Zhou et al., 2016), with non-uniform cloud cover imparting brightness variations as changes in visible cloud opacity. However, Zhou et al. (2020) found that HD 106906 b was only variable at the \(\sim 1\%\) in the HST WFC3 F127M band, which is far smaller than would be needed to explain the discrepancy. One may prefer to adopt the HST photometry as the point of reference for the \(J\) and \(H\) bands, as space-based photometry provides a greater instrumental precision and does not suffer from systematics from telluric subtraction. 
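For concreteness, the band comparison described above can be sketched as follows; this is only a schematic (the paper's calculation used pysynphot), and the spectrum and filter-throughput arrays are placeholders the reader would need to supply:

```python
# Schematic of the J-band vs. F127M flux-density comparison described above.
# NOT the pysynphot call used in the paper; it only illustrates a
# throughput-weighted mean flux density over a bandpass. `wave`, `flux`
# (the JHKs spectrum) and `throughput_f127m` (the WFC3/F127M curve resampled
# onto the same wavelength grid) are placeholder arrays.
import numpy as np

def band_averaged_flux(wave, flux, throughput):
    """Throughput-weighted mean flux density over a filter bandpass."""
    return np.trapz(throughput * flux, wave) / np.trapz(throughput, wave)

# After normalizing `flux` so that its synthetic J magnitude matches
# J = 17.6 (Bailey et al. 2014), the band-averaged F127M flux density comes
# out near 3.15e-13 erg s^-1 cm^-2 micron^-1, versus the observed 6.28e-13.
```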
However, there are no direct HST comparisons for the \(K_{\rm s}\) band, and so one will still need to mix the sources of photometry to calibrate the entire spectrum. We do not resolve the disagreement between these two sources of photometry -- instead, we run our retrieval of HD 106906 b with the original \(J\) and \(K_{\rm s}\) photometric normalization. We use calibration parameters in flux normalization to attempt to capture uncertainties in this flux normalization. These uncertainties stem from the error bars on the \(J\) and \(K_{\rm s}\) magnitudes used to normalize the spectra in Daemgen et al. (2017). We do not have a direct constraint on the C/O ratio of the hosts; such a constraint is needed to compare with the C/O ratio we retrieve to test the hypothesis that the HD 106906 b formed as a substellar companion. One approach is to combine a C/O to [Fe/H] relation for planet hosts with metallicity measurements for stars in the galaxy. In the case of HD 106906, one can use metallicities from members of the Upper Centaurus-Lupus (UCL) and Lower Centaurus-Crux (LCC) associations (see Table 1 in Bubar et al., 2011), which yields a mean [Fe/H] \(=-0.12\pm 0.09\). The discussion of Nissen (2013) provides a C/O to [Fe/H] relation for planet hosts: \[\rm C/O=0.58+0.48\,[Fe/H]\,, \tag{1}\] with an RMS dispersion \(\rm\sigma(C/O)=0.06\). Using this relation with the Bubar et al. (2011) mean metallicity, we get an estimate for the typical C/O ratio for a member of the Sco-Cen association: \[\rm C/O_{Sco-Cen}=0.52\pm 0.11 \tag{2}\] which is consistent with both the solar C/O found in Nissen (2013) (\(\rm C/O_{\odot}=0.58\)), as well as the range of \(0.55\pm 0.10\) given in Asplund et al. (2009). ## 3 Atmospheric Modeling and Spectral Retrieval To model the atmosphere of HD 106906 b from its emission spectrum we employ the APOLLO code (Howe et al., 2017, 2022), a model framework for generating spectra of planets both in transit and emission.4 The core of the forward model is modeling the combination of thermal emission, absorption from atomic and molecular species, and extinction (scattering and absorption) from clouds. APOLLO uses a hemispherical approximation to the Toon et al. (1989) 2-stream scattering routine, which is used primarily for cloud scattering. To calculate the emission spectrum, the outgoing radiation is averaged over 8 angle divisions in the outgoing hemisphere. The code is designed to be modular in parametrizations of molecular abundances, temperature-pressure, observing modes, and noise models, with particular focus on observing configurations for _JWST_. APOLLO is equipped with a likelihood sampling routine that serves as the retrieval component of our model (discussed in Section SS3.4). The parameters used in our retrievals, including the bounds on their priors for the parameter estimation routine, are listed in Table 3. ### Molecular Species and Opacities The molecular radiative transfer scheme in APOLLO relies on sampling of cross-section tables for a variety of applicable species; these are pre-computed from a grid of line-by-line opacities and are derived from the sources in Table 1 of Freedman et al. (2014), with the exception of the alkalis. The opacities for the alkali lines are drawn from the Lupu et al. (2022) catalog, which derives its Na and K profiles from a series of works analyzing the interaction between atomic lines and molecular hydrogen (Allard et al., 2007, 2012, 2016, 2019). 
We employ two levels of down-sampling to create the cross-sections we use in our forward models, with resolutions of 10000 and 50000. For retrievals on real \begin{table} \begin{tabular}{l r} \hline \hline \multicolumn{1}{c}{ Name} & Range of Prior \\ \hline \multicolumn{3}{c}{Fundamental} \\ \hline \(R/R_{\rm J}\) & 0.04 – 4 \\ log g = log\({}_{10}\big{[}g\big{/}(\)cm s\({}^{-2})\big{]}\) & 2.5 – 7.5 \\ \hline \multicolumn{3}{c}{Gases (log\({}_{10}\) number abundance relative to total; see §3.1)} \\ \hline \(\rm H_{2}O\), CO, CO\({}_{2}\), H\({}_{2}\)S & \(-\)12 to \(-\)1 \\ Na+K, CrH, FeH, TiO, VO & \(-\)12 to \(-\)1 \\ \hline \multicolumn{3}{c}{Temperature-Pressure (see §3.2)} \\ \hline \multicolumn{3}{c}{Temperature at \(10^{0.5}\) bars (\(T_{0.5}\), K)} & 75 – 4000 \\ \(T_{-4}\), \(T_{-3}\), \(T_{-2}\), \(T_{-1}\), \(T_{-0}\), & 75 – 4000a \\ \(T_{1}\), \(T_{1.5}\), \(T_{2}\), \(T_{2.5}\) (K) & \\ \hline \multicolumn{3}{c}{Clouds (see §3.3)} \\ \hline \multicolumn{3}{c}{Power-law exponent (\(\alpha\))} & \(-\)10 to 10 \\ log\({}_{10}\)(\(P_{\rm top}\)/bar) & \(-\)4 to 2.5 \\ log\({}_{10}\)(\(\Delta P_{\rm cloud}\)/bar) & 0 – 6.5b \\ Reference optical depth (log\({}_{10}\,\tau\)(1 \(\mu\)m)) & \(-\)3 to 2 \\ Single-scattering albedo (\(\omega_{0}\)) & 0 – 1 \\ Cloud filling fraction (\(f\)) & 0 – 1 \\ \hline \multicolumn{3}{c}{Calibration (see §3.4)} \\ \hline \multicolumn{3}{c}{Flux normalization in \(J\) (\(\Delta J\))} & 0.5 – 1.5 \\ Flux normalization in \(H\) (\(\Delta H\)) & 0.5 – 1.5 \\ \hline \end{tabular} \end{table} Table 3: Free parameters and the range of priors for the nested sampling algorithm used in our models with the APOLLO code. All priors are uniform within the listed bounds. The calibration factors are also only used for retrieval on HD 106906 b. CO\({}_{2}\) is not included as an absorber in the retrieval of the spectrum of 2M2224. data, we choose the minimum opacity resolution that ensures the ratio between the mean opacity and data resolution is \(\gtrsim 100\), following a community recommendation to avoid introducing artificial errors from binning effects.5 We freely retrieve fractional abundances for H\({}_{2}\)O, CO, CO\({}_{2}\), H\({}_{2}\)S, CrH, FeH, TiO, and VO. We assume a solar H/He ratio, and H\({}_{2}\) and He opacities include collisionally-induced absorption (CIA). Atomic Na and K are included together as a single free parameter, where the ratio of their abundances is fixed to that of solar metallicity (see e.g. Line et al., 2015). For the molecular abundances we assume a constant mixing ratio, and initialize at values corresponding to the chemical equilibrium abundances at the pressure layer closest to the literature effective temperature (here taken to be 1820 K from Daemgen et al., 2017). The equilibrium abundances were calculated through a routine in the PICASO atmospheric radiative transfer code (Batalha et al., 2019). Footnote 5: See, for example, the discussion on opacity resampling in the PICASO documentation. We visualize the contributions of the gas to the emission spectrum by calculating a contribution function per atmospheric layer, which is given by \[C_{\rm sp}(P,\lambda)\equiv B_{\lambda}(T(P))\,\frac{\int_{P}^{P+\Delta P}d \tau_{\rm sp}}{\exp\!\left(\int_{0}^{P+\Delta P}d\tau_{\rm tot}\right)} \tag{3}\] where the atmospheric layer spans pressures \(P\) to \(P+\Delta P\), \(T(P)\) is the temperature in the layer, and \(B_{\lambda}\) is the Planck function at that temperature. 
\(\tau_{\rm sp}\) and \(\tau_{\rm tot}\) represent the optical depths due to a given gas species and from the entire contents of the layer, respectively. The contribution function is expressed as fractions of the total across an entire vertical column in the atmosphere. The function, when summed across all gas and cloud species, is proportional to the pressure derivative of the "transmittance" (\(\exp(-\tau)\)) times the Planck function at the given pressure and temperature; see for example Line et al. (2014), §3.

### Temperature-Pressure Profile

Our T-P profile is adapted from the parametrization proposed in Piette & Madhusudhan (2020), Section 4.2 (see Figure 8 in their paper), with additional temperature nodes added to the extremes of the profile. The parametrization is designed to be flexible enough to accommodate a wide range of possible vertical thermal structures, including an approximation to radiative-convective equilibrium, while also filtering out excessive unphysical behavior, such as the "ringing" that was described in Line et al. (2015, 2017). The parameters are the temperatures at 10 pressure levels, representing nodes between which we interpolate the profile. The temperature nodes are spaced in orders of magnitude from the top of the model atmosphere (\(10^{-4}\) bar) down to a pressure of 1 bar, beyond which we use half-orders until we reach the deepest pressure of the model at \(10^{2.5}\) bars. Here we label the temperatures of each node by subscripts denoting the base-10 logarithm of their corresponding pressure. We follow the recommendation of Piette & Madhusudhan (2020) to use a monotonic spline interpolation with a Gaussian smoothing kernel of width 0.3 dex in log-pressure, as the mechanism by which one can filter out the aforementioned ringing. In the original setup, the temperature at a pressure of \(10^{0.5}\) bars (\(T_{0.5}\)) is taken as a reference temperature at a fiducial pressure approximating the depth of the photosphere for a typical self-luminous brown dwarf. In this setup, the remaining parameters then define the _differences_ in temperature between each successive node. In contrast, we choose to define all our parameters as the temperatures themselves, but use an iterative process for proposing temperatures by determining the bounds on the uniform priors for each temperature:

* The bounds of the prior for the photospheric node (\(T_{0.5}\)) are set by the bounds of the temperatures of the opacity tables (75-4000 K).
* Then, the shallowest (\(T_{-4}\)) and deepest (\(T_{2.5}\)) temperature prior bounds are each bounded by the proposal for \(T_{0.5}\) and by the minimum and maximum opacity temperatures, respectively.
* This continues with the nodes closest to the middle of the existing nodes being bounded by those already chosen nodes, sub-dividing until the whole profile is bounded and all temperatures proposed. This ensures the profile is monotonic in temperature.

### Cloud Models

Our cloud model follows the "slab" approaches used in Burningham et al. (2017) and Gonzales et al. (2020). The model cloud occupies a fixed region in pressure space, with a minimum pressure where cloud absorption begins (the cloud "top"), and some depth in pressure. The vertical opacity profile is restricted to follow \(\partial\tau/\partial P\propto P\).
The free parameters include the pressure of the cloud top \(P_{\rm top}\), the depth of the cloud in log-pressure space \(\log_{10}(\Delta P_{\rm cloud})\equiv\log_{10}(P_{\rm base}/P_{\rm top})\), and a wavelength-dependent opacity and single-scattering albedo instead of particle-specific parameters. The wavelength dependence is modeled as a power law with exponent \(\alpha\). The opacity at a given pressure depth and wavelength is therefore given as \[\tau(P,\lambda)=\tau_{0}\left(\frac{\lambda}{\mu{\rm m}}\right)^{\alpha}\left( \frac{P^{2}-P_{\rm top}^{2}}{P_{\rm base}^{2}-P_{\rm top}^{2}}\right) \tag{4}\] where \(\tau_{0}\) is the maximum optical depth (at the base of the cloud at a pressure \(P_{\rm base}\)) at a wavelength of 1 \(\mu\)m. This is an empirical approximation to scattering by cloud particles whose sizes are smaller than the wavelengths of observation. Previous efforts at retrievals in this wavelength range indicate that a model that takes into account specific condensates for its opacity calculations is not preferred over the simpler approach used in this work. The final free parameter for our cloud model is the single-scattering albedo \(\omega_{0}\). This is the ratio of photons that are scattered versus those extincted overall (either absorbed or scattered). By choosing to model the albedo with a single free parameter, we assume it is constant across all wavelengths and pressures. To be precise, \(\omega_{0}\) in this work refers to the single-scattering albedo of the clouds alone; in APOLLO's implementation of the Toon et al. (1989) radiative transfer model, their \(\omega_{0}\) refers more broadly to the scattering-to-extinction ratio of all absorbers and scatterers in the atmosphere. For our purposes this means including the gas as well, for which we model scattering as Rayleigh scattering since the sizes of each molecule are much smaller than the observed wavelengths. ### Parameter Estimation Methods We sample likelihoods in parameter space with a nested sampling algorithm, using the dynesty Python package (Speagle, 2019). We choose to set uniform priors on all parameters, the ranges of which are listed in Table 3. Each model was initialized with 1000 live points in the "rwalk" sampling method. Models were run with the built-in default stopping criterion for assessing convergence, which depends on the amount of evidence accounted for in the cumulative samples.6 The total number of effective iterations in each run varies based on when the stopping criteria are reached, with test retrievals on simulated data (SS4) using \(\sim 10^{5}\), and retrievals on real data (SS5 and 6) requiring 2-3 times as many. Footnote 6: See the dynesty documentation for more information on how stopping criteria are applied. Once the runs are complete, we then derive the mass, effective temperature, metallicity, and C/O ratio. The mass is calculated directly from the radius and surface gravity. The effective temperature is calculated from an approximation to the bolometric luminosity, using a low-resolution (\(R\approx 200\)) spectrum that covers 0.6-30 \(\mu\)m7. We report metallicity by comparing the mean molecular weight versus that expected for solar metallicity, rather than reporting a metallicity as an [Fe/H] value. For the mass fraction \(Z\) of non-H/He elements, the metallicity is calculated as \(\log_{10}(Z/Z_{\odot})\), where we take \(Z_{\odot}=0.0196\). 
We choose this definition for metallicity because our values of metallicity are not tied to a specific atomic species, and the way in which we model the abundances -- uniform in pressure but freely variable -- means our model does not require the abundances to be in chemical equilibrium. Footnote 7: Note that for forward models, especially those with negative cloud opacity power-law exponents, two spectra can have considerably different effective temperatures while only displaying modest differences in the spectra in the near-infrared. This is discussed briefly in the section on our self-retrievals on cloud-free simulations (§4.1). We use Bayes factors to compare the quality of fits to data between two models. The Bayes factor is simply the ratio of the marginal likelihoods (also known as evidences) of each model's retrieval. A higher Bayes factor confers stronger support for a model relative to another; a recommendation originally proposed in Jeffreys (1998) is to interpret a ratio of 10-10\({}^{1.5}\) as "strong", 10\({}^{1.5}\)-10\({}^{2}\) as "very strong", and \(>10^{2}\) as "decisive" confidence that the model with the higher evidence is preferred. Following this, Benneke & Seager (2013) adapted the heuristics in Table 1 of Trotta (2008) to translate the language of Bayes factor comparisons into a "detection significance", usually quoted in units of "sigma" \(\sigma\). This is a convenient way to express analogous statistics in both the Bayesian and frequentist frameworks of model analysis; we report both in the following sections for our model comparisons. ## 4 Retrieving on Simulated Data from Forward Models of Low-Gravity L Dwarfs This work represents the first application of APOLLO to data from an L dwarf. To test the efficacy of the code, we generate a forward model that approximates the object, making simple assumptions about the atmospheric structure. In principle our retrieval code should be able to converge on a good fit (i.e. reduced chi-square statistic \(\chi^{2}_{\nu}\sim 1\)) to a dataset generated from its own forward model, and, based on the distributions of the retrieved parameters, should inform us to how well each parameter could be constrained from a near-infrared wavelength range and signal-to-noise similar to that of HD 106906 b. We use APOLLO to generate a forward model spectrum for a 1.5 \(R_{\rm J}\), \(\log g=4.19\) object; the choice of radius is arbitrary but the surface gravity is taken from the best estimate of HD 106906 b's gravity from the observed luminosity and effective temperature based on the fits made in Daemgen et al. (2017). For the thermal profile, we produce a parametrization that approximates a SONORA profile at approximately 1800 K if no cloud opacity is present. We use PICASO, specifically the Visscher chemical equilibrium code (Marley et al., 2021), to generate equilibrium abundances for the model pressures given the above parameters. This corresponds to a C/O ratio of 0.54 and a metallicity of 0.065. We generate data for two cases: one with clouds, parametrized as described in SS3.3, and one "clear" case without clouds. For the clouds, we use a layer that spans \(\sim 10^{-0.5}\)-\(10^{1}\) bars, chosen to bound the estimated photospheric pressures, and has enough opacity to yield an effective temperature of \(\approx 1360\) K. All models used to generate these data are identical in all non-cloud parameter values. 
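The input C/O quoted above follows from the layer abundances through the usual number-ratio bookkeeping; a minimal sketch, assuming (as the values in Table 4 imply) that H\({}_{2}\)O, CO, and CO\({}_{2}\) dominate the carbon and oxygen budgets:

```python
# Minimal sketch: bulk C/O from volume mixing ratios, assuming H2O, CO, and
# CO2 carry essentially all of the C and O (TiO/VO contributions at ~1e-8 are
# negligible). Values are the "true" log10 abundances listed in Table 4.
h2o, co, co2 = 10**-3.35, 10**-3.28, 10**-7.00

carbon = co + co2
oxygen = h2o + co + 2.0 * co2
print(carbon / oxygen)  # ~0.54, matching the input C/O of the simulated data
```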
Noise is modeled as independent, Gaussian (white) noise at S/N = 20, approximately the minimum S/N seen in the HD 106906 b spectrum. Comparisons of the forward model spectral fits, T-P profiles, and distributions of model parameters are shown in Figures 1-5 (on cloud-free data) and 6-10 (on cloudy data). ### Retrievals on Cloud-free Data The retrieval on simulated cloud-free data has no issue converging to an excellent fit, with the final \(\chi^{2}_{\nu}\) value very close to 1 (Figure 1 and Table 4). The retrieved C/O ratio is consistent with the input value of solar (0.54), with a 68% confidence interval in the posterior distribution of \(\pm 0.003\). The only abundance not tightly constrained is CO\({}_{2}\), with the true abundance sitting above the upper limit of the 68% confidence interval. The retrieved T-P profiles show a weak constraint at either end of the pressure range, with pressures smaller than \(\sim 10^{-3}\) or larger than \(\sim 10\) bars. The best-fit T-P of the cloudy model is closer to the true profile at the shallowest pressures, but the confidence ranges of the two models overlap significantly at these pressures, meaning the relative fit qualities in this region of the atmosphere are not significantly different. The contribution plots show that most of the contribution is concentrated between pressures of a few tenths of a bar to a few bars; therefore it is not surprising that most of the uncertainty in the thermal profile arises away from these intermediate pressures. The nominal expectation is that the cloudy model, when applied to cloud-free data, will effectively "turn off" the cloud opacity. This is largely true; the median opacity at the reference wavelength of 1 \(\mu\)m is very weak (an optical depth of \(\sim 0.01\), yielding an attenuation of at most a few tenths of a percent). However, there is a tail in the distribution of optical depths; in some cases the model will choose non-negligible cloud opacity, but the increases in optical depth correlate with the depth of the cloud top. This is consistent with the relative lack of contribution to the emission spectra from pressures deeper than a few bars, which is where the distribution of optical depths reaches \(\sim 1\). The single-scattering albedo \(\omega_{0}\) is low, particularly in the low-opacity cases, and is weighted toward low power-law exponents, which would allow non-negligible cloud opacity at wavelengths \(<1\)\(\mu\)m. This may explain why the distribution of effective temperatures for the cloudy model fit peaks near the true value of 1821 K but has a substantial secondary peak, with the median at 1274 K. If one were to extend the forward models to shorter wavelengths, we would see these models diverge from their cloud-free counterparts. Regardless of the precise nature of the way in which the cloud model withholds its opacity from the spectrum, the retrieved distributions of the gas species are very similar, and the constraints on the C/O ratio are of nearly identical accuracy and precision. The cloud-free model returns a Bayes factor higher than its cloudy counterpart by a factor of approximately 18, meriting its preference following the interpretation of Jeffreys (1998). In the interpretation of Benneke & Seager (2013), we could say that we "detect" the cloud-free model with a significance slightly less than \(3\sigma\). The cloud-free model gains its advantage by virtue of using fewer free parameters to fit the data. 
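To make the Bayes-factor-to-significance translation used here explicit, the following is a minimal sketch; it assumes the \(B=-1/(e\,p\ln p)\) calibration of Sellke et al. (2001) that commonly underlies the Trotta (2008) / Benneke & Seager (2013) scale, and it reproduces the significances quoted in this section:

```python
# Minimal sketch: translate a Bayes factor into an approximate "sigma"
# significance, assuming the Sellke et al. (2001) bound B = -1/(e p ln p)
# behind the Trotta (2008) / Benneke & Seager (2013) heuristics.
import math
from scipy.optimize import brentq
from scipy.stats import norm

def sigma_from_bayes_factor(B):
    # Invert B = -1/(e p ln p) for the p-value (valid for B > 1, p < 1/e),
    # then convert the two-sided p-value to a Gaussian sigma.
    f = lambda p: -1.0 / (math.e * p * math.log(p)) - B
    p = brentq(f, 1e-12, 1.0 / math.e - 1e-12)
    return norm.isf(p / 2.0)

print(sigma_from_bayes_factor(18))   # ~2.9 sigma: "slightly less than 3 sigma"
print(sigma_from_bayes_factor(150))  # ~3.6 sigma, as quoted for the cloudy-data case
```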
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Clear Model, Clear Data} & Cloudy Model, Clear Data & Cloudy Model, Cloudy Data & Clear Model, Cloudy Data & Clear Model, Cloudy Data \\ Name & True Value & Median & MLE & Median & MLE & Median & MLE & Median & MLE \\ \hline \hline \multicolumn{10}{c}{Fit Quality} \\ \hline \(\chi^{2}_{\nu}\) & & & & 1.00 & & 1.00 & & 1.01 & & 1.07 \\ \hline \multicolumn{10}{c}{Fundamental} \\ \hline \(R/R_{2}\) & 1.500 & 1.502 \(\pm\) 0.003 & 1.504 & 1.515\({}^{+0.021}_{-0.011}\) & 1.51 & 1.49 \(\pm\) 0.01 & 1.49 & 1.315 \(\pm\) 0.003 & 1.313 \\ \(\log_{10}\left[g/(\rm{cm\ s^{-2}})\right]\) & 4.19 & 4.18 \(\pm\) 0.01 & 4.18 & 4.17 \(\pm\) 0.01 & 4.17 & 4.19 \(\pm\) 0.01 & 4.20 & 4.39 \(\pm\) 0.01 & 4.40 \\ \(M/M_{2}\) & 14.02 & 13.63\({}^{+0.24}_{-0.24}\) & 13.64 & 13.63\({}^{+0.33}_{-0.38}\) & 13.60 & 13.99\({}^{+0.28}_{-0.28}\) & 14.17 & 17.24 \(\pm\) 0.30 & 17.23 \\ \(T_{\rm eff}\) (K) & \(a\) & 1820\({}^{+1}_{-2}\) & 1822 & 1274\({}^{+406}_{-229}\) & 1813 & 1223\({}^{+135}_{-185}\) & 1313 & 1872 \(\pm\) 1 & 1871 \\ C/O & 0.54 & 0.545 \(\pm\) 0.003 & 0.547 & 0.545\({}^{+0.003}_{-0.007}\) & 0.547 & 0.535\({}^{+0.004}_{-0.004}\) & 0.534 & 0.532 \(\pm\) 0.004 & 0.531 \\ Metallicity & 0.065 & 0.064\({}^{+0.006}_{-0.007}\) & 0.065 & 0.063\({}^{+0.008}_{-0.007}\) & 0.066 & 0.055\({}^{+0.009}_{-0.009}\) & 0.059 & 0.119\({}^{+0.007}_{-0.008}\) & 0.117 \\ \hline \multicolumn{10}{c}{Gaess (\(\log_{10}\) number abundance)} \\ \hline H\({}_{2}\)O & \(-3.35\) & \(-3.358^{+0.006}_{-0.007}\) & \(-3.359\) & \(-3.357^{+0.006}_{-0.359}\) & \(-3.354\pm 0.007\) & \(-3.348\) & \(-3.288\pm 0.006\) & \(-3.287\) \\ CO & \(-3.28\) & \(-3.279^{+0.007}_{-0.008}\) & \(-3.278\) & \(-3.280^{+0.017}_{-0.008}\) & \(-3.276\) & \(-3.293\pm 0.011\) & \(-3.288\) & \(-3.231^{+0.008}_{-0.009}\) & \(-3.234\) \\ CO\({}_{2}\) & \(-7.00\) & \(-8.85^{+1.84}_{-1.84}\) & \(-8.24\) & \(-8.93^{+1.38}_{-1.85}\) & \(-8.65\) & \(-8.49^{+1.35}_{-1.63}\) & \(-8.50\) & \(-8.87^{+1.65}_{-1.62}\) & \(-8.88\) \\ H\({}_{2}\)S & \(-4.60\) & \(-4.56\) & \(-4.57\pm 0.02\) & \(-4.55\) & \(-4.60\pm 0.03\) & \(-4.64\) & \(-4.52^{+0.03}_{-0.03}\) & \(-4.52\) \\ Na+K & \(-5.42\) & \(-5.423^{+0.008}_{-0.004}\) & \(-5.428\) & \(-5.423^{+0.011}_{-0.009}\) & \(-5.421\) & \(-5.438\pm 0.010\) & \(-5.432\) & \(-5.399\pm 0.006\) & \(-5.371\) \\ CHH & \(-9.00\) & \(-9.00^{+0.02}_{-0.02}\) & \(-9.00\) & \(-8.99^{+0.03}_{-0.03}\) & \(-9.02\) & \(-8.97\pm 0.03\) & \(-8.99\) & \(-8.87\pm 0.03\) & \(-8.89\) \\ FeH & \(-9.00\) & \(-9.01\pm 0.02\) & \(-9.01\) & \(-9.02\pm 0.02\) & \(-9.01\) & \(-9.01^{+0.02}_{-0.03}\) & \(-8.99\) & \(-8.96\pm 0.03\) & \(-8.94\) \\ TiO & \(-8.00\) & \(-8.00\pm 0.02\) & \(-8.01\) & \(-8.01^{+0.01}_{-0.02}\) & \(-8.01\) & \(-7.97\pm 0.02\) & \(-7.97\) & \(-7.88\pm 0.02\) & \(-7.88\) \\ VO & \(-8.33\) & \(-8.334^{+0.008}_{-0.009}\) & \(-8.340\) & \(-8.337\pm 0.007\) & \(-8.336\) & \(-8.339^{+0.011}_{-0.010}\) & \(-8.319\) & \(-8.267\pm 0.010\) & \(-8.265\) \\ \hline \multicolumn{10}{c}{Temperature-Pressure} \\ \hline \(T_{-4}\) (K) & 723 & 622\({}^{+0.05}_{-2.86}\) & 398 & 811\({}^{+46}_{-148}\) & 782 & 606\({}^{+64}_{-130}\) & 557 & 647\({}^{+13}_{-208}\) & 609 \\ \(T_{-3}\) (K) & 826 & 848\({}^{+41}_{-69}\) & 832 & 867\({}^{+26}_{-56}\) & 854 & 779\({}^{+35}_{-41}\) & 794 & 769\({}^{+47}_{-16}\) & 775 \\ \(T_{-2}\) (K) & 964 & 962\({}^{+}_{-4}\) & 963 & 962\({}^{+}_{-6}\) & 961 & 948\({}^{+}_{-6}\) & 951 & 968\({}^{+7}_{-5}\) & 969 \\ \(T_{-1}\) (K) & 1175 & 1175 
\(\pm\) 2 & 1175 & 1174\({}^{+2}_{-3}\) & 1175 & 1176 \(\pm\) 2 & 1176 & 1174 \(\pm\) 2 & 1175 \\ \(T_{0}\) (K) & 1954 & 1958\({}^{+2}_{-3}\) & 1956 & 1958\({}^{+2}_{-2}\) & 1957 & 1948\({}^{+3}_{-4}\) & 1950 & 1930 \(\pm\) 3 & 1932 \\ \(T_{0.5}\) (K) & 2545 & 2546\({}^{+4}_{-3}\) & 2546 & 2548 \(\pm\) 5 & 2547 & 2548 \(\pm\) 8 & 2560 & 2487 \(\pm\) 4 & 2492 \\ \(T_{1}\) (K) & 3333 & 3243\({}^{+30}_{-36}\) & 3271 & 3271\({}^{+34}_{-32}\) \\ \hline \end{tabular} \end{table} Table 4: Median values, 68% confidence intervals, and maximum likelihood estimates (MLE) of the retrieved parameters for the self-retrievals on simulated cloud-free and cloudy data.

We now show the fits to mock data with clouds included, with the retrieved spectra and contributions in Figure 6, T-P profiles in Figure 7, and posterior distributions of parameters in Figures 8-10. The power-law cloud opacity model yields a reduced chi-square statistic \(\chi^{2}_{\nu}\approx 1\). The retrieved C/O ratio is slightly less accurate when compared with that of the cloud-free case, with the true value lying just outside the 68% confidence interval (but well within the 95% interval). The cloud-free model applied to cloudy data returns a slightly worse fit, with the reduced chi-square statistic increasing to 1.07. In this case the model compensates for a lack of clouds by decreasing the radius, increasing the gravity, and increasing the abundances of all species except that of CO\({}_{2}\) by 0.05-0.1 dex. This allows for a C/O ratio distribution that is still marginally consistent with the truth at the 95% confidence level. When comparing the quality of the fits between the cloudy and cloud-free models, we find a Bayes factor of approximately 150. This puts the modest difference in the reduced chi-square statistics in greater perspective; the cloud-free model is strongly disfavored when compared with the model with clouds, which in frequentist terms translates to a preference for the cloudy model at \(\sim 3.6\sigma\). This is due to an accumulation of minor differences (\(\lesssim\) the typical error bar) between the cloudy and clear fits, primarily in the \(J\) band where the simulated cloud opacity is strongest. While the cloudy model better fits the data, it does not reproduce all cloud properties precisely. The cloud top position and extent of the cloud are tightly correlated (see Figure 10), and the reference optical depth for the maximum likelihood estimator sits at the high tail of its posterior distribution, with the true value even farther out. The true value for the pressure of the cloud top sits at the high end of the 68% confidence interval for the retrieved posterior distribution, which shows the model is able to reproduce where in the atmosphere cloud opacity should become significant. Since the reference optical depth is inaccurate, this suggests that there is some minimum opacity that
effectively suppresses much of the emission deeper than the cloud; no additional opacity is needed, therefore the model finds a solution centered around the minimum sufficient opacity.

Figure 1: Spectra and contribution functions of the retrieved forward model fits to data simulated from **cloud-free** forward models of an L dwarf. The forward model spectra (using the MLE parameter values) are in color, with the data in grey. The retrieved models are nearly identical and overlap nearly entirely. Immediately beneath the spectra are the contribution functions for the 2 principal carbon and oxygen bearing species; CO\({}_{2}\) is included in the model used to generate the simulated data, but its contributions to the emission are well below those of H\({}_{2}\)O and CO. The deepest contours, outlined in solid colors, enclose the regions where the contribution function reaches \(>1\%\) of the total contribution within the atmospheric column at a given wavelength bin. Each successive contour denotes 2 orders of magnitude smaller fractional contribution (here, \(10^{-4}\) and \(10^{-6}\)). The faint grey lines in the \(J\)-band (leftmost) sections of the contribution plots denote the location of the cloud layer as retrieved by the cloudy model on the cloud-free data. The faintness of the lines denotes the low optical depth of the cloud layer, in contrast with the darker cloud contours as seen in the retrieval on cloudy simulated data (Figure 6).

Finally, when comparing the retrieved T-P profiles, we see that, as in the test retrievals on spectra generated from a model without clouds, the weakest constraints in temperature arise in the shallowest and deepest parts of the atmosphere. However, we now see an additional divergence between the cloud-free and cloudy fits -- namely, the cloud-free profile diverges from the true profile within the simulated cloud layer, while the cloudy profile remains close to the true profile. Then, by the time we reach the deepest extent of the clouds, both profiles have begun to diverge from the true profile. This suggests that the cloud-free model is attempting to compensate for its lack of clouds by keeping the temperature gradient shallower, suppressing its thermal contribution in a way that can mimic the effect of a cloud layer.

### Lessons from Self-Retrievals

Taking the results of self-retrievals on cloudy and clear simulated data together, we demonstrate that our code is able to identify both the correct abundances and the correct thermal structure in the photosphere of an atmospheric simulation. The results also highlight what we might not expect to constrain precisely due to theoretical limitations, such as the deepest parts of the T-P profile, and the opacity profile of the clouds. Additionally, a cloud-free model may be able to reproduce a C/O ratio consistent with the true value, but risks returning an inaccurate radius and gravity, and molecular abundances that are almost all consistently too high, with a temperature profile that is consistent over the same pressure ranges as the cloudy model but with higher uncertainties at the lowest pressures. Moreover, when clouds are present, we expect a cloud-free model to show the greatest difference from that of a cloudy model within the cloud layer itself, changing its gradient to compensate for the lack of extinction from condensates. The bias in molecular abundances suggests that, at least in this wavelength range, the shape of the spectrum is determined more by the relative abundances than by the absolute abundances; put another way, we may expect to see a potential degeneracy between the T-P profile, gravity, and key molecular abundances, but nevertheless may expect the retrieved C/O ratio to not be significantly biased away from the true value. However, these conclusions are necessarily limited by which physics we choose to include in the model used to simulate the data; we are limited to commenting on the efficacy of the code in terms of the consistency of retrievals with the assumptions we have made.

Figure 2: The vertical temperature-pressure profiles of the retrieved forward model fits to data simulated from **cloud-free** forward models of an L dwarf. We show the MLE, median, and 95% confidence interval of the retrieved T-P profiles with the true profile over-plotted.
## 5 Retrieval on a Previously Characterized L Dwarf

Our first true retrieval is of the mid-L field dwarf 2MASSW J2224438-015852 (Kirkpatrick et al., 2000), which we refer to here as 2M2224. This is one of the brown dwarfs studied with the _Brewster_ retrieval code in Burningham et al. (2017), and with mid-infrared data in Burningham et al. (2021).

Figure 3: A selection of parameters of the retrieved forward model fits to data simulated from **cloud-free** forward models of an L dwarf, shown as 1-D and 2-D histograms in a corner plot of the retrieved posterior distributions. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 4.

To benchmark our code against previously published results in the \(JHK_{\rm s}\) spectral range, we limit our re-analysis to the original, \(R\sim 75\) spectrum from Burgasser et al. (2010)8. The full wavelength range of the data is 0.65-2.56 \(\mu\)m, but to compare with Burningham et al. (2017) we choose to only use the range from 0.8-2.4 \(\mu\)m, though a retrieval was performed with the full dataset.

Figure 4: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of gas parameters from the samples in the cloudy (red) and cloud-free (blue) forward model fit to data simulated from a **cloud-free** forward model of an L dwarf (see §4). The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 4.

Burningham et al. (2017) retrieve an effective temperature \(T_{\rm eff}=1723^{+18}_{-19}\) K and \(\log g=5.31^{+0.04}_{-0.08}\). From the retrieved distributions of their H\({}_{2}\)O and CO abundances, we infer a C/O ratio of \(0.85^{+0.06}_{-0.08}\). Our forward model follows nearly the same parametrization, with a few differences: our opacities lack CaH, and we use the Piette & Madhusudhan (2020) T-P profile parametrization, which can reproduce the same shapes as the Madhusudhan-Seager model but is slightly more flexible.

Figure 5: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of cloud parameters from the samples in the cloudy forward model fit to data simulated from a **cloud-free** forward model of an L dwarf (see §4). The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 4. As there is no cloud opacity in the simulated data, the model can adapt its cloud opacity in several ways to effectively remove its influence on the resulting emission spectrum. The first is to turn the reference optical depth \(\tau(\lambda_{0})\) to a very low value (\(\ll 1\)); the second is to introduce significant opacity \(\tau(\lambda_{0})\gtrsim 1\) but to place the cloud deep into the atmosphere, below where the majority of the thermal emission originates in the spectrum (i.e. below the photosphere).

Our cloud model is functionally equivalent to the "slab" case as described in §2.1.3 of Burningham et al. (2017), though we choose to include the reference optical depth (\(\tau_{0}\equiv\tau(\lambda=1\,\mu\text{m})\)) as a free parameter in our corner plots.
It should be noted, however, that Burningham et al. (2017) report their results using a "deck" cloud model, though both that and the slab model were tested in their work. Finally, our sampling method differs in that Burningham et al. (2017) used an MCMC parameter estimation technique, and imposed more restrictive priors on gravity to keep their mass below the nominal main-sequence limit of 80 \(M_{\text{J}}\). Our choices of a more free T-P profile parametrization and wider priors on the gravity mean that our code can explore a broader range of solutions for the vertical atmospheric structure, but with a concession that our solution has a higher risk of introducing structure to the profile that does not have a feasible physical interpretation.

Results from the retrieval are shown in Figure 11 for the spectrum and contributions, Figure 12 for the retrieved T-P profiles, and posterior distributions for parameters in Figures 13-15. Our retrieved spectrum does not precisely reproduce the shapes of the local peaks in the \(J\) and \(H\) bands, and generally prefers a "smoother" (though not necessarily better) fit to the spectrum than that retrieved in Figure 8 of Burningham et al. (2017). Our retrieved C/O ratio of \(0.86^{+0.01}_{-0.02}\) sits entirely within the confidence interval reported in Burningham et al. (2017), despite retrieving a higher gravity and higher abundances, particularly in H\({}_{2}\)O and CO. Our model finds a solution that prefers a higher metallicity (\(1.61\pm 0.14\)). Our T-P profile mimics the shape of the 2017 paper from \(\sim 0.01\) bars until the location of our retrieved cloud layer, where the models then diverge. The T-P profiles diverge most strongly where the preferred deck cloud model of Burningham et al. (2017) reaches an optical depth of 1 (at \(\log_{10}(P/\text{bar})=0.71\)).

Figure 6: Spectra and contribution functions of the retrieved forward model fits to data simulated from **cloudy** forward models of an L dwarf. The forward model spectra (using the MLE parameter values) are in color, with the data in grey. The retrieved models are nearly identical and overlap nearly entirely. Immediately beneath the spectra are the contribution functions for the 3 principal carbon and oxygen bearing species. The deepest contours, outlined in solid colors, enclose the regions where the contribution function reaches \(>1\%\) of the total contribution within the atmospheric column at a given wavelength bin. Each successive contour denotes 2 orders of magnitude smaller fractional contribution (here, \(10^{-4}\) and \(10^{-6}\)).

The extent of our cloud layer encompasses their median \(\tau=1\) pressure, but our retrieved power-law dependence in wavelength is much more steeply negative than theirs, with a median \(\alpha=-7.73^{+0.71}_{-0.78}\). In other words, our model prefers a solution that allows significant cloud opacity in the \(J\)-band portion of the spectrum, but which rapidly diminishes at longer wavelengths. Our model has difficulty finding a solution to the atmosphere from \(\approx\) 0.8-1.3 \(\mu\)m, given that the model fit fails to capture the smaller-scale variations in the data in the region where it determines clouds are most significant. This may mean that the cloud model is instead being used to compensate for an inability to fit this portion of the spectrum, while fitting the remainder in the \(H\)- and \(K_{\rm s}\)-band ranges more accurately.
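To put the retrieved exponent in context, the following is a minimal evaluation of the Eq. (4) wavelength scaling at the median \(\alpha\), relative to the 1 \(\mu\)m reference; the band-center wavelengths are our own rough choices for illustration:

```python
# Minimal sketch: relative wavelength dependence of the slab-cloud optical
# depth in Eq. (4), tau ∝ (lambda / 1 micron)^alpha, evaluated at the median
# retrieved exponent for 2M2224 (alpha = -7.73).
alpha = -7.73
for lam in (1.0, 1.25, 1.6, 2.2):  # microns, roughly J/H/K band centers
    print(lam, lam**alpha)
# At 2.2 microns the cloud optical depth is ~0.2% of its 1-micron value,
# i.e. the retrieved cloud opacity is effectively confined to the J band.
```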
An earlier retrieval with the full 0.65-2.56 \(\mu\)m dataset did return a cloud power-law exponent of \(\alpha=-2.04\pm 0.02\), but also had its own difficulties in capturing the entire spectrum, with a better fit to the 0.8-1.2 \(\mu\)m region but a worse fit in the \(K_{\rm s}\) band from 2.00-2.35 \(\mu\)m, and a similarly high gravity. This agreement in C/O despite disagreement elsewhere is similar to the findings in works such as Molliere et al. (2020), where their tests of models with different cloud models yielded similar C/O ratios despite retrieving disagreeing thermal and cloud profiles. In our case, with the more flexible thermal structure, there is an additional degeneracy between the gravity and the molecular abundances/metallicity. The higher the metallicity, the less deep in the atmosphere a given optical depth will be reached, but the higher the gravity, the smaller the path length for a given change in pressure, meaning that the equivalent optical depth will occur at a higher pressure. Our choice of T-P profile allows flexibility in adapting the shape of the vertical thermal profile to changes in model gravity and metallicity; therefore, we expect gravity and metallicity to be negatively correlated. Mirroring the behavior we saw in the cloud-free versus cloudy models applied to simulated data in §4, it is possible to retrieve an accurate C/O ratio by retrieving abundances that are accurate relative to each other, but biased in their absolute values. It is difficult to compare the shallow thermal gradient with the behavior suggested in Tremblin et al. (2016), where a shallow temperature gradient driven by a thermo-chemical instability can mimic some of the spectral behavior attributed to clouds, since in this case both the shallow gradient and significant cloud opacity are present in the model solution. Nevertheless, we keep these findings in mind when interpreting the results of our retrieval on HD 106906 b (§6).

Figure 7: The vertical temperature-pressure profiles of the retrieved forward model fits to data simulated from **cloudy** forward models of an L dwarf. We show the MLE, median, and 95% confidence interval of the retrieved T-P profiles with the true profile over-plotted.

## 6 Retrieved Atmospheric Properties for HD 106906 B

### Retrieval Setup: Single-Band Trials and Regions of High Tellurics

When moving from the test retrievals on simulated data to retrievals on the actual HD 106906 b data, there are a few differences in the model setup, though the core physical model remains the same. The first is the addition of calibration terms that scale the flux in each band by a multiplicative constant; this is to account for uncertainties in the photometry, as discussed in §2.2. These calibration scales are partly degenerate with the retrieved radius, so when reporting these calibration scales, we normalize the radius such that in each case, the effective calibration scale in the \(K_{\rm s}\) band is 1.

Figure 8: A selection of parameters of the retrieved forward model fits to data simulated from **cloudy** forward models of an L dwarf, shown as 1-D and 2-D histograms in a corner plot of the retrieved posterior distributions. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 4.

The second is to add a parameter for fractional cloud cover (\(f_{\rm cloud}\)).
Since we are using a 1-D (vertical-only) model, the fractional cloud cover is assumed to be isotropic, and the emission flux is simply weighted between the fully-cloudy flux one calculates from the given parameters (\(F_{\rm cloudy}\)) and the flux given the same parameters but without clouds (\(F_{\rm clear}\)): \[F=f_{\rm cloud}F_{\rm cloudy}+(1-f_{\rm cloud})\,F_{\rm clear}. \tag{5}\]

Figure 9: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of gas parameters from the samples in the cloudy (red) and cloud-free (blue) forward model fit to data simulated from the cloudy forward model of an L dwarf (see §4). The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 4.

The final modification is that the data are down-sampled to a maximum resolution of \(\approx 500\), a factor of 4-8 lower than the original. This is because the maximum resolution of our opacity tables is 50,000, and to avoid introducing excess artificial noise from binning effects, we impose a limit of \(R_{\rm opacity}/R_{\rm data}\geq 100\), which requires us to re-sample the data to a lower resolution. We calculate the uncertainties in the down-sampled data as uncertainties in the mean, i.e. since we now have \(N=4\)-8 resolution elements of the original spectrum in each of the down-sampled elements, our uncertainties in each new element are assumed to be smaller by a factor of \(\sqrt{N-1}\). This is a lower limit of the true uncertainties in the new spectrum, as the errors between the original pixels and therefore resolution elements are almost certainly correlated at some level.

Figure 10: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of cloud parameters from the samples in the cloudy (red) and cloud-free (blue) forward model fit to data simulated from the cloudy forward model of an L dwarf (see §4). The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 4.

To make an estimate of the typical correlation length, we use the approach in §2.2 of Line et al. (2015), where one calculates the auto-correlation of the residuals for an initial model fit to the data. Doing this, we find that the auto-correlation drops and subsequently remains at or below \(\approx 0.25\) at a scale of 6-8 pixels. Therefore, our reported fit qualities, such as chi-squared statistics, may be overestimated by roughly a factor of 2-3. However, a larger source of systematic errors comes in the form of telluric contamination, which is typically strongest at the boundaries of each band. These errors are more likely to introduce biases in the retrieved atmospheric parameters, and so we perform retrievals to examine the effects of including versus excluding these portions of the spectrum. An initial retrieval was run on the full HD 106906 b dataset, along with retrievals on data from each band (\(J\), \(H\), and \(K_{\mathrm{s}}\)) individually; see Figure 16.
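As a concrete illustration of the patchy-cloud flux weighting of Equation 5 and of the down-sampling with uncertainties in the mean described above, the following minimal sketch shows one possible implementation. It is not the APOLLO code; the function names and the synthetic spectrum are purely illustrative.

```python
import numpy as np

def combine_cloudy_clear(f_cloud, flux_cloudy, flux_clear):
    """Patchy-cloud weighting of Eq. (5): a linear mix of the fully-cloudy
    and cloud-free emission spectra."""
    return f_cloud * flux_cloudy + (1.0 - f_cloud) * flux_clear

def downsample_spectrum(wave, flux, err, n_bin):
    """Block-average a spectrum by a factor n_bin (4-8 in the text) and
    propagate uncertainties as errors in the mean, i.e. reduce the typical
    per-pixel error by sqrt(N - 1).  This assumes uncorrelated pixel errors,
    which, as noted above, makes it a lower limit on the true uncertainty."""
    n = (len(wave) // n_bin) * n_bin          # drop any incomplete trailing bin
    wave_lo = wave[:n].reshape(-1, n_bin).mean(axis=1)
    flux_lo = flux[:n].reshape(-1, n_bin).mean(axis=1)
    err_lo = err[:n].reshape(-1, n_bin).mean(axis=1) / np.sqrt(n_bin - 1)
    return wave_lo, flux_lo, err_lo

# Example with synthetic numbers (not the actual HD 106906 b data):
wave = np.linspace(1.1, 1.35, 4000)            # micron
flux_clear = 1.0 + 0.05 * np.sin(40.0 * wave)  # arbitrary flux units
flux_cloudy = 0.8 * flux_clear                 # grey-damped stand-in spectrum
err = np.full_like(wave, 0.02)

flux_mix = combine_cloudy_clear(0.89, flux_cloudy, flux_clear)
wave_lo, flux_lo, err_lo = downsample_spectrum(wave, flux_mix, err, n_bin=8)
```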
The purpose of this initial set of runs was to understand how well the retrieval could fit the data, and, depending on whether and how the individual band fits compare with the full-spectrum fit, suggest whether the model is capturing the physics of the companion's atmosphere with adequate flexibility (for example, this comparison can provide a first-order check of whether the assumption of constant mixing ratios is sufficient). The result is broadly that there are wavelength regions of each band where _neither_ the single-band nor the full-spectrum fits the data well. The largest discrepancy occurs at the blue end of each band, as well as to a lesser extent at the red end of the \(H\) band. These regions are consistent with the wavelength regions identified in the original publication of the data (Daemgen et al., 2017, specifically Figure 2), where telluric contamination is thought to affect the data reduction most severely. Given the overlap of the most poorly fit regions with the suspected high-telluric regions, we choose to excise these data from the final retrievals. The final fit, using the truncated data across all bands, is over-plotted in Figure 16.

Figure 11: Spectra and contribution functions of the retrieved forward model fit to the SpeX data for 2M2224. The forward model spectra (using the MLE parameter values) are in color, with the data in grey. Immediately beneath the spectra are the contribution functions for the 2 principal carbon and oxygen bearing species; CO\({}_{2}\) is not included in the retrieval as it was excluded from the retrieval in Burningham et al., 2017, inviting a more direct comparison of our results. The deepest contours, outlined in solid colors, enclose the regions where the contribution function reaches \(>1\%\) of the total contribution within the atmospheric column at a given wavelength bin. Each successive contour denotes 2 orders of magnitude smaller fractional contribution (here, \(10^{-4}\)).

### The Cloudy Model

Results from the retrieval with our cloud model included are shown in Figure 17 for the spectra and contribution functions, Figure 18 for the retrieved T-P profiles, and posterior distributions of parameters in Figures 19-21. The retrieved model captures much of the broad shape of the spectrum, but fails to capture the amplitudes of some of the features in the \(J\) band. Given the S/N of the data, the \(\chi^{2}_{\nu}\) statistic of the model indicates a poor fit to the data, at \(\chi^{2}_{\nu}\approx 40\). As noted in §6.1, we have scaled the original uncertainties assuming the errors are uncorrelated; this statistic assumes the most optimistic noise model and therefore the most pessimistic fit quality. The retrieved radius posterior range \((1.30\pm 0.06\,R_{\rm J})\) is smaller than the radius one derives from the best-fit bolometric luminosity and effective temperature in Daemgen et al. (2017), which is 1.47 \(R_{\rm J}\). However, this is affected by our choice to normalize the spectrum such that the \(K_{\rm s}\) band calibration factor is 1; therefore, there is a bit of ambiguity in whether our retrieved radius is strictly consistent or inconsistent with these previous constraints. Our inferred effective temperature is high given the small radius. The retrieved surface gravity is low compared with that derived from the Daemgen et al. (2017) fundamental parameters (\(\log g=4.19\pm 0.40\)), though the 95% confidence interval does overlap with this range.
This low gravity, combined with the radius, yields a 68% confidence interval of \(1.92^{+1.48}_{-0.70}\)\(M_{\rm J}\) for the mass, with an MLE value of 4.41 \(M_{\rm J}\). This is smaller than the mass range of \(11\pm 2\)\(M_{\rm J}\) from evolutionary models as presented in Bailey et al. (2014), as well as the estimated 13 \(M_{\rm J}\) mass if one were to adopt the mean age of the LCC, at 17 Myr. We are unlikely to be able to disentangle the low retrieved mass from the existing degeneracies that persist between gravity, metallicity, and the T-P profile. Additionally, the range of bolometric luminosities we infer from our results is low compared with the original evolutionary model constraints: our cloudy model returns a 68% confidence interval of \(\log_{10}(L/L_{\odot})=-3.94\pm 0.10\), while the cloud-free model returns \(\log_{10}(L/L_{\odot})=-3.73^{+0.07}_{-0.06}\), compared with the original constraint of \(\log_{10}(L/L_{\odot})=-3.64\pm 0.08\).

Figure 12: The vertical temperature-pressure profiles of the retrieved forward model fit to the SpeX data for 2M2224. We show the MLE, median, and 95% confidence interval of the retrieved T-P profiles, with the retrieved T-P profiles of retrievals from Burningham et al. (2017) and Burningham et al. (2021) also plotted for comparison. The latter profile is shown to highlight how the retrieved vertical structure changes as longer wavelength data are included. The median retrieved \(\tau=1\) pressure for the retrieved deck cloud models of Burningham et al. (2017) is shown as a dashed line.

Only the cloud-free model is consistent with the Bailey et al. cooling-model constraint given the derived effective temperature. However, our cloudy model is consistent with the luminosity ranges derived using subsets of "young" ("YNG" and "YNG2") targets, as presented in Table 19 of Faherty et al. (2016); Daemgen et al. (2017) used this to calculate a luminosity constraint of \(\log_{10}(L/L_{\odot})=-3.83\pm 0.35\) and \(-3.64\pm 0.24\) for the YNG and YNG2 relations, respectively. Our retrieved C/O ratio of \(0.53^{+0.15}_{-0.25}\) is consistent with the estimated C/O ratio distribution of the stellar association in which HD 106906 resides (\(0.52\pm 0.11\); see Equation 2); the 3 primary C+O constituents (H\({}_{2}\)O, CO, and CO\({}_{2}\)) are constrained to within 0.5-1 dex and show positive correlations among each other, as well as with the surface gravity.

Figure 14: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of gas parameters from the samples in the model fit to the SpeX data for 2M2224. The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 5. Median values for the equivalent parameters in the run in Burningham et al. (2017) are listed in Table 5.

The correlation between molecular abundances and gravity is known to be a consequence of a degeneracy where, in fitting absorption features, the flattening effect of higher gravity can be at least partially offset by higher abundances (see e.g. Todorov et al., 2016). The full posterior distributions for gas abundances are shown in Figure 20. The retrieved H\({}_{2}\)O abundance for the best-fit model (\(-3.33\) dex) is within 0.1 dex of the expectation given the retrieved T-P profile, if one assumes chemical equilibrium for an object at solar metallicity and C/O ratio (\(-3.35\) dex).
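The derived quantities quoted above follow from a few standard relations, and the sketch below writes out the arithmetic: the mass from the retrieved gravity and radius (\(M=gR^{2}/G\)), the bolometric luminosity from the Stefan-Boltzmann law, and the gas-phase C/O ratio from the retrieved number fractions of the main carbon- and oxygen-bearing molecules. This is an illustrative check, not code from our retrieval pipeline; plugging in posterior medians or MLE values does not exactly reproduce quantities derived from the full posterior, so the outputs agree only approximately with the quoted confidence intervals.

```python
import numpy as np

G     = 6.674e-8    # cm^3 g^-1 s^-2
R_JUP = 7.1492e9    # cm
M_JUP = 1.898e30    # g
SIGMA = 5.670e-5    # erg cm^-2 s^-1 K^-4
L_SUN = 3.828e33    # erg s^-1

def mass_from_logg_radius(logg, r_jup):
    """M = g R^2 / G, returned in Jupiter masses."""
    g = 10.0 ** logg                                   # cm s^-2
    return g * (r_jup * R_JUP) ** 2 / G / M_JUP

def log_lbol(r_jup, teff):
    """log10(L/Lsun) from L = 4 pi R^2 sigma Teff^4."""
    L = 4.0 * np.pi * (r_jup * R_JUP) ** 2 * SIGMA * teff ** 4
    return np.log10(L / L_SUN)

def c_to_o(log_h2o, log_co, log_co2):
    """Gas-phase C/O from log10 number fractions of H2O, CO, and CO2."""
    h2o, co, co2 = (10.0 ** x for x in (log_h2o, log_co, log_co2))
    return (co + co2) / (h2o + co + 2.0 * co2)

# Median cloudy-model values quoted in the text and Table 6:
print(mass_from_logg_radius(3.32, 1.43))   # ~1.7 M_J (cf. 1.92 from the full posterior)
print(log_lbol(1.43, 1628.0))              # ~-3.9   (cf. -3.94 +/- 0.10)
# MLE abundances for the cloudy model:
print(c_to_o(-3.33, -3.02, -4.38))         # ~0.66, matching the quoted best-fit C/O
```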
Figure 15: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of cloud parameters from the samples in the model fit to the SpeX data for 2M2224. The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 5. Median values for the equivalent parameters in the run in Burningham et al. (2017) are listed in Table 5. The CO abundance, at \(-3.02\) dex, is higher than the equilibrium value of \(-3.28\), which drives the best-fit C/O to just beyond the 68% confidence interval, at 0.66. CO's impact is comparable to that of H\({}_{2}\)O but over isolated regions of the spectrum; the bulk metallicity has uncertainties of order 0.4 dex but is consistent with solar metallicity as well as the metallicity range of its stellar association. The CO\({}_{2}\) abundance (\(-5.21^{+0.56}_{-1.00}\) dex, best-fit value \(-4.38\) dex) is the least constrained of the 3 major C+O molecular absorbers, and has the smallest effect on the C/O ratio. The absorbers least consistent with an equilibrium abundance are the alkalis; with a range of \(-9.38^{+3.15}_{-1.86}\) and a best-fit abundance \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Name} & Median & MLE & Median (Burningham et al., 2017) \\ \hline \hline \multicolumn{4}{c}{Fit Quality} \\ \hline \(\chi^{2}_{\nu}\) & & 42 & \\ \hline \multicolumn{4}{c}{Fundamental} \\ \hline \(R/R_{\rm J}\) & \(0.91\pm 0.01\) & 0.92 & \(0.93\pm 0.03\) \\ \(\log_{10}\left[g/(\rm{cm~{}s^{-2}})\right]\) & \(6.03^{+0.14}_{-0.15}\) & 6.34 & \(5.31^{+0.04}_{-0.08}\) \\ \(M/M_{\rm J}\) & \(354^{+140}_{-103}\) & 744 & \(72.22^{+6.25}_{-12.05}\) \\ \(T_{\rm eff}\) (K) & \(1627^{+10}_{-10}\) & 1620 & \(1723.34^{+18.03}_{-18.01}\) \\ C/O & \(0.86^{+0.01}_{-0.02}\) & 0.86 & \(0.85^{+0.01}_{-0.08}\) \\ Metallicity & \(1.61\pm 0.14\) & 1.80 & \(a\) \\ \hline \multicolumn{4}{c}{Gases (\(\log_{10}\) number abundance relative to total)} \\ \hline H\({}_{2}\)O & \(-2.39^{+0.14}_{-0.15}\) & \(-2.20\) & \(-3.16^{+0.08}_{-0.07}\) \\ CO & \(-1.59\pm 0.14\) & \(-1.39\) & \(-2.40^{+0.16}_{-0.14}\) \\ Na+K & \(-5.11\pm 0.22\) & \(-5.25\) & \(-5.33^{+0.23}_{-0.25}\) \\ CrH & \(-8.66^{+0.63}_{-0.74}\) & \(-8.98\) & \(-7.49^{+0.20}_{-0.20}\) \\ FeH & \(-9.49^{+1.25}_{-1.49}\) & \(-11.09\) & \(-7.71^{+0.09}_{-0.12}\) \\ TiO & \(-8.19^{+2.00}_{-1.79}\) & \(-5.29\) & \(-8.60^{+0.93}_{-2.19}\) \\ VO & \(-7.61^{+2.87}_{-2.60}\) & \(-7.70\) & \(-9.59^{+0.83}_{-1.44}\) \\ \hline \multicolumn{4}{c}{Temperature-Pressure\(b\)} \\ \hline \(T_{-4}\) (K) & \(715^{+204}_{-190}\) & 790 & \\ \(T_{-3}\) (K) & \(977^{+204}_{-218}\) & 1414 & \\ \(T_{-2}\) (K) & \(1500^{+39}_{-56}\) & 1532 & \\ \(T_{-1}\) (K) & \(1563^{+22}_{-24}\) & 1565 & \\ \(T_{0}\) (K) & \(1638\pm 25\) & 1587 & \\ \(T_{0.5}\) (K) & \(1871^{+20}_{-25}\) & 1813 & \\ \(T_{1}\) (K) & \(1897^{+13}_{-12}\) & 1877 & \\ \(T_{1.5}\) (K) & \(1901^{+13}_{-12}\) & 1883 & \\ \(T_{2}\) (K) & \(2064^{+88}_{-60}\) & 1929 & \\ \(T_{2.5}\) (K) & \(2103^{+97}_{-74}\) & 1981 & \\ \hline \multicolumn{4}{c}{Clouds} \\ \hline \(\alpha\) & & \(-7.73^{+0.71}_{-0.78}\) & \(-9.96\) & \(-2.66^{+0.63}_{-1.45}\) \\ \(\log_{10}(R_{\rm top}/\rm{bar})\) & \(-0.34^{+0.22}_{-0.24}\) & 0.02 & \(0.71^{+0.10}_{-0.06}\) \\ \(\log_{10}(\Delta P_{\rm cloud}/\rm{bar})\) & \(1.19^{+0.25}_{-0.22}\) & 1.21 & \(3.69^{+2.28}_{-3.38}\) \\ 
\(\log_{10}[\tau(\lambda_{0})]\) & \(0.61^{+0.17}_{-0.17}\) & 1.13 & \(a\) \\ \(\omega_{0}\) & & \(0.05^{+0.06}_{-0.04}\) & 0.08 & \(0.52^{+0.22}_{-0.29}\) \\ \hline \end{tabular} \end{table} Table 5: Median and MLE parameter values for the retrieval on 2M2224, as described in §5, as well as the ranges of retrieved parameters reported in Burningham et al. (2017). \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Cloudy Model} & \multicolumn{2}{c}{Cloud-free Model} \\ Name & Median & MLE & Median & MLE \\ \hline \hline \multicolumn{5}{c}{Fit Quality} \\ \hline \(\chi^{2}_{\nu}\) & & & 40.3 & & 60.5 \\ \(\Delta\log\)(Bayes Factor) & & 56 & & (0) & \\ \hline \multicolumn{5}{c}{Fundamental} \\ \hline \(R/R_{\rm J}\) & \(1.43\pm 0.05\) & \(1.45\) & \(1.74\pm 0.06\) & 1.66 \\ \(\log_{10}\bigl{[}g/\bigl{(}{\rm cm~{}s^{-2}}\bigr{)}\bigr{]}\) & \(3.32^{+0.24}_{-0.19}\) & \(3.67\) & \(3.43^{+0.35}_{-0.49}\) & 3.69 \\ \(M/M_{\rm J}\) & \(1.92^{+1.48}_{-0.70}\) & \(4.41\) & \(3.55^{+4.55}_{-2.29}\) & 6.62 \\ \(T_{\rm eff}\) (K) & \(1628^{+107}_{-107}\) & \(1584\) & \(1686^{+128}_{-18}\) & 1670 \\ C/O & \(0.53^{+0.15}_{-0.25}\) & \(0.66\) & \(0.87^{+0.03}_{-0.04}\) & 0.87 \\ Metallicity & \(-0.24^{+0.41}_{-0.35}\) & \(0.26\) & \(1.66^{+0.34}_{-0.38}\) & 1.85 \\ \hline \multicolumn{5}{c}{Gases (\(\log_{10}\) number abundance)} \\ \hline H\({}_{2}\)O & \(-3.73^{+0.30}_{-0.25}\) & \(-3.33\) & \(-2.52^{+0.40}_{-0.53}\) & \(-2.27\) \\ CO & \(-3.69^{+0.51}_{-0.60}\) & \(-3.02\) & \(-1.62^{+0.37}_{-0.51}\) & \(-1.36\) \\ CO\({}_{2}\) & \(-5.21^{+0.56}_{-1.10}\) & \(-4.38\) & \(-3.11^{+0.37}_{-0.53}\) & \(-2.89\) \\ H\({}_{2}\)S & \(-5.32^{+1.32}_{-0.38}\) & \(-6.15\) & \(-7.32^{+2.64}_{-2.92}\) & \(-7.94\) \\ Na+K & \(-9.38^{+3.15}_{-1.86}\) & \(-9.95\) & \(-4.49^{+2.64}_{-3.99}\) & \(-3.51\) \\ CrH & \(-9.97^{+0.66}_{-0.82}\) & \(-9.37\) & \(-7.65^{+0.52}_{-0.71}\) & \(-7.41\) \\ FeH & \(-7.95^{+0.30}_{-0.27}\) & \(-7.57\) & \(-7.19^{+0.45}_{-0.54}\) & \(-6.77\) \\ TiO & \(-7.43^{+0.35}_{-0.30}\) & \(-7.07\) & \(-5.74^{+0.40}_{-0.51}\) & \(-5.63\) \\ VO & \(-8.39^{+0.34}_{-0.33}\) & \(-8.14\) & \(-7.71^{+0.41}_{-0.53}\) & \(-7.50\) \\ \hline \multicolumn{5}{c}{Temperature-Pressure} \\ \hline \(T_{-4}\) (K) & \(1499^{+48}_{-53}\) & \(1516\) & \(1349^{+80}_{-115}\) & \(1514\) \\ \(T_{-3}\) (K) & \(1720^{+33}_{-36}\) & \(1737\) & \(1525^{+37}_{-43}\) & 1543 \\ \(T_{-2}\) (K) & \(1791^{+32}_{-35}\) & \(1802\) & \(1638^{+42}_{-47}\) & 1668 \\ \(T_{-1}\) (K) & \(2156^{+51}_{-54}\) & \(2127\) & \(1848^{+44}_{-46}\) & 1902 \\ \(T_{0}\) (K) & \(2172^{+50}_{-54}\) & \(2140\) & \(1874\pm 39\) & 1971 \\ \(T_{0.5}\) (K) & \(2233^{+39}_{-42}\) & \(2191\) & \(1887^{+39}_{-42}\) & 1978 \\ \(T_{1}\) (K) & \(2618^{+102}_{-100}\) & \(2499\) & \(2041^{+429}_{-133}\) & 2901 \\ \(T_{1.5}\) (K) & \(3120^{+304}_{-253}\) & \(3107\) & \(2373^{+687}_{-380}\) & 3440 \\ \(T_{2}\) (K) & \(3270^{+344}_{-280}\) & \(3146\) & \(2728^{+607}_{-566}\) & 3579 \\ \(T_{2.5}\) (K) & \(3468^{+339}_{-319}\) & \(3176\) & \(2988^{+677}_{-742}\) & 3735 \\ \hline \multicolumn{5}{c}{Clouds} \\ \hline \(\alpha\) & & \(-7.15^{+0.66}_{-0.73}\) & \(-7.05\) & & \\ \(\log_{10}(P_{\rm top}/{\rm bar})\) & \(-3.34^{+0.74}_{-0.45}\) & \(-2.34\) & & \\ \(\log_{10}(\Delta P_{\rm cloud}/{\rm bar})\) & \(1.72^{+0.45}_{-0.74}\) & \(0.75\) & & \\ \(\log_{10}[\tau(\lambda_{0})]\) & \(0.70\pm 0.10\) & \(0.81\) & & \\ \(\omega_{0}\) & \(0.993^{+0.002}_{-0.003}\) & \(0.996\) & & \\ \(f_{\rm cloud}\) & \(0.89^{+0.60}_{-0.07}\) & \(0.83\) & & \\ 
\hline \multicolumn{5}{c}{Photometric Calibration} \\ \hline Calibration factor (\(J\)) & \(1.05\pm 0.04\) & \(1.07\) & \(0.97^{+0.05}_{-0.04}\) & \(0.89\) \\ Calibration factor (\(H\)) & \(0.88\pm 0.02\) & \(0.89\) & \(0.96\pm 0.02\) & \(0.93\) \\ \hline \end{tabular} \end{table} Table 6: Median and MLE parameter values for the four model configurations used to retrieve on the spectra of HD 106906 b. of \(-9.97\) dex, the model essentially ignores the alkali absorption features in its fit. This is surprising since there are Figure 16: Initial retrieval fits to the HD 106906 b spectrum. The data are down-sampled to a maximum resolution of \(\approx 500\), a factor of 4–8 lower than the original. The fit using the full dataset is shown in light red. The shaded pink regions indicate where Daemgen et al. (2017) identified contiguous or near-contiguous regions of suspected high telluric contamination that could not be reliably fully removed in reduction, thus introducing potential residual systematics. The fits with each band individually are shown in the various non-red colors in each band (purple for \(J\), green for \(H\), and yellow for \(K_{\rm s}\)). The fit using the data across all bands, but without the high-telluric regions, is shown in darker red. Both the full and single-band fits perform most poorly in fitting the data in the high-telluric regions, especially on the blue ends of each band. Figure 17: Spectra and contribution functions of the retrieved forward model fits to the HD 106906 b data. The forward model spectra (using the MLE parameter values) are in color, with the results from the cloudy model in red and those of a cloud-free model in blue. The data are shown in grey. The retrievals are performed with the data binned down to a resolution 4–8 times lower than its original, to mitigate the potential effects of binning errors from the opacity tables. Prominent alkali lines in the \(J\) band data are marked with vertical dashed lines, including 2 KI doublets and a smaller NaI line. Immediately beneath the spectra are the contribution functions for the 3 principal carbon and oxygen bearing species. The deepest contours, outlined in solid colors, enclose the regions where the contribution function reaches \(>1\%\) of the total contribution within the atmospheric column at a given wavelength bin. Each successive contour denotes 2 orders of magnitude smaller fractional contribution (here, \(10^{-4}\)). prominent absorption lines of potassium (two KI doublets) in the \(J\) band. The failure of the model to capture these absorption lines appears to be a consequence of reducing the spectral resolution; previous attempts at retrievals were less successful at converging to a global atmospheric fit than the ones shown in this work, but had enough of the line shape at the original resolution to fit the abundances, as well as using the ratio of the KI doublet line depths as an additional constraint on gravity. Such models return alkali log abundances ranging from about \(-5\) to \(-7\), but also return infeasibly high gravities, exceeding \(\log g=6\) unless a restrictive prior is used. The preferred T-P profile is shallow in its temperature gradient from the top of the atmosphere to a pressure of several bars, after which point the temperature rapidly increases to approximately 3100 K at several tens of bars. The profile then returns to a nearly isothermal behavior to the base of the model atmosphere. 
Figure 18 shows the profile along with its cloud-free counterpart and a cloud-free SONORA brown dwarf profile interpolated to match the maximum-likelihood gravity and effective temperature from the cloudy model. In contrast with the radiative-convective equilibrium profile from SONORA, with a shallow thermal gradient gradually increasing to a higher adiabatic gradient at the radiative-convective boundary, our retrieved profile can be described as nearly isothermal layers for the log-pressure ranges of \(\sim-3\) to \(-2\), again from \(\sim-1\) to 0.5, and, at least for the cloudy model, a nearly isothermal layer at the deepest \(\sim 1\) dex of the model pressure range. These nearly isothermal "layers" are punctuated with comparatively rapid temperature increases. The majority of the contribution from the major absorbers (H\({}_{2}\)O, CO, and CO\({}_{2}\)) comes from pressures of \(\sim 1\) mbar-1 bar. The full posterior distributions for the cloud parameters are shown in Figure 21. The distribution of cloud top pressures ranges from the very top of the model (\(10^{-4}\) bar) to a few mbar, and the top pressure is strongly correlated with the depth of the cloud. The maximum pressure of the cloud appears to be the most important parameter here, which when combined with the fact that most of the gas contribution to the emission is beneath this maximum pressure, implies that the model prefers whichever cloud layer can produce some fixed total column optical depth. With a highly negative power-law exponent (\(\alpha=-7.15^{+0.66}_{-0.73}\), best-fit value Figure 18: The vertical temperature-pressure profiles of the retrieved forward model fits to the HD 106906 b data. We show the MLE, median, and 95% confidence interval of the retrieved T-P profiles for the cloud-free (blue) and cloudy (red) models, the latter of which is described in §3.3. Also plotted is the (cloud-free) SONORA model for a brown dwarf at the gravity and effective temperature of our best-fit cloudy model, with the change from radiative to convective behavior occurring at a few tenths of a bar. Our retrieved profiles in contrast vary much less in temperature with pressure down to the expected radiative-convective boundary for an object at this effective temperature. \(-7.05\)), clouds produce significant opacity only for the \(J\) band. As with the retrieved clouds for 2M2224, this might not reflect an accurate constraint on actual cloud opacity, but for the model is a way to suppress emission in the bluest wavelengths without an obvious physical interpretation. The retrieved distribution of the single-scattering albedo \(\omega_{0}\) is tightly distributed and is close to the upper limit. The covering fraction (\(f_{\rm cloud}\)) distribution is consistent with but not centered at 1, which corresponds to a near global coverage of a very reflective cloud. Figure 19: A selection of parameters of the retrieved forward model fits to the HD 106906 b data, shown as 1-D and 2-D histograms in a corner plot of the retrieved posterior distributions for selected parameters. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 2. ### The Cloud-free Model The spectral fits, contributions, profiles, and posterior distributions are plotted with the cloudy solution in Figures 17-21. Excluding clouds entirely in our model returns a worse fit quality, with \(\chi^{2}_{\nu}=61\). 
The log-ratio of the Bayes factors is 56, indicating the cloudy solution provides overwhelmingly stronger evidence than the cloud-free solution. The main reduction in fit quality is in the \(J\) band, consistent with where the cloudy solution places most of its cloudy opacity. While the gravity (\(\log g=3.43^{+0.35}_{-0.49}\), best-fit value 3.69) is consistent with the cloudy fit, the radius of the Figure 20: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of gas parameters from the samples in the cloudy (red) and cloud-free (blue) forward model fit to the HD 106906b data. The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 2. cloud-free solution increases to \(1.74\pm 0.06\)\(R_{\rm J}\), and the abundances of all species except H\({}_{2}\)S increase by amounts ranging from 0.5-2 dex. The degeneracy between gravity and the molecular abundances is even stronger than that of the cloudy atmospheres, with the CO abundance rising more than the H\({}_{2}\)O abundance, yielding a C/O distribution of \(0.87^{+0.03}_{-0.04}\). Since the H\({}_{2}\)O contribution dominates in the \(J\) band, it is possible that the relatively poor fit of the cloud-free model in the \(J\) band affects the accuracy of the retrieved C/O ratio. The T-P profile shows a similar shape to that of the cloudy profile, albeit shifted by roughly 100-200 K from the model top to a pressure of a few bars, Figure 21: Single-parameter (1-D) and parameter-versus-parameter (2-D) posterior distributions of cloud parameters from the samples in the cloudy forward model fit to the HD 106906b data. The cloud opacity is modeled as a power law in wavelength, as described in §3.3. The median value and 68% confidence interval of each parameter are shown at the top of each column; the full list of median, interval range, and MLE values are shown in Table 2. deeper than which the gradient increases in a similar fashion to the cloudy profile, but with a much higher uncertainty that is also consistent with an isothermal profile. ## 7 Discussion In the comparison of our results from SS5 concerning 2M2224 with those of Burningham et al. (2017), we retrieve a C/O ratio whose confidence interval overlaps that of their posterior distributions -- but our model disagrees on nearly everything else, including the radius, gravity, T-P profile, cloud properties, and absolute abundances. In the case of 2M2224, we have the hindsight provided by Burningham et al. (2021) that, with mid-infrared data, they were able to distinguish between specific cloud particle compositions. As a result, the slope of their T-P profile decreased considerably, as well as the retrieved abundances of H\({}_{2}\)O and CO by roughly 0.4 dex. Therefore, it is not surprising that different retrieval codes disagree when the available wavelength ranges differ. However, the fact that our codes nevertheless converged on the same C/O ratio is promising. A recent analysis from Rowland et al. (2023) shows that in the L-dwarf regime, and especially earlier L dwarfs, the choice of T-P parametrization (or non-parametrization) matters in the near-infrared: more restrictive or smoothed parametrizations may bias the retrieved parameters. Therefore, the difference between our retrieved atmospheric profiles and abundances and those of Burningham et al. 
(2017) may lie primarily in our differing choices of T-P parametrization. Additionally, both of the retrievals on 2M2224 assume a constant abundance with pressure for all species; Rowland et al. (2023) find that one must account for non-uniform abundances in FeH in particular due to rainout chemistry in order to avoid biasing the retrieved T-P profile. This effect is strongest for early L dwarfs, and may not bias the existing results as severely for 2M2224 at a spectral type of L4.5. However, it suggests we may consider non-uniform chemistry in future retrievals for objects such as HD 106906 b, which lies at L0.5. We discuss our current retrieval results below. Our retrieved C/O ratio of \(0.53^{+0.15}_{-0.25}\) for HD 106906 b is entirely consistent with our estimate for the C/O ratios of fellow members of the Sco-Cen association (\(0.52\pm 0.11\), as in SS2.2). Therefore, our results with the cloudy model do not rule out a stellar-like, brown dwarf companion formation pathway for HD 106906 b. However, our model returns a T-P profile whose shape is unlikely to be entirely physical, alternating between regions of nearly isothermal behavior with regions of rapid temperature increases. Unlike the shallow wavelength dependence of the cloud opacity in our simulated models, the cloudy fits on the HD 106906 b data show a large negative exponent, indicating clouds contribute primarily in the \(J\) band, but relatively little at redder wavelengths. This means that the differences matter more in the retrieved profiles between the cloud-free and cloudy models; both fit the \(H\) and \(K_{\rm s}\) band spectra with similar quality, but the cloudy model adjusts the absolute abundances and temperatures in accordance with the cloud constraints from the \(J\) band, "breaking" the degeneracy between the T-P profile, gravity, and absolute abundances. In the retrieval of 2M2224's atmosphere, we stopped short of invoking the interpretation of Tremblin et al. (2015) to characterize the shallow thermal gradients, as our models retrieved significant cloud opacity across the wavelength range of our data. Here in contrast the cloudy model prefers little to no cloud opacity in \(H\) and \(K_{\rm s}\), which keeps viable the interpretation of the data as representing a thermo-chemical instability driven by dis-equilibrium chemistry. However, the confidence in any claim, whether in the accuracy of absolute molecular abundances or characterizing the vertical structure, is limited in the absence of wider spectral wavelength coverage that can increase the precision of the retrieved profile and also capture key signatures of cloud condensate species. As mentioned above, the retrieved cloud opacity is heavily biased toward shorter wavelengths, with the cloud opacity only reaching an optical depth of \(\sim 1\) in the \(J\) band. Such a strong wavelength dependence, with a power-law exponent of \(-7\), is likely not to be attributable to a specific condensate in the atmosphere, and may be a combination of some cloud opacity (e.g. SiO\({}_{2}\), as seen in the constraints in the condensate pressures of Burningham et al., 2021) and potential remaining systematics in the shortest wavelengths. This potential degeneracy is likely only resolved with broader wavelength coverage, which is proving increasingly invaluable for accurate atmospheric characterization, and/or a more sophisticated treatment of clouds, such as modeling multiple distinct cloud layers. 
An additional drawback to modeling clouds using a functional form for the opacity, rather than incorporating scattering from model cloud particles, is that we are not able to account for any amount of carbon and oxygen contained within the clouds. Burningham et al. (2021) were able to determine that the choice of cloud model has an effect on their C/O ratio constraints at only around the 1% level, which means the C/O ratios are consistent within their retrieved uncertainties. In their case it was primarily because they found their oxygen-bearing clouds to reside primarily at pressures shallower than the photosphere, meaning their contribution to the overall oxygen budget was \(\sim\)1%. We can make a first-order estimate of the maximum effect of silicate condensation on our C/O ratio by following the prescription of Burrows and Sharp (1999), used in Line et al. (2015, 2017); Burningham et al. (2021), that assumes on average 3.28 atoms of oxygen are sequestered per silicon atom in silicate condensates. Since our retrieved metallicity distribution is consistent with solar, as are our abundance distributions for the major oxygen-containing species when compared with solar-metallicity equilibrium models, we can estimate that a maximum of \(\sim 16\%\) of our atmospheric oxygen may be held in silicate clouds. Our best-fit C/O ratios would then drop to as low as 0.55 (vs. 0.66), with the retrieved range updated to \(0.45^{+0.13}_{-0.21}\) (vs. \(0.53^{+0.15}_{-0.25}\)), still consistent with the association C/O. However, a true accounting of the oxygen budget in condensates will necessitate a more careful treatment of clouds than this work provides. The limitations of the near-infrared in characterizing atmospheres in this temperature range are now well-established. Therefore, our work serves as a preparation for future retrievals, taking advantage of a broader wavelength coverage, on this and other similar planetary-mass companions. _JWST_ now allows high-resolution, high signal-to-noise emission spectra of spatially resolved, very low-mass companions, and the largest benefit to retrievals is its ability to extend to the mid-infrared. In the case of HD 106906 b, GTO observations with _JWST_ are already scheduled that will capture both an \(R\sim 1000\) spectrum using NIRSpec (G395M, \(\lambda=2.87\)-5.27 \(\mu\)m) and a low resolution (\(R\sim 100\)) MIRI LRS spectrum spanning 5-12 \(\mu\)m. The results of Burningham et al. (2021) have suggested that extending into the mid-infrared not only constrains specific cloud compositions, but also significantly increases the range of the spectrum little affected by cloud opacity, which can allow for more accurate constraints on both the T-P profile and gas opacities. The results for 2M2224 suggests that the relative gas abundances may be robust to limitations in wavelength range, but that one should not expect consistent gravity, T-P profile, or cloud constraints unless one has longer wavelength coverage. This being said, we are still limited to regions of the atmosphere that can be seen in emission; longer wavelengths will tend to probe cooler regions, which for directly imaged companions without thermal inversions will mean shallower pressures. The deepest parts of the atmosphere beneath the photosphere for these wavelengths, and/or beneath optically thick cloud layers, may still be inaccessible. This means that a complete picture of metrics such as the C/O ratio are likely to be still out of reach. 
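The first-order oxygen-sequestration estimate described above can be written out explicitly. The sketch below assumes the Burrows and Sharp (1999) prescription of 3.28 oxygen atoms removed per silicon atom and a standard solar Si/O ratio; the solar abundances are an assumption, and with the Asplund et al. (2009) values the sequestered fraction comes out closer to 20% than the \(\sim 16\%\) adopted in the text, which depends on the abundance compilation and on which oxygen reservoirs are counted. The C/O correction itself reproduces the numbers quoted above.

```python
# Approximate solar photospheric abundances, log10(N_X/N_H) + 12
# (e.g. Asplund et al. 2009); treated here as an assumption.
LOG_EPS_O, LOG_EPS_SI = 8.69, 7.51

def oxygen_fraction_in_silicates(o_per_si=3.28):
    """Fraction of the total oxygen budget removed if every Si atom
    sequesters o_per_si oxygen atoms into silicate condensates."""
    n_si_over_o = 10.0 ** (LOG_EPS_SI - LOG_EPS_O)
    return o_per_si * n_si_over_o

def corrected_c_to_o(c_to_o_gas, f_oxygen_sequestered):
    """If a fraction f of the oxygen is hidden in condensates, the true C/O
    is lower than the gas-phase value by the factor (1 - f)."""
    return c_to_o_gas * (1.0 - f_oxygen_sequestered)

f_seq = oxygen_fraction_in_silicates()   # ~0.2 with these abundances; the text adopts ~0.16
print(corrected_c_to_o(0.66, 0.16))      # best fit: 0.66 -> ~0.55
print(corrected_c_to_o(0.53, 0.16))      # median:   0.53 -> ~0.45
```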
## 8 Conclusions We use an atmospheric retrieval code, the APOLLO code, in its first application to a cloudy L dwarf spectrum. Our goal is to constrain formation pathways of companions to young stars. Signatures of their formation as either binary star-like or planet-like (i.e. formed in a disk) should be imprinted in their chemistry, using metrics such as the C/O ratio and metallicity. From the analysis of our model results, we conclude that: * Based on our self-retrieval results, the wavelength range and signal-to-noise of the HD 106906 b is sufficient to accurately constrain the C/O ratio for simulated data. Cloud-free models can retrieve a similarly accurate C/O ratio but are not preferred statistically when cloud opacity is present in the data, and in that case may return inaccurate gravities, radii, and particularly bias toward high molecular abundances. * When comparing our retrieval results on the field L dwarf 2M2224-0158 with those of Burningham et al. (2017), we find a consistency in our C/O ratios but a disagreement in the T-P profiles, cloud properties, and molecular abundances. This warrants a similar interpretation to those in Burningham et al. (2017) and Molliere et al. (2020) where a degeneracy is seen, especially in the near-infrared, between retrieved cloud properties and the T-P profile. * Our best-fitting model for HD 106906 b yields a C/O ratio of \(0.53^{+0.15}_{-0.25}\), consistent with the range of C/O ratios estimated for members of the Sco-Cen association (\(0.52\pm 0.11\)). This implies that we cannot rule out the hypothesis that HD 106906 b formed via the pathway expected for a brown dwarf companion to HD 106906, in contrast with a planet-like pathway. * However, our solution for the atmospheric emission of HD 106906 b yields negligible cloud opacity in the \(H\) and \(K_{\rm s}\) bands, which along with a shallow temperature gradient at pressures less than a few bars, suggest that our results point to a cloud-temperature degeneracy. * As with many other retrievals of objects at similar masses and temperatures, additional data in the mid-infrared (\(\gtrsim 10\)\(\mu\)m) will be helpful in breaking the degeneracies in atmospheric structure and composition. _JWST_ is currently observing directly imaged companions, obtaining spectra at resolutions \(\gtrsim 1000\) at wavelength ranges not obtainable from the ground. By expanding the region where clouds are expected to contribute little, such as the thermal infrared region of \(\sim 3\)-\(5~{}\mu\)m, we can better constrain the thermal structure and gas abundances, and by extension both the gravity and metallicity. Additionally, \(R\sim 100\) spectra are available through the mid-infrared instrument (MIRI), which extends the wavelength range into the realm where cloud-specific features -- such as those from enstatite -- are visible in emission. Follow-up observations in these wavelength ranges are planned for HD 106906 b that will allow us to employ a cloud model that more directly models specific cloud condensates. Additionally, while we have not resolved the discrepancy in brightnesses in the near-infrared between ground- and space-based observations, additional wavelength coverage into the thermal and mid-infrared with _JWST_ will also help us investigate this apparent disagreement. At the same time, accurate C/O ratios and metallicities of more companion hosts are needed to directly compare with the retrieved chemistry of the companions. 
In either case, retrievals on either ground-based or space-based data will benefit greatly from a set of standardized inter-model comparisons of results from various retrieval codes, to test how each model's treatment of the physics affects the inferred atmospheric properties. We would like to acknowledge Dr. Natasha Batalha, whose advice and expertise on retrieval and statistical methods has been valuable in improving the rigor of this work. We also would like to thank the careful thought and feedback of our referees. We are grateful for support from NASA through the _JWST_ NIRCam project though contract number NAS5-02105 (M. Rieke, University of Arizona, PI). APOLLO (Howe et al., 2017, 2022), Astropy (Astropy Collaboration et al., 2013), Jupyter (Kluyver et al., 2016), Matplotlib (Hunter, 2007), Numpy (van der Walt et al., 2011), **Pandas**(McKinney, 2010), PICASO (Batalha et al., 2019), Scipy (Jones et al., 2001), Synphot (STScI Development Team, 2013, 2018)
2309.06932
EPOCHS VIII. An Insight into MIRI-selected Galaxies in SMACS-0723 and the Benefits of Deep MIRI Photometry in Revealing AGN and the Dusty Universe
We present the analysis of the stellar population and star formation history of 181 MIRI selected galaxies at redshift 0-3.5 in the massive galaxy cluster field SMACS J0723.3-7327, commonly referred to as SMACS0723, using the James Webb Space Telescope (JWST) Mid-Infrared Instrument (MIRI). We combine the data with the JWST Near Infrared Camera (NIRCam) catalogue, in conjunction with the Hubble Space Telescope (HST) WFC3/IR and ACS imaging. We find that the MIRI bands capture PAH features and dust emission, significantly enhancing the accuracy of photometric redshift and measurements of the physical properties of these galaxies. The median photo-z's of galaxies with MIRI data are found to have a small 0.1% difference from spectroscopic redshifts and reducing the error by 20 percent. With MIRI data included in SED fits, we find that the measured stellar masses are unchanged, while the star formation rate is systematically lower by 0.1 dex. We also fit the median SED of active galactic nuclei (AGN) and star forming galaxies (SFG) separately. MIRI data provides tighter constraints on the AGN contribution, reducing the typical AGN contributions by ~14 percent. In addition, we also compare the median SED obtained with and without MIRI, and we find that including MIRI data yields steeper optical and UV slopes, indicating bluer colours, lower dust attenuation, and younger stellar populations. In the future, MIRI/MRS will enhance our understanding by providing more detailed spectral information and allowing for the study of specific emission features and diagnostics associated with AGN.
Qiong Li, Christopher J. Conselice, Nathan Adams, James A. A. Trussler, Duncan Austin, Tom Harvey, Leonardo Ferreira, Joseph Caruana, Katherine Ormerod, Ignas Juodžbalis
2023-09-13T13:12:40Z
http://arxiv.org/abs/2309.06932v1
EPOCHS VIII. An Insight into MIRI-selected Galaxies in SMACS-0723 and the Benefits of Deep MIRI Photometry in Revealing AGN and the Dusty Universe ###### Abstract We present the analysis of the stellar population and star formation history of 181 MIRI selected galaxies at redshift \(0-3.5\) in the massive galaxy cluster field SMACS J0723.3-7327, commonly referred to as SMACS0723, using the James Webb Space Telescope (JWST) Mid-Infrared Instrument (MIRI). We combine the data with the JWST Near Infrared Camera (NIRCam) catalogue, in conjunction with the Hubble Space Telescope (HST) WFC3/IR and ACS imaging. We find that the MIRI bands capture PAH features and dust emission, significantly enhancing the accuracy of photometric redshift and measurements of the physical properties of these galaxies. The median photo-\(z\)'s of galaxies with MIRI data are found to have a small 0.1% difference from spectroscopic redshifts and reducing the error by 20%. With MIRI data included in SED fits, we find that the measured stellar masses are unchanged, while the star formation rate is systematically lower by 0.1 dex. We also fit the median SED of active galactic nuclei (AGN) and star forming galaxies (SFG) separately. MIRI data provides tighter constraints on the AGN contribution, reducing the typical AGN contributions by \(\sim\)14%. In addition, we also compare the median SED obtained with and without MIRI, and we find that including MIRI data yields steeper optical and UV slopes, indicating bluer colours, lower dust attenuation, and younger stellar populations. In the future, MIRI/MRS will enhance our understanding by providing more detailed spectral information and allowing for the study of specific emission features and diagnostics associated with AGN. keywords: galaxies: formation - galaxies: general - galaxies: photometry - galaxies: star formation ## 1 Introduction In the vast expanse of the universe, many galaxies remain hidden behind a veil of dust, rendering them challenging to observe using traditional optical telescopes (e.g. Asboth et al., 2016; Fudamoto et al., 2017; Reuter et al., 2020). Dust particles can absorb or scatter the emitted light, obstructing our view and limiting our understanding of their properties and evolution. However, the advent of the James Webb Space Telescope (JWST) and its successful commissioning has opened up a new era of exploration at infrared wavelengths (Menzel et al., 2023; Rigby et al., 2023). JWST has started to revolutionise our ability to study the dusty universe by enabling deep imaging and spectroscopy in the 1-30 \(\mu\)m wavelength range. Its new capabilities, including high sensitivity and exceptional spatial resolution, have propelled our investigations into the basic features of galaxies (e.g. Pontoppidan et al., 2022; Adams et al., 2023; Castellano et al., 2022; Naidu et al., 2022; Yan et al., 2022; Harikane et al., 2022). By delving deep into the universe with this imaging, we can uncover intricate details about galaxy structures, stellar populations, and the interplay between stars, gas, and dust. Furthermore, the JWST's infrared observations provide valuable insights into star formation processes, dust distribution, and the activity of supermassive black holes at the centers of galaxies. The emission from dust can be divided into three broad components as wavelength increases towards the red. 
Around rest-frame 8 \(\mu\)m, the mid-infrared range is dominated by features known as polycyclic aromatic hydrocarbon (PAH) bands (Allamandola et al., 1989). These PAHs can absorb UV photons and re-emit the absorbed energy as fluorescence at longer wavelengths, typically in the mid-infrared range. As the wavelength increases beyond the mid-infrared range, the emission is progressively taken over by very small, warm grains. At higher radiation field intensities, equilibrium emission from these warm grains becomes dominant. Beyond 100 \(\mu\)m, the emission is increasingly attributed to larger, relatively cold grains. While the _Spitzer_ Space Telescope allowed observations in this mid-infrared range, it had severe limitations in sensitivity and resolution at longer wavelengths (e.g., Ashby et al., 2015; Timlin et al., 2016; Nayyeri et al., 2018). The James Webb Space Telescope's Mid Infrared Instrument (MIRI) has made significant advancements over this, offering higher sensitivity at a magnitude limit as deep as \(\sim\)29 mag (and perhaps beyond) and with sub-arcsecond resolution (Wright et al., 2023; Rigby et al., 2023). The advanced capabilities of MIRI thus enable more precise investigations into the impact of dust on star formation and galaxy evolution, as well as the analysis of PAH features in the mid-infrared (see Figure 1), surpassing the limitations of optical and earlier infrared observations. In principle longer wavelengths can be used to find AGN and this is another advantage that MIRI has over what can be carried out with just NIRCam to find and characterise these objects, although see Jodzbalis et al. (2023). With these motivations in mind, we have selected a well-studied, strong lensing galaxy cluster field, SMACS 0723 (Medezinski et al., 2007; Ebeling et al., 2010; Repp & Ebeling, 2018) to carry out an analysis of the uses of MIRI data for uncovering galaxy properties. Previous research on this cluster field has been conducted using various telescopes and instruments, including _Chandra, VLT/MUSE, Subaru_, the _Hubble Space Telescope_ (Reionization Lensing Cluster Survey; Coe et al., 2019), and _Planck_ (e.g., Richard et al., 2021; Lagattuta et al., 2022; Golubchik et al., 2022; Mahler et al., 2023). Mahler et al. (2023) determined the cluster redshift to be \(z=0.3877\) based on a sample of 26 spectroscopically confirmed cluster members. They also derived a cluster velocity dispersion of \(\sigma\sim 1180\pm 160\) km s\({}^{-1}\). According to the Planck estimation, the total mass of the cluster is approximately 8.39\(\times 10^{14}\)M\({}_{\odot}\)(Coe et al., 2019). Previous infrared observations with the _Spitzer_ and _Herschel_ Space Telescopes have revealed the presence of a population of dusty, infrared-luminous, red-sequence galaxies in the SMACS0723 field. In this paper, we use JWST MIRI observations of SMACS0723 to study the role of MIRI in measuring photometric redshifts of distant galaxies and to study the physical properties of the potentially dusty and AGN galaxies which are obscured at optical bands. This is important as we know that the fraction and amount of AGN at high-\(z\) is perhaps surprisingly high (e.g., Juodzbalis et al., 2023). Thus it is important to determine how we can measure the amount of AGN and their contribution to galaxy SEDs. Thus, this paper focuses on the selection and analysis of dusty galaxies selected by MIRI bands in conjunction with HST and JWST/NIRCam data. The structure of the paper is organised as follows. 
We describe the JWST and the ancillary datasets used in this study and the data reduc Figure 1: Plot showing the JWST and HST filters we use as well as SEDs for representative AGNs and SFGs. The broadband coverage of the AGN (Seyfert 2 galaxy) and starburst galaxy (NGC6090) templates (\(\lambda F_{A}\), in relative units of erg s\({}^{-1}\)) at different redshift bins (Weedman et al., 2006) are shown. The top panel presents the AGN and star forming galaxy templates, while the bottom panel displays the relative transmission functions for various filters: HST/ACS and WCS3/IR (F435W, F606W, F814W, F105W, F125W, F1400W, and F160W), JWST/NIRCam (F0900, F150W, F200W, F277W, F356W, and F444W), and JWST/MIRI (F770W, F1000W, F1500W, and F1800W). Emission lines and PAH features are appropriately labelled. Notably, the MIRI data enable us to probe the spectral energy distributions of galaxies up to \(\sim 5\mu\)m (at \(z=3\)) in the rest-frame, facilitating the characterization of PAH features and dust emission. tion process in SS2. We also describe the catalog generation process. In SS3, we present the MIRI selected sample and the physical properties from the spectral energy distribution (SED) fitting for the galaxy. In SS4, our study focuses on the notable advancements achieved through the utilisation of MIRI data. We examine the enhancements it brings to various aspects, such as the accuracy of redshift measurements, the characterisation of star populations in galaxies, and the impact on the SED analysis of both active galactic nuclei (AGN) and star-forming galaxies (SFG). In SS5, we provide a comprehensive summary of our findings and discuss the potential avenues for future research in this field. Throughout this paper, we assume a flat cosmological model with \(\Omega_{\Lambda}=0.7,\Omega_{m}=0.3\) and \(H_{0}=70\)km s\({}^{-1}\) Mpc\({}^{-1}\). All magnitudes used in this paper are in the AB system (Oke & Gunn, 1983). ## 2 Data reductions and catalog ### JWST NIRCam observations Observations of the SMACS-0723 galaxy cluster were taken on 2022 June 06, as part of the Early Release Observations (ERO) programme (ID: 2736, PI: K. Pontoppidan, Pontoppidan et al., 2022). The observations consist of 6 NIRCam photometric bands F090W, F150W, F200W, F277W, F356W, and F444W. The total integration time is 12.5 hr. Our NIRCam image reduction is performed using the procedure of Ferreira et al. (2022) and Adams et al. (2023). Below we summarise the procedure. The data were processed using the JWST Calibration Pipeline (v1.8.2 and CRDS v0995) using the default parameters for stages 1 and 2. This was the most up-to-date version at the time of writing, and includes the second round of post-flight calibrations. We then apply the 1/f noise correction1 derived by Chris Willott after stage 2. After stage 3, we subtract an initial flat background and carry out a 2-dimensional background subtraction. Then we align the final F444W image onto a GAIA-derived WCS using tweakreg, as part of the DrizzlePac python package. We then match all remaining filters to this derived F444W WCS.2 We then pixel-match the images to the F444W image with the use of astropy reproject.3 The final resolution of the drizzled images is 0.03 arcseconds/pixel. 
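As an illustration of the pixel-matching step mentioned above, the following sketch uses the reproject package to interpolate each NIRCam band onto the F444W pixel grid before dual-image photometry. This is a minimal sketch, not our reduction script; the file names and FITS extension choices are assumptions.

```python
from astropy.io import fits
from astropy.wcs import WCS
from reproject import reproject_interp

# Reference image: the GAIA-aligned F444W mosaic (file names are illustrative).
ref_hdu = fits.open("smacs0723_f444w_i2d.fits")["SCI"]
ref_wcs = WCS(ref_hdu.header)

for band in ["f090w", "f150w", "f200w", "f277w", "f356w"]:
    hdu = fits.open(f"smacs0723_{band}_i2d.fits")["SCI"]
    # Interpolate this band onto the F444W pixel grid so that dual-image
    # photometry uses pixel-matched frames.
    data_matched, footprint = reproject_interp(hdu, ref_wcs,
                                               shape_out=ref_hdu.data.shape)
    fits.writeto(f"smacs0723_{band}_on_f444w.fits", data_matched,
                 header=ref_wcs.to_header(), overwrite=True)
```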
Footnote 1: [https://github.com/chriswillott/jwst](https://github.com/chriswillott/jwst) Footnote 2: [https://github.com/spacetelescope/drizzlepac](https://github.com/spacetelescope/drizzlepac) Footnote 3: [https://reproject.readthedocs.io/en/stable/](https://reproject.readthedocs.io/en/stable/) We use the SExtractor(Bertin & Arnouts, 1996) version 2.8.6 to identify our sources. We run this in dual-image mode with the F444W image used for object selection. Here the apertures of all measurements should be consistent. MIRI's PSF FWHM is 0.5 arcseconds in the F1500W filter. Thus we conduct forced circular aperture photometry within 1.0 arcsecond diameters. We perform the aperture correction derived from simulated WebbPSF point spread functions4 Figure 2: The SMACS0723 fields of view overlaid on HST images (R-JWST/MIRI, GJ-WST/NIRCam, B:HST). Before generating the catalog, we produce a mask to avoid the diffraction spikes of bright stars and image artifacts. These masks cover diffraction spikes, the few remaining snowballs, regions of intra-cluster medium, and a buffer around the edges of the images. The imaging data is from HST, the green dotted boxes show the coverage of NIRCam, and the red dashed lines show the area imaged by MIRI. for each NIRCam band. We experimented with many different aperture photometry measurement methods and found that this one is the best for recovering accurately the fluxes of our galaxies. The effects of galactic extinction are negligible in these IR bands (\(<0.1\) mag), and thus are not applied. ### JWST MIRI observations MIRI observations for this field were taken on June 14th 2022, covering a specific area measuring \(112\arcsec\)6\(\times\)73\(\arcsec\)5. The data acquisition included observations in the F770W, F1000W, F1500W, and F1800W bands within this field. Two versions of the reduced data are generated for analysis. In the first version, the data is processed using the grizli reduced by Brammer et al. in prep5. For the second version, MIRI images are acquired from the Mikulski Archive for Space Telescopes (MAST), and the data underwent reduction using the standard JWST pipeline, similar to the process utilised for NIRCam data. A comparative analysis of the standard JWST pipeline reduced images reveal the presence of pronounced background patterns, specifically stripes and gradients, predominantly around the edges of the images. However, the central region of the image exhibits no discernible impact from these artefacts. Consequently, in this paper the grizli reduced images are employed due to their superior quality within the central region. The resulting drizzled images have a resolution of \(0.04\arcsec\). Footnote 5: Images and catalogs of JWST/MIRI in the SMACS0723 field processed with the grizli software pipeline: [https://zenodo.org/record/6874301](https://zenodo.org/record/6874301) We then align the images to NIRCam F444W matching systems with separations \(\Delta<0.05\arcsec\). We then run SExtractor version 2.8.6 (Bertin & Arnouts, 1996) in dual-image mode to detect objects in each field. The detection image we use is MIRI F770W. We use the F770W filter as it has the best sensitivity and angular resolution in the MIRI bands. The apertures of 1.0 arcsecs are the same as before. We also perform aperture corrections derived from simulated WebbPSF MIRI point spread functions6 for each band. 
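For readers who prefer a scriptable stand-in for the forced-aperture step, the sketch below shows an equivalent measurement with photutils: forced photometry in 1.0 arcsec diameter circular apertures at the positions taken from the detection image, followed by the tabulated aperture correction. This is only an illustration under stated assumptions, not the SExtractor-based measurement actually used in this work; the file names and the example correction value (the F770W entry in Table 1) are placeholders.

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

PIXEL_SCALE = 0.04          # arcsec / pixel for the drizzled MIRI mosaics
ZP_AB = 28.9                # AB zeropoint quoted for the MIRI images (Table 1)
APER_CORR_F770W = -0.202    # mag, from the WebbPSF encircled-energy curve (Table 1)

data = fits.getdata("smacs0723_f770w_matched.fits")   # illustrative file name
xy = np.loadtxt("detection_positions_xy.txt")         # (x, y) pixel positions from
                                                      # the detection image

# Forced photometry in 1.0 arcsec *diameter* apertures at the detected positions.
apertures = CircularAperture(xy, r=0.5 / PIXEL_SCALE)
phot = aperture_photometry(data, apertures)

flux = np.asarray(phot["aperture_sum"])
mag = ZP_AB - 2.5 * np.log10(np.clip(flux, 1e-12, None))
mag_corrected = mag + APER_CORR_F770W   # brighter after accounting for flux
                                        # falling outside the aperture
```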
This aperture corrections are essential as it allows us to measure photometry on different bands and then to normalise these measurements by correcting for the effects of using an aperture which by its nature limits the amount of flux measured. Footnote 6: [https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-performance/miri-point-spread-functions](https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-performance/miri-point-spread-functions) ### HST imaging observations HST observations of SMACS0723 are from the Reionization Lensing Cluster Survey (RELICS). This survey observed 41 massive galaxy clusters with Hubble and Spitzer at 0.4-1.7\(\mu\)m and 3.0-5.0\(\mu\)m, respectively. SMACS0723 (ID: GO 14017; Coe et al., 2019) was observed in one WFC3/IR pointing, with a total of ten orbits in WCS3/IR (F105W, F125W, F140W, and F160W) and ACS imaging (F435W, F606W, and F814W). The observational details and the HST data reduction are available from Coe et al. (2019). The image resolution is \(0.06\arcsec\). As mentioned before, before the source extraction we align the HST images to NIRCam F444W to a level of \(\Delta<0.05\arcsec\)Then we run SExtractor version 2.8.6 (Bertin & Arnouts, 1996) in dual-image mode to detect objects in the field with an aperture of 1.0 arcsec for photometry measured in each filter image. The weighted stack of all the HST images is the input detection image, the same as that in Coe et al. (2019). We also perform aperture corrections based on the ACS/WFC7 and WFC3/IR PSF8 encircled energy fraction. We correct all photometry for Galactic extinction using the IR dust emission maps of Schlafly and Finkbeiner (2011). Footnote 7: [https://www.stsci.edu/hst/instrumentation/acs/data-analysis/aperture-corrections](https://www.stsci.edu/hst/instrumentation/acs/data-analysis/aperture-corrections) Footnote 8: [https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration/ir-encircled-energy](https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration/ir-encircled-energy) ### Source Photometry and Cataloguing To generate a matched catalog for all the sources in SMACS0723, we use TOPCAT to combine SExtractor's HST and JWST catalogs. The maximum separation allowed is \(0.3\arcsec\), which is a good compromise between the false positive rate achieved and how restricted it is due to the size of MIRI's PSF. For the final catalogue, we use a forced circular 1\(\arcsec\) diameter aperture. This diameter is chosen to enclose the central/brightest \(-\)86 per cent of the flux of a point source of NIRCam and \(-\)83 per cent of MIRI, enabling us to use the highest SNR pixels to calculate galaxy colours while avoiding reliance on strong aperture corrections that can be as high as the actual measurements made. It is also consistent with the circular apertures of \(0.9\arcsec\) diameter in Papovich et al. (2023). Additionally, we create a composite mask to avoid image artefacts. These masks cover diffraction spikes, the few remaining snowballs in the NIRCam imaging, as well as regions of intra-cluster medium (in the NIRCam modules containing any foreground cluster), and a buffer around the edges of the observations. The remaining total unmasked region is \(\sim 2.3\) arcmin\({}^{2}\). We plot the NIRCam and MIRI observations overlaid on the HST ACS F606W image, in Figure 2. SExtractor is known to underestimate the photometric errors of sources. To ensure accurate measurements, we calculate the local depth of our final images. 
We place circular apertures (1\(\arcsec\)) in empty regions that are at least 1 arcsecond away from real sources in our images. We use the measured background flux in these apertures to derive a median depth for each field. We then calculate the photometric errors for each individual source using the nearest 200 empty apertures to determine the local depth. The 5 sigma depths of each band can be found in Table 1. Finally, we use robust methods to construct the final samples. The relevant selection criteria are described in Section 3.2. In total, 181 galaxies are matched and meet our selection criteria. To comprehensively detect all sources in this field, especially at high redshift, we also use NIRCam as the detection image and determine the corresponding measurements for HST and MIRI. Specifically, we employ SExtractor++ (Bertin et al., 2022) in dual-image mode to measure HST and MIRI fluxes for the NIRCam detections. For high-redshift galaxies at \(z>6.5\), their blue-ward bands (HST) are anticipated to appear faint or undetected due to the Lyman break. Unfortunately, none of the 12 candidates at \(z>6.5\) identified by NIRCam lie within the coverage of MIRI and HST. More detailed analysis of this \(z>6.5\) sample can be found in our EPOCHS paper I (Conselice in preparation). ## 3 MIRI selected galaxies In the following sections, we describe the main results of this paper. We outline SED fittings with and without MIRI, using cigale and EAZY, and identify the types of galaxies that are preferentially selected with MIRI included to the depths we are reaching. Additionally, we explore whether MIRI is capable of detecting more galaxies than using NIRCam alone. ### Spectral energy distribution modeling After generating our catalogues, we fit the spectral energy distributions of each source to derive photometric redshifts in several different ways. To calculate a preliminary photo-\(z\), we fit SEDs using cigale (Boquien et al., 2019). cigale better constrains the fluxes in the redder bands because it includes AGN contributions and more accurate dust templates compared to EAZY, which we use in other EPOCHS papers (e.g., Adams et al., 2023). Here we follow the setups used by Yang et al. (2023). We use the standard delayed-\(\tau\) ('sfhdelayed') star formation history within our fitting. We set the \(e\)-folding time and stellar age to vary from 0.5-5 Gyr and 1-5 Gyr, respectively. We use Bruzual & Charlot (2003) (BC03) templates for the stellar population (SSP) models, assuming a Chabrier (2003) initial mass function (IMF), with a solar metallicity of \(Z=0.02\). We also include within our fits the nebular module (Villa-Velez et al., 2021) for emission from the HII regions, with an ionisation parameter of \(\log U=-2.0\), a gas metallicity of 0.02, and a line width of 300 km/s. We use the 'skirtor2016' module to describe the AGN component (Stalevski et al., 2012, 2016), with the fraction of AGN, frac\({}_{\rm AGN}\), varying from 0 to 0.99 and the relative rest-frame wavelength \(\lambda_{\rm AGN}\) in the range of 3-30\(\mu\)m. The 9.7 \(\mu\)m optical depths allowed in our study include all available values: 3, 5, 7, 9, and 11. We fix the AGN viewing angle at 70 degrees to select obscured AGN, which is a typical value for type II AGN (Yang et al., 2020, 2022). We also use the 'dl2014' module developed by Draine et al. (2014) to calculate dust emission.
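Returning briefly to the depth estimates above: a minimal sketch of the empty-aperture local-depth calculation follows. The robust MAD-based scatter estimator and all function and variable names are our own assumptions; the placement of the blank-sky apertures is assumed to have been done already.

```python
# A sketch of the local 5-sigma depth: the flux scatter in the nearest 200
# blank-sky apertures sets the 1-sigma flux error per source, and the depth
# in AB magnitudes follows from the zeropoint.
import numpy as np
from scipy.spatial import cKDTree

def local_depth_5sigma(src_xy, empty_xy, empty_flux, zeropoint, n_near=200):
    tree = cKDTree(empty_xy)
    _, idx = tree.query(src_xy, k=n_near)                   # nearest empty apertures per source
    local = empty_flux[idx]
    sigma = 1.4826 * np.median(np.abs(local - np.median(local, axis=1, keepdims=True)),
                               axis=1)                      # robust (MAD-based) flux scatter
    return zeropoint - 2.5 * np.log10(5.0 * sigma)          # local 5-sigma depth in AB mag
```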
The dust emission comprises two components: a diffused emission and a photodissociation region (PDR) emission associated with star formation. In our fitting, we allow the fraction of PDR emission (\(\gamma\)) to vary from 0.01 to 0.9, the minimum radiation parameter (\(U\)min) to vary from 0.1, 1.0, 10, 50, and a maximum fixed value of \(U\)max = 10\({}^{7}\). The mass fraction of polycyclic aromatic hydrocarbon (PAH) in total dust is the same for both components, and we set it as [0.47, 2.5, 7.32]. For the dust attenuation, we adopt the 'dustatt' modified starburst module in cigale(Calzetti et al., 2000; Leitherer et al., 2002). The colour excess is set within the range \(E(B-V)=0-1\). In order to determine the most accurate photometric redshifts, we use the redshifting mode and a redshift grid ranging from \(z=0.0\) to 15.0, with a bin width of 0.1. We measure the properties of our sample of galaxies, including redshift, SFR, stellar mass, and fracAGN through both traditional least-\(\chi 2\) analysis and different types of Bayesian approaches. The latter methods take into account the full probability density functions (PDFs), and provides more comprehensive and informative results than the least-\(\chi^{2}\) approach (Boquien et al., 2019). In addition, we also utilise the EAZY photometric redshift code (Brammer et al., 2008) to assess the accuracy of the SED fitting derived from cigale, and EAZY, in conjunction with HST and NIRCam data. Our EAZY approach involves a modified Kroupa IMF (Kroupa \begin{table} \begin{tabular}{l c c c} \hline Instrument/ Filter & Zeropoint & Aperture correction & 5\(\sigma\) depths \\ & AB mag & AB mag & AB mag \\ (1) & (2) & (3) & (4) \\ \hline HST/F435W & 25.66 & -0.106 & 25.14 \\ HST/F606W & 26.50 & -0.095 & 25.39 \\ HST/F184W & 25.95 & -0.098 & 25.23 \\ HST/F160W & 26.27 & -0.136 & 25.17 \\ HST/F125W & 26.23 & -0.155 & 24.87 \\ HST/F140W & 26.45 & -0.164 & 24.67 \\ HST/F160W & 25.95 & -0.170 & 25.19 \\ NIRCam/F909W & 28.08 & -0.079 & 27.08 \\ NIRCam/F150W & 28.08 & -0.090 & 26.91 \\ NIRCam/F200W & 28.08 & -0.103 & 26.99 \\ NIRCam/F277W & 28.08 & -0.110 & 27.40 \\ NIRCam/F356W & 28.08 & -0.119 & 27.57 \\ NIRCam/F444W & 28.08 & -0.143 & 27.43 \\ MIRI/F770W & 28.9 & -0.202 & 24.95 \\ MIRI/F1000W & 28.9 & -0.326 & 25.15 \\ MIRI/F1500W & 28.9 & -0.369 & 24.65 \\ MIRI/F1800W & 28.9 & -0.421 & 24.18 \\ \hline \end{tabular} \end{table} Table 1: 5\(\sigma\) depths and correction factors of magnitude-zeropoints, apertures and extinctions. Figure 4: Colour-colour diagram of NIRCam and MIRI bands for the matched galaxies in SMACS0723. The symbols and points are otherwise the same as in Figure 3. Figure 3: Plot of observed NIRCam and MIRI mag-colour diagram for the matched robust galaxies in SMACS0723 field. The magnitude error is calculated using measurements of the local depth. The redder colour corresponds to higher redshift galaxies. A gradient in redshift can clearly be seen in the F444W-F770W colour. 2001) and the default templates (tweak_fsps_QSF_12_v3), which is comprised of younger stellar populations, lower metallicities, and more active star formation (Larson et al., 2022). The comparison between the redshift measurements obtained from these methods reveals a high level of concordance, with deviations typically falling within 15 percent, except for a small subset of targets (8/181) fit using EAZY at a redshift of approximately \(z\sim 6\). 
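For reference, the cigale set-up described above can be summarised compactly. The dictionary below is illustrative Python rather than the literal pcigale configuration syntax, and the sampling of the continuous ranges (e.g. \(\tau\), \(E(B-V)\), frac\({}_{\rm AGN}\)) is an assumption; only the ranges and fixed values are taken from the text.

```python
# Illustrative summary of the adopted cigale module grid (not pcigale.ini syntax).
import numpy as np

cigale_grid = {
    "sfhdelayed":  {"tau_main_Gyr": [0.5, 1, 2, 3, 4, 5], "age_main_Gyr": [1, 2, 3, 4, 5]},
    "bc03":        {"imf": "Chabrier (2003)", "metallicity": 0.02},
    "nebular":     {"logU": -2.0, "zgas": 0.02, "line_width_km_s": 300},
    "dust_attenuation_modified_starburst": {"E_BV": np.round(np.linspace(0.0, 1.0, 11), 2).tolist()},
    "dl2014":      {"qpah": [0.47, 2.5, 7.32], "umin": [0.1, 1.0, 10, 50],
                    "umax": 1e7, "gamma": [0.01, 0.1, 0.5, 0.9]},
    "skirtor2016": {"fracAGN": [0.0, 0.1, 0.3, 0.5, 0.7, 0.99],
                    "tau_9p7um": [3, 5, 7, 9, 11], "viewing_angle_deg": 70},
    "redshifting": {"redshift": np.round(np.arange(0.0, 15.0 + 0.05, 0.1), 1).tolist()},
}
```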
Due to the limited availability of dust and AGN templates at the red wavelengths, EAZY exhibits a less restrictive approach towards the data. It tends to primarily rely on the blue end of the data, using Lyman-break or Balmer-break techniques for redshift determination. This inclination can result in potential contamination when selecting samples with high redshifts at \(z>6\). It is important to note that the occurrence of such sources is relatively scarce. Therefore, when publishing high-redshift candidates, additional stringent selection criteria need to be employed for accurate screening, a topic which is discussed in our EPOCHS paper I (Conselice in prep.). Nevertheless, our results provide strong evidence supporting the reliability and stability of our SED fitting technique when leveraging the rich photometric information provided by HST and NIRCam observations. None the less, an important conclusion from our study is that some low redshift galaxies can be mistaken for high redshift ones without the use of MIRI data. ### A Robust Sample of MIRI selected galaxies In order to determine the physical properties of MIRI selected galaxies, we utilise the cigale SED fitting approach outlined in Section 3.1. We employ a series of selection criteria described below: 1. We require detections in both MIRI and NIRCam: \(\geq 5\sigma\) detection in 2 bands in MIRI, and \(\geq 5\sigma\) detections in 2 bands in NIRCam. 2. Removal of the sources close to the centre of the cluster, to avoid multiple imaging and excessive gravitational amplification caused by lensing. 3. Morphology checking to exclude non-galaxy targets, e.g. hot pixels, artefacts, blended features. 4. Matching with HST catalogue within \(0.3^{\prime\prime}\); for non-matched targets, we use SExtractor++ with the forced aperture to collect the flux at its position. 5. We require \(\chi^{2}_{red}<6\) for best-fitting SEDs to be classed as robust. 6. \(P(z_{gec})<0.5\times P(z_{phot})\) to ensure the probability of a secondary peak, if one exists, is less than 50% of the high-\(z\) solution. The broad emission features of PAHs in the 3-20 \(\mu\)m range are shifted to longer wavelengths with increasingly higher redshifts. As a result, these features are expected to dominate the flux at specific mid-infrared wavelengths, leading to significant redshift-dependent colour variations in broad-band photometry (Langerodoli and Hjorth, 2023). In Figure 3, we present the NIRCam and MIRI magnitude-colour (F444W vs. F444W-F770W) diagram for our sources, while the colour-colour (F444W-F770W vs. F770W-F1000W) diagram is shown in Figure 4. As explained earlier, we determine redshifts using a Bayesian analysis based on cigale fitting. In Figure 3, we observe a considerable number of cluster members that do not exhibit PAH emission and have low specific star formation rates (sSFR). Their redshifts are around \(z=0.4\) and they are located at the bottom of the mag-colour plot. In Figure 4, we find that galaxies primarily occupy the region towards the bottom left of the colour-colour diagrams, in several magnitudes of the flat-spectrum point located at position (0,0). Due to their colours, this region is likely populated by quiescent galaxies and higher redshift galaxies. We group our sources into the following primary categories based on the criteria above and primarily from the \(\chi^{2}_{red}\) fits. Figure 5 and Figure 6 summarize the cutout images and SED fitting results for each category. 
* AGN: The emission from AGN in the MIRI bands can arise from several components. One component is the thermal emission from the dusty torus surrounding the central black hole,(Fritz et al., 2006; Nenkova et al., 2008; Siebenmorgen et al., 2015) The temperature of the torus typically ranges from a few hundred to several thousand degree K, depending on the AGN's level of activity. This emission is influenced by the temperature and geometry of the torus, as well as the orientation of the system with respect to the observer. Another contribution from AGN at the MIRI bands is non-thermal emission originating from relativistic jets or outflows associated with the black hole (e.g. Honig and Kishimoto, 2017; Kakkad et al., 2023). These high-energy particles can produce synchrotron emission in the mid-infrared regime, which can be detected by MIRI. Disentangling the AGN contribution from other sources, such as star formation, allows for a more comprehensive analysis of the galaxy's overall emission and underlying processes. We will discuss this in more detail in Section 4.3. * High-\(z\) galaxies: With the broad wavelength coverage of the MIRI bands on JWST, several techniques can be employed to select \(z>2\) galaxies. Flux dropouts or steep declines in the spectral energy distribution (SED) due to Lyman and Balmer breaks can be identified as indicators of high-redshift sources. Additionally, MIRI enables the detection of key emission lines, such as optical emission lines and O iv] at 26 \(\mu\)m or PAH lines, which are redshifted to longer wavelengths for high-redshift sources, making them accessible in the MIRI bands. Leveraging the IR capabilities of JWST, we successfully applied the Lyman and Balmer break to select high-\(z\) objects that may be undetectable or faint in blue bands like HST and F115W. MIRI photometry provides robust constraints on the SEDs, enabling precise determinations of redshift and galaxy properties. In our final catalog, we identified 46 galaxies at \(z_{photo}>1\), of which 29 (63%) have confirmed high spectroscopic-\(z\) values (Carnall et al., 2023; Caminha et al., 2022; Noirot et al., 2023). For a detailed description of our extensive study on high-\(z\) objects at \(z>6.5\), see our EPOCHS paper I (Conselice in prep.). * Dusty star forming galaxies: MIRI offers a range of methods to search for dusty star-forming galaxies. The thermal emission from dust heated by UV/optical photons from young, massive stars can be detected using the MIRI bands. Moreover, The presence of PAH features at 6.2, 7.7, 8.6, 11.3, and 12.7 \(\mu\)m indicates actively star-forming galaxies, especially at high redshift (e.g. Langerodoi and Hjorth, 2023). Additionally, MIRI's broad wavelength coverage allows us to measure the spectral energy distribution (SED) shape and identifying characteristic features, such as the 9.7 \(\mu\)m silicate absorption line, providing insights into dust composition and distribution within these galaxies.(Rich et al., 2023) We employ the 'dl2014' module in cigale, which is comprised of a diffused emission and a PDR emission associated with star formation and PAH features. This fully considers the above situation and can effectively select dusty star-forming galaxies. * Quiescent galaxies: Quiescent galaxies are characterized by a low level of ongoing star formation and are typically associated with an older stellar population. These galaxies exhibit SEDs that peak at longer wavelengths, making them particularly noticeable in the MIRI bands. 
In the colour-colour diagram shown in Figure 4, quiescent galaxies tend to be found within a cluster at a redshift of \(z_{cl}=0.39\) and are observed to have a colour of (F444W - F1000W) \(\sim-0.5\) mag (AB), which is consistent with the predictions of the quiescent galaxies models (Figure 1 in Langeroodi & Hjorth 2023). Quiescent galaxies tend to cluster in the region towards the bottom-left of the stationary locus of the star-forming tracks. The position of these quiescent galaxies in this region are roughly independent of redshift due to their approximately power-law SEDs. We identified all the cluster galaxies occupying the region corresponding to quiescent galaxies using spectroscopic redshifts from MUSE observations (\(z=0.387\pm 0.02\)) as reported in Caminha et al. (2022). This is expected as various quenching mechanisms operate more efficiently in cluster environments (e.g., Donnari et al. 2021; Kim et al. 2022). Furthermore, in addition to the quiescent sources within the cluster, several quiescent galaxies at redshifts around \(z\sim 1-2\) have been discovered within overdensities associated with a significant number of star-forming galaxies (e.g., Noirot et al. 2023). We also check for sources with only MIRI detections, that are not found within NIRCam or HST observations. To ensure that we do not miss these sources, we utilised SExtractor++ and searched for detections with a 5\(\sigma\) threshold or higher on at least two MIRI bands. We then measure the NIRCam and HST flux at the same positions as before, using the same aperture and mask. Interestingly, we did not find any sources that are only detected solely with MIRI, indicating that NIRCam photometry is deep enough within this field and at the MIRI depth we study to capture all the IR bright sources. The 5\(\sigma\) depth of F770W and F1000W is 24.95 and 25.15 mags, which is 3 mags shallower than NIRCam F444W of 27.43 mag. This suggests that previous JWST work that relied solely on NIRCam detections is reliable in finding all galaxies to our MIRI depth. ## 4 Stars, Dust, and AGN Properties In this section we discuss the physical properties of our MIRI selected galaxies. We first explore their redshift, stellar mass and star formation history derived by cigale fitting and then we investigate how MIRI can improve the accuracy of these measurements. Additionally, we also analyse the AGN contribution and conduct a detailed study of the median SED of the selected galaxies. ### The impact of MIRI on redshift measurement Limited by the available JWST observations, most recent redshift measurement works only focus on the NIRCam analysis (e.g., Adams et al. 2023; Bouwens et al. 2023; Endsley et al. 2023). Here we test how and if MIRI improves the accuracy of redshift measurements. We use the cigale code again to determine the redshift with and without MIRI data. The parameters in the fit are the same as before. The results show that the redshifts are nearly consistent, as shown in Figure 7. These two methods have photometric redshift solutions within 15 per cent of that with MIRI. In addition, we find cigale fitting with MIRI data decreases the uncertainty of redshifts (\(\sigma_{\textrm{MIR1}}-\sigma_{\textrm{noMIR1}}\))/\(\sigma_{\textrm{MIR1}}\) by 50%. In Figure 7, there are three objects that stand out as outliers with a difference greater than \(\Delta z>2\). When using MIRI to measure photometric redshifts, these objects are at high redshifts \(z_{phot}>2.5\), whereas without MIRI, the derived redshift is \(z<1.0\). 
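The with/without-MIRI comparison just described reduces to a few per-object quantities: the relative redshift difference, the catastrophic \(\Delta z>2\) outliers, and the fractional change in the fitted redshift uncertainty. A minimal sketch, with all names our own, is:

```python
# Sketch of the photometric-redshift comparison between fits with and without MIRI.
import numpy as np

def compare_with_without_miri(z_miri, z_nomiri, sig_miri, sig_nomiri):
    dz = np.asarray(z_miri) - np.asarray(z_nomiri)
    rel_diff = np.abs(dz) / (1.0 + np.asarray(z_miri))       # relative difference in (1+z)
    outliers = np.where(np.abs(dz) > 2.0)[0]                  # catastrophic redshift changes
    dsigma = (np.asarray(sig_miri) - np.asarray(sig_nomiri)) / np.asarray(sig_miri)
    return rel_diff, outliers, np.median(dsigma)               # median fractional error change
```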
The identification of good photometric redshifts relies on either the Lyman break or Balmer break. While fitting without MIRI data, the photo-\(z\) code fits the gap between HST/ACS F435W and F606W as the Balmer break, thereby identifying them as being low redshift. However, fitting with MIRI data could change the measurement of redshift in two aspects. Firstly, MIRI data could improve constraints on the dust emission/attenuation at the redwards wavelength. Secondly, another factor to consider is the impact of nebular emission lines, including the PAH feature, on the flux in certain bands. This can potentially cause significant changes in the photometric redshift solutions. In such cases, the code fits the observed NIRCam/F200W excess as a Balmer break, resulting in a high-\(z\) solution. Although there are currently 17 multiband data points available in this field that effectively and accurately distinguish between high-\(z\) and low-\(z\) targets, it is evident that relying solely on photometry still creates significant uncertainties. Currently, 85 (50%) of our galaxies have spectroscopic redshift information available. In Figure 8, we present a comparison between the spectroscopic redshift and the photometric redshift with and without MIRI. The spectroscopic redshifts are measured by Suburu, VLT/MUSE, JWST/NIRISS and JWST/NIRSpec(Carnall et al. 2023; Caminha et al. 2022; Noirot et al. 2023). The photometric redshift data are almost all located within 15% of the spectroscopic redshift. It can be seen that the photometric redshift is quite reliable to a certain extent, even when utilising only HST and NIRCam data. This is due to the fact that the Lyman break/Balmer break is the basis for the photometric redshift, Figure 5: The different band images of a subset of the galaxies in log scale. Their IDs are labelled on the left. From left to right, the images are ACS F435W, ACS F606W, ACS F814W, NIRCam F090W, WCS3 F105W, WCS3 F125W, WCS3 F140W, NIRCam F150W, WCS3 F160W, NIRCam F277W, NIRCam F356W, NIRCam F444W, MIRI F770W, MIRI F1500W, MIRI F1800W. The text in blue, green, and red denotes different instruments: HST, NIRCam and MIRI, respectively. The images are \(2^{\prime\prime}\times 2^{\prime\prime}\) and are centred on the galaxy in each bandpass. The black circle is the aperture of \(1^{\prime\prime}\). which relies more heavily on data from the blue end. In contrast, an absence of HST data can cause a significant bias in the photometric redshift. Figure 8 (right) displays the relative difference between the spectral redshifts and photometric redshifts with and without MIRI data. This reveals that median photometric redshift estimates have a scatter of \(0.00^{+0.02}_{-0.04}\) (0.1%) and \(-0.04^{+0.04}_{-0.03}\) (4.0%) from the spectroscopic redshift for fits with and without MIRI data, respectively. The outlier fractions, defined as the fraction of photometric redshift that disagrees with the spectroscopic redshift by more than 15% in \((1+x)\), Figure 6: A subset of MIRI selected galaxies with fits done using cigaleshown are systems which we classify as AGN, high-\(z\) galaxies, dusty star forming galaxies, and quiescent galaxies. The black line represents the best fitting result from the cigale code. The purple points represent the observed fluxes for each band; the red points represent their fitted fluxes. The yellow line represents the star formation contribution; the green line is the fitted emission line template. 
The red and orange lines represent the contributions of AGN and dust, respectively. The lower part of each panel is the relative residual of the fitting. (\(|\Delta z|/(1+\)spec-\(z)\)\(>\)0.15), are 1% and 5%, respectively. Additionally, the results obtained from fitting with MIRI data show a closer alignment with the spectroscopic redshift and reduce the estimated errors on the photometric redshift by \((\sigma_{\rm X}-\sigma_{\rm spec})/\sigma_{\rm spec}\) of 20%. At present, spectroscopic observations are mostly at low redshifts. In the SMACS0723 field, only 10 sources with a redshift greater than 6.5 have been observed by NIRCam, and unfortunately, they have not been covered by MIRI observations. JWST mid-infrared and spectroscopic observations are still lacking at this stage. Upcoming follow-up studies are expected to provide more data, which will help to systematically constrain their redshifts and physical properties. ### Stellar Mass and Star Formation History Here we discuss the comparisons between star formation rate and masses derived when we include MIRI data and we excluded MIRI data as shown in Figure 9. We employ the standard delayed-r'sMRe-layed' star formation history and the bc03 stellar population module (Bruzual & Charlot, 2003), assuming using Chabrier2003 IMF (Chabrier, 2003). We have excluded the galaxies from our analysis, which positioned exceptionally close to the cluster's center. Thus we have not corrected the gravitational amplification for these physical parameters. In the present discussion focusing on the impact of MIRI on data fitting, the gravitational amplification does not have effect on our conclusions. In the Figure 9 left panel, the majority of stellar mass values fall within a 15% error range. Only a few galaxies lie away from the 1:1 line, but have a large error of \(>1\) dex. The range of preferred values for stellar mass and SFR have been narrowed down with the inclusion of MIRI data. The median \(\Delta M_{\star}\) error decreases 0.1 dex. This is a result of improved constraints on the dust emission and attenuation. For the star formation rate, cigale provides several SFR indicators based on different timescales: instantaneous SFR, as well as SFRs averaged over the last 10 Myrs and 100 Myrs. Generally, the SFR averaged over the last 100 Myrs is considered the most reliable indicator of the stable star-formation activity. Here we follow this custom to use the SFR averaged over the last 100 Myrs. We have excluded the quiescent galaxies with a low star formation rate of log sSFR \(<-10\) yr\({}^{-1}\) from this comparison. The SFRs derived with MIRI data are generally slightly lower by \(\sim 0.1\) dex. Papovich et al. (2023) also reported that adding the MIRI data could reduce SFRs for the galaxies with \(\Delta\)SFR of 0.15 dex at \(4<z<6\) and 0.29 dex at \(z>6\), matching our findings. However, for two high-\(z\) objects, the log SFRs fitted with MIRI data increase by more than three times, and the error bars also significantly decrease. This is because they are identified as low-\(z\) objects with a large uncertainty when we exclude MIRI data. In contrast, adding MIRI changes the best-fitting redshifts, so that they are both \(z\sim 3\) objects. In the middle and right panels of Figure 9, we re-run the cigale fitting with fixed redshift values obtained from fitting with MIRI data. This was done to eliminate the influence of redshift on the results. We can see that the results show good agreement. 
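Both the redshift comparisons of Section 4.1 and the stellar-mass and SFR comparisons here reduce to the same bookkeeping: a median offset with its 16th-84th percentile range, plus (for the redshifts) an outlier fraction above a threshold. A minimal sketch of such a summary statistic, with placeholder names, is:

```python
# Sketch of the summary statistics used in Sections 4.1-4.2.
import numpy as np

def offset_stats(delta, outlier_threshold=None):
    """delta: array of differences, e.g. (z_phot - z_spec)/(1 + z_spec),
    or log M*(with MIRI) - log M*(without MIRI)."""
    lo, med, hi = np.percentile(delta, [16, 50, 84])
    out_frac = None
    if outlier_threshold is not None:
        out_frac = float(np.mean(np.abs(delta) > outlier_threshold))
    return med, (med - lo, hi - med), out_frac
```

For the spectroscopic comparison above, taking delta = (z_phot - z_spec)/(1 + z_spec) with outlier_threshold = 0.15 reproduces the quoted scatter and outlier-fraction definitions.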
The results indicate that the impact on the galaxy mass and SFR measurements is primarily a consequence of changing the redshift. This effect can be attributed to the additional information provided by MIRI's mid-infrared observations, allowing for a better constraint on the galaxy's redshift and, consequently, improving the accuracy of its mass and SFR determinations. Figure 10 illustrates a representative example of a single galaxy fit, highlighting the significant impact of including MIRI data. The absence of MIRI data results in a loss of constraints at the red-end of the fit, leading to potential inaccuracies in various physical parameters such as redshift determinations. This emphasizes the crucial role of MIRI data in improving the accuracy and reliability of galaxy characterization and analysis. Generally speaking, including MIRI data gives approximately similar measurements of stellar masses and SFRs to we only using NIRCam and HST. We also find that MIRI reduces the error of the stellar masses and SFRs by \(\sim 0.1\) dex, narrowing down the preferred values of stellar population parameters. In some cases, the large differences are always caused by the redshift uncertainties. ### The impact of MIRI on AGN contribution We measure the contribution of AGN to our sample based on the best-fit frac\({}_{\rm AGN}\) parameter from cigale fitting, and distinguish the galaxies between star-forming galaxies and AGNs (referred to as SFG and AGN, respectively). The dale2014 module provides a basic template from the ultraviolet to the infrared for cigale fitting. The AGN fraction (frac\({}_{\rm AGN}\) ) is defined as the ratio of the AGN luminosity to the sum of the AGN and dust luminosities (Boquien et al., 2019). It is particularly sensitive to data in the red-end at wavelengths of 3 microns to several hundred microns, where the dominant emission is primarily attributed to the AGN. Thus, we are not using a binary approach to determine if a galaxy is all AGN or all'stars', but we are determining from this fitting what fraction of the light emitted arises from AGN. In Figure 11 we conduct a test to investigate the impact of including or excluding MIRI data on the frac\({}_{\rm AGN}\) measurement. Our findings indicate that frac\({}_{\rm AGN}\) has a mean value of 0.10\(\pm\)0.15 in the fit that includes the MIRI data points, which is smaller than the result that does not include MIRI, where we get a fraction of 0.23\(\pm\)0.10. This implies that the MIRI data lower the derived fraction of the AGN and that often the contribution is higher without the use of MIRI. The median frac\({}_{\rm AGN}\) difference between with MIRI and without MIRI is \(-0.14^{+0.11}_{-0.12}\). In Yang et al. (2021), the MIRISIM simulation of CEERS imaging yielded a \(\Delta\) frac\({}_{\rm AGN}=({\rm frac_{AGN,MIR}}-{\rm frac_{AGN,no MIRI}})\) value of \(\sim-0.2\) in Figure 12 Bottom panel, which aligns with our findings of \(-0.14\) Figure 7: Comparison of the photometric redshifts derived by cigale fitting with and without MIRI data. The black dashed line shows the one-to-one relation, which is the ideal 1:1 matching case of photometric vs. spectroscopic redshifts. The dotted lines show 15 percent offsets in \((1+z)\). The colour of the point represents the relative difference between the photometric redshifts of the galaxies with and without MIRI. Figure 8: Left: Diagnostic plot showing the comparison of spectroscopic redshifts with photometric redshifts for fits with and without MIRI data. 
The spectroscopic redshifts are from the observations of Subaru, VLT/MUSE, JWST/NIRISS and JWST/NIRSpec (Carnall et al., 2023; Caminha et al., 2022; Noirot et al., 2023). The black dashed line shows the one-to-one relation; the dotted lines show 15 percent offsets in (1+\(z\)). Right: the histogram of the relative difference between the photometric redshifts from our cigale fits with or without MIRI and the spectroscopic redshift in (1+\(z_{\rm spec}\)). The labelled scatter indicates the median of the relative difference, respectively. The error bars show the range of the 16th-84th percentiles. Figure 9: Comparisons between the derived stellar masses and star formation rates when including and excluding MIRI data. SFR and stellar masses are taken from the cigale SED fitting, as discussed in Section 3.1. The left panel shows the comparisons of stellar masses and SFR respectively, when the redshift is a free parameter. The black dashed line shows the one-to-one relation, while the red line shows the best polyfitting considering the error. In the middle and right panels, the redshifts are fixed to the values obtained from fitting the MIRI data. The right panel shows the difference (\(\Delta=\) X\({}_{\rm MIRI}\) - X\({}_{\rm nonMIR}\)) as a function of redshift. The colours of the points indicate the redshift. The stellar mass and SFR are not corrected for magnification, however this does not impact our results. In addition, the inclusion of MIRI has caused a significant decrease of \(\sim 0.17\) in the error of mean fracAGN, similar to the effect on redshift and other galaxy parameters. However, it becomes challenging to constrain the model in the early Universe, which results in a substantial increase in the error. For instance, at \(z<3\), the hot dust heated by the AGN is well tracked by the MIRI band, with a peak at \(10\mu\)m in the rest frame. On the contrary, at \(z>5\), the key emission from AGN-heated hot dust is shifted beyond MIRI detection ranges. The F1800W band corresponds to the rest frame wavelength of 3 microns, where the contribution of AGN has just started and is still relatively weak. This introduces significant challenges in the pursuit of identifying and investigating AGN beyond a redshift of \(z>3\). We refer readers to see our other paper in this series (Juodzbalis et al., 2023), dedicated to clarifying the complications and strategies entailed in probing AGN at \(z\sim 6\). ### SED analysis constraining AGN and dusty contributions In this section, we analyse the median SEDs of AGN and SFGs using similar redshift ranges and with accurate photometric redshifts. We also investigate the effects of including MIRI data on the median SEDs. Using fracAGN to identify AGN is not a strict criterion, and the value we use is somewhat arbitrary. To ensure the plausibility of our results, we tested different fracAGN values, ranging from 0.05 to 0.5, to calculate the proportion of AGN to the total number of galaxies, aiming to closely approximate the actual observed results. First, we select 151/181 best-fitting galaxies (\(\chi^{2}<6\)) and verify to ensure they exhibit a good fit in the red-end of the SED. As a comparison Chiang et al. (2019) used the Northern Elptic (NEP) wide-area catalogue who identified 6070 active galactic nuclei out of a total of 16464 IR-selected galaxies. 
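The threshold test described above amounts to scanning candidate frac\({}_{\rm AGN}\) cuts and recording the resulting AGN proportion in the robust (\(\chi^{2}<6\)) sample; a minimal sketch, with names assumed, is:

```python
# Sketch of the frac_AGN threshold scan: fraction of the sample classified as AGN
# for each candidate cut.
import numpy as np

def agn_fraction_vs_threshold(frac_agn, thresholds=(0.05, 0.1, 0.3, 0.5)):
    frac_agn = np.asarray(frac_agn)
    return {t: float(np.mean(frac_agn >= t)) for t in thresholds}
```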
Whilst this catalogue of galaxies is quite different from the JWST sample as the redshifts and magnitudes of sources are different, as well as having more bands, it does show how one can use our method to find a reasonable selection for AGN. The fitting for this NEP catalogue used LePhare fitting to find AGN. This NEP catalogue consists of 18 mid-infrared filters, including 9 from AKARI, 4 from WISE, and 5 from Spitzer. Our dataset exhibits a comparable redshift distribution within the range of \(0<z<2.5\), close to that of the NEP sample. Using similar methods as ours, the total proportion of AGN in this NEP catalogue is 36.9\(\pm\)0.5%, similar to what we find, however our systems are much fainter. Figure 12 left illustrates the results, and we find that fracAGN =0.1 is most consistent with the NEP observation statistics. In this case, the proportion of AGN (57/151) is 37.7%. As a result, we can conclude that if an object has a fracAGN value of less than 0.1, we classify it as SFG; otherwise, we classify it as AGN. Using this criterion, we identified 94 SFG and 57 AGN. Figure 12 right shows the photo-\(z\) distributions for different types of objects. We note that slightly altering these empirical classification criteria would not significantly affect our main results. One way see how different the AGN and star forming galaxies are in our sample is to compare their SEDs. When generating the median SEDs, we first exclude the quiescent galaxies, which have no discernible PAH features (qpah\(<1\)) and a lack of ongoing star formation activity, such as ID:6823 shown in Figure 6. Here qpah is the mass fraction of the PAH (Boujieu et al., 2019). Some of these galaxies may correspond to foreground cluster members. Then, we use the photometric redshift obtained from the cigale fit including MIRI to convert the best fitting models to its rest frame wavelength. We perform a linear interpolation for each model, ranging from 0.1 to 20 microns. Next, we use the bootstrap method to conduct 5000 repetitions calculating the median value and its error. Finally, we normalize the models at 3 microns, where the impact of emission lines and PAH can be avoided. We also employ a similar methodology to compute the median SED solely based on photometric data points, thereby mitigating the influence of fitting uncertainties. It is consistent in ensemble with that generated from the models. Figure 13 shows the median SED for both the AGN and SFG objects. The grey lines indicate each individually fitted model. The SEDs are relatively constant at wavelengths below approximately 4\(\mu\)m. But the slope of the SEDs begins to change at longer wavelengths as a result of the presence of dust and AGN. It is evident that AGN and dust greatly contribute to the red wavelengths. At redshifts of \(z=0-3.5\), MIRI F770W corresponds to a rest wavelength of \(2-8\mu\)m, and F1800W corresponds to \(4-18\mu\)m. In this case, the MIRI data is responsible for fitting data larger than 2\(\mu\)m. Note that we do not differentiate between different redshift bins due to a limited number of samples, but all our sample's photometric redshifts are less than 3.5. Thus, the results are not significantly impacted by a very wide redshift distribution and range. We over-lay on these SEDS a moderately luminous AGN - Seyfert 2 galaxy template11 and a star forming galaxy template. The MIRI-selected SFGs exhibit strong dust emission and prominent PAH fea Figure 10: An example for different ways of fitting the SEDs in our sample. 
We show here the cigale SED fitting for Galaxy ID: 1906 with or without MIRI data. The black open circles are observed fluxes in each band; while the red ones are cigale Bayesian best fitted fluxes. The red and blue lines are the best fit SEDs with and without MIRI, respectively. The bottom panel is the relative residual of the observed data points and the fitting results. There are no blue points at long wavelengths as in this situation there is no data here. tures. Their median SED closely resembles that of typical starburst galaxies. The median AGN SED is similar to Seyfert 2 in the ensemble sense, but has lower 6-9\(\mu\)m PAH emission. The 6-9\(\mu\)m emission primarily arises from highly vibrationally excited cations, whereas the 3.3\(\mu\)m, 8.6\(\mu\)m, and 11.3\(\mu\)m originate mostly from neutral PAH molecules (e.g., Allamandola et al., 1989; Li and Draine, 2001; Draine et al., 2021). The varying ratios, such as 6.2\(\mu\)m or 7.7\(\mu\)m / 11.3\(\mu\)m, indicate differences in the PAH ionization fraction (e.g., Galliano et al., 2008; Rigopoulou et al., 2021). AGN SEDs have a slightly lower average at 6.2 and 7.7\(\mu\)m compared to star-forming galaxies. This suggests a lower fraction of ionized PAH molecules in AGN-dominated systems from within our sample. These findings align with a PAH study on Seyfert galaxies and star-forming galaxies using Spitzer/InfraRed spectral data in Garcia-Bernete et al. (2022). They imply that the nuclear molecular gas concentration in AGN centers may play a role in shielding their PAH molecules. We emphasize that our current MIRI data points only rely on broadband photometric data. This approach may omit PAH characteristic lines, leading to inadequate fitting. To address this limitation, MIRI medium-resolution spectrometer (MRS) can provide high-resolution spectra, enabling us to determine PAH characteristic lines and mid-infrared band physical parameters more accurately. ### The impact of MIRI on median SED fitting One of the things we investigate in this subsection is the impact of MIRI data on the overall shape and form of SEDs. What we are interested in examining is how different these SEDs would be with and without MIRI data. Figure 14 shows the median SED and the Figure 11: Left: The inferred AGN fraction(\(\rm{fra}\,\rm{c_{AGN}}\)) as a function of redshift with and without MIRI data. Right: the distribution of the difference \(\rm{fra}\,\rm{c_{AGN}}\) (\(\rm{\Delta frac_{AGN}=frac_{AGN,MIR}-frac_{AGN,no MIRI}}\)) for galaxies with and without MIRI in the fits. The median value for this difference is \(-0.14^{\ast}_{-0.12}\), similar to what is found in the MIRISIM simulation of CEERS imaging Yang et al. (2021) who find a value \(-0.2\). The error bars show the range of the 16th-84th percentiles. Figure 12: Left: The proportion of AGN to the total number of galaxies as a function of redshift. We compare different \(\rm{fra}\,\rm{c_{AGN}}\) values [0.05, 0.1, 0.3, 0.5]. We mark the points with significant uncertainty greater than 1 as open circles. With the NEP observational statistics (although these galaxies are at different redshifts and magnitudes), we conclude that a \(\rm{fra}\,\rm{c_{AGN}}\) value of 0.1 is appropriate. The data shows that 37.7% of the sample consists of AGN in this case. Therefore, we classify objects with \(\rm{fra}\,\rm{c_{AGN}}\) values less than 0.1 as SFG and those above as AGN. Right: The redshift distribution of AGN and SFG is categorized based on this \(\rm{fra}\,\rm{c_{AGN}}\) =0.1 limit. 
difference when fitting the data with and without MIRI. The SED difference is not noticeable at wavelengths less than 4 microns. However, at longer wavelengths, including MIRI data leads to prominent PAH features compared to the case without it (Figure 14 top panel). This is because the absence of MIRI data would make it impossible to constrain the PAH emission line details in mid-infrared bands. But the dust continuum exhibits a similarity between the two cases. The cigale fitting procedure guesses a relatively accurate model of dust emission, which aligns with the actual properties of the galaxies under investigation. Note the quiescent galaxies were excluded from the analysis due to their infrared SED shapes that deviate significantly from those of other galaxies. At the rest wavelength between 4000 A and 1 micron, we find that including MIRI data in the fitting process yields a slightly steeper optical slope, though the effect is less pronounced. We also investigate the SEDs shown in Figure 14 (bottom), when it comes to light which is emitted at wavelengths less than 4000 AWe calculate the rest-frame UV slope (\(\beta\)) by fitting a power-law model of the form \(f_{\lambda}\propto\lambda^{\beta}\) to the UV photometry within the range \(1250\)A\(<\lambda_{\rm rest}<3000\)A using SED fitting (Bouwens et al., 2009; Finkelstein et al., 2012; Calzetti et al., 1994). The best-fitted average UV slope with MIRI data is \(\beta=-1.84\pm 0.01\); whereas it is \(\beta=-1.68\pm 0.01\) without MIRI. This indicates that the MIRI selected galaxies exhibit bluer colours, lower levels of dust attenuation, and younger stellar populations. This finding is also pointed out in Papovich et al. (2023) for the CEERS field. It is important to note that the resolution of MIRI broadband photometry data points may not be sufficient to accurately identify key spectral lines, leading to inaccuracies in the existing median SED. In the future, further research using MIRI/MRS would improve our understanding of SED in the mid-infrared band. ## 5 Conclusions In this eighth article of the EPOCHS series, we collect data from JWST/MIRI to analyse the field SMACS0723, which is the first public release of data from this instrument from JWST. In this study, we focus on the overlapping region between the MIRI, NIRCam and HST observations, covering an area of approximately 2.3 arcmin\({}^{2}\). Within this region, we select 181 sources from a MIRI based catalogue and measure their photometric redshifts. Furthermore, we conduct an extensive investigation of various properties, including star formation activity, stellar mass, and contributions from active galactic nuclei (AGN). Our primary findings include: * We use MIRI, NIRCam, and HST data to determine these galaxies' photometric redshifts of the range of \(z=0-3.5\). Furthermore, Figure 14: Comparison of median SEDs fitted when we include or exclude MIRI data. The SEDs have been normalized to 3500 Å, shown as the dashed line. The bottom panel is a zoom-in view in the range from 1000 to 4000 Å. The lower part of each panel shows the difference between best fit SEDs with or without MIRI data (\(\chi_{\rm MIRI}\)-\(\chi_{\rm no\,MIRI}\)), plotted as the black line. Figure 13: The median SEDs of AGN and SFGs fitting with MIRI using cigale. The gray lines indicate the individual robust fitting models, shifted to the rest-frame. The models are all normalized at 3 microns. The median SED and its error are obtained by sampling 5000 times using the bootstrap method. 
The purple solid line is a Seyfert 2 galaxy template from the SWIRE Template Library\({}^{10}\); the purple dashed line is the star-forming galaxy template. These templates are also normalized at 3 microns. we conduct a detailed analysis of the stellar populations and the star formation and dust properties of each galaxy with and without the use of MIRI data. * We conduct a comparison between the photometric redshifts obtained with and without MIRI data, and cross-check them with existing spectroscopic redshifts. We find the photometric redshifts are in good agreement with the spectroscopic redshifts. Including MIRI data leads to an average 0.1% difference between photometric and spectroscopic redshifts, which is 3% lower than the difference without MIRI data. Additionally, the fitting error is also reduced by 20%. The redshifts of three galaxies vary by as much as \(\Delta z>2\), and there are instances where high-redshift galaxies would incorrectly be placed at low-\(z\) without the use of MIRI data. The photometric redshifts with MIRI are highly consistent with spectroscopic redshifts, showing that the MIRI fits are better. * We compare stellar masses and SFRs measured with and without MIRI data. Including MIRI gives stellar mass measurements consistent with those obtained only from HST and NIRCam, while the SFR is slightly reduced, systematically by 0.1 dex. Moreover, MIRI data also lead to a decrease in both parameter errors by an average of \(\sim\)0.1 dex. * We select the 151 best-fitting galaxies (\(\chi^{2}<6\)) and categorize these using the parameter frac\({}_{\rm AGN}\), where we consider galaxies with a value \(>0.1\) to be AGN. Out of the total sample, 37.7% (57/151) are found to be AGN. We determine the median SEDs for AGN and SFGs, respectively. Our findings suggest that AGN and dust have a great impact on the long-wavelength flux, which is covered by the MIRI bands. Compared with the SED templates, we find the SFGs match the starburst galaxy template very well. We also find that including MIRI data significantly reduces the mean value of frac\({}_{\rm AGN}\) to 0.11\(\pm\)0.15, with its uncertainty also decreased by \(\Delta\mu_{\rm err}=0.17\). * We compare the median SEDs of our sample with and without MIRI data. We find that at wavelengths greater than 4\(\mu\)m, including MIRI data reveals significant PAH features, while the dust continuum remains similar. Including MIRI data yields steeper optical and UV slopes, indicating bluer colours, lower dust attenuation, and younger stellar populations. At present, the MIRI observations remain relatively shallow, with an average depth approximately 3 mags shallower than that of NIRCam in the SMACS0723 field. Extending the depth of MIRI observations in the future will open up a promising avenue to explore the intricacies of these galaxies in detail, and to enable the discovery of fainter and hidden galaxies. Moreover, future research utilising MIRI/MRS will improve our understanding of SEDs in the mid-infrared band and offer a more efficient approach to obtaining redshifts and star formation rates. Combining this with spectroscopic observations, a more detailed and nuanced picture of the galaxies' emission, dust properties, and other significant attributes can be achieved. ## Acknowledgements QL, CC, JT, and NA acknowledge support from the ERC Advanced Investigator Grant EPOCHS (788113). DA and TH acknowledge support from STFC in the form of PhD studentships.
This work is based on observations made with the NASA/ESA _Hubble Space Telescope_ (HST) and NASA/ESA/CSA _James Webb Space Telescope_ (JWST) obtained from the Mikulski Archive for Space Telescopes (MAST) at the _Space Telescope Science Institute_ (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST, and NAS 5-26555 for HST. The observations used in this work are associated with JWST program 2736. The authors thank all involved in the construction and operations of the telescope as well as those who designed and executed these observations. The authors thank Anthony Holloway and Sotirios Sanidas for their providing their expertise in high performance computing and other IT support throughout this work. This work makes use of astropy (Astropy Collaboration et al., 2013, 2018, 2022), matplotlib (Hunter, 2007), reproject, DrizzlePac (Hoffmann et al., 2021), SciPy(Virtanen et al., 2020) and photutils (Bradley et al., 2022).
2306.00250
Viscous damping in weltering motion of trapped hydrodynamic dipolar Fermi gases
We consider collective motion and damping of dipolar Fermi gases in the hydrodynamic regime. We investigate the trajectories of collective oscillations -- here dubbed ``weltering'' motions -- in cross-dimensional rethermalization experiments via Monte Carlo simulations, where we find stark differences from the dilute regime. These observations are interpreted within a semi-empirical theory of viscous hydrodynamics for gases confined to anisotropic harmonic potentials. The derived equations of motion provide a simple effective theory that show favorable agreement with full numerical solutions. To do so, the theory must carefully account for the size and shape of the effective volume within which the gas' behavior is hydrodynamic. Although formulated for dipolar molecules, our theoretical framework retains a flexibility to accommodate arbitrary elastic cross sections.
Reuben R. W. Wang, John L. Bohn
2023-05-31T23:50:15Z
http://arxiv.org/abs/2306.00250v1
# Viscous damping in weltering motion of trapped hydrodynamic dipolar Fermi gases ###### Abstract We consider collective motion and damping of dipolar Fermi gases in the hydrodynamic regime. We investigate the trajectories of collective oscillations - here dubbed "weltering" motions - in cross-dimensional rethermalization experiments via Monte Carlo simulations, where we find stark differences from the dilute regime. These observations are interpreted within a semi-empirical theory of viscous hydrodynamics for gases confined to anisotropic harmonic potentials. The derived equations of motion provide a simple effective theory that show favorable agreement with full numerical solutions. To do so, the theory must carefully account for the size and shape of the effective volume within which the gas' behavior is hydrodynamic. Although formulated for dipolar molecules, our theoretical framework retains a flexibility to accommodate arbitrary elastic cross sections. ## I Introduction Suppression of two-body collisional losses has been crucial for achieving stable samples of molecular quantum gases. Within the last decade, theoretical and experimental advances have brought to fruition the electric field shielding of polar molecules against chemical reaction and complex formation [1; 2; 3; 4; 5; 6; 7; 8; 9; 10], permitting the production of degenerate bulk molecular samples [11; 12]. But even before the onset of quantum degeneracy, these shielded molecules present a long-lived versatile platform for exploring dipolar physics [13; 14; 15]. For instance, dipole-dipole interactions lead to highly anisotropic two-body collision cross sections [16] and observable anisotropy in the collective dynamics of thermal gases [17; 18; 19; 20; 21]. For these nondegenerate bulk gases, thermalization is an essential mechanism with great utility in applications such as evaporative cooling [22; 23; 24; 25; 26; 27; 28; 29] and scattering length measurements [30; 31; 32; 33; 34]. The accuracy and efficacy of both these applications, in turn, rely on a deep understanding of thermalization in such systems. The difference between dilute and hydrodynamic limits is revealed clearly in a gas' response to perturbation. In particular, in a cross-dimensional rethermalization experiment, an initially equilibrated gas is preferentially heated along a particular axis, then allowed to rethermalize back to equilibrium [30]. Thermalization in the dilute regime is closely related to the collision rate [21; 30; 35], while the hydrodynamic regime sees similarly extracted relaxation rates close to the trapping frequency instead [12; 25; 36]. The difference between the two regimes is illustrated in Fig. 1. In both panels, a collection of \({}^{23}\)Na\({}^{40}\)K molecules is subjected to the same harmonic trapping potential \[V(\mathbf{r})=\frac{1}{2}m\sum_{i}\omega_{i}^{2}r_{i}^{2}. \tag{1}\] and subsequently excited along the \(z\) axis. The only difference is the molecule number: for fewer molecules in the upper panel (a), the dynamics is dilute, while for a greater number of molecules in the lower panel (b), it is hydrodynamic. In both cases, the behavior is tracked using time trace plots of the pseudotemperatures \(\mathcal{T}_{i}(t)\), shown in Fig. 1. A pseudotemperature is defined along axis \(i\) as [19] \[k_{B}\mathcal{T}_{i}(t)\equiv\frac{1}{2}m\omega_{i}^{2}\{r_{i}^{2}\}(t)+ \frac{1}{2}m\{v_{i}^{2}\}(t). 
\tag{2}\] Figure 1: Pseudotemperatures (2) obtained from Monte Carlo simulations in the dilute (upper panel, a) and hydrodynamic (lower panel, b) regimes. The gas consists of microwave shielded \({}^{23}\)Na\({}^{40}\)K molecules with dipole moment \(d=0.75\) D, oriented along \(\mathbf{\bar{x}}\), at temperature \(T=700\) nK. The gas is initially excited along \(z\) by an instantaneous trap frequency ramp to \(\omega_{z}=2\pi\times 147\) Hz, while \(\omega_{x}=\omega_{y}=2\pi\times 82.5\) Hz remain constant. The regimes are differentiated by the number of molecules \(N\), which are \(N=10^{4}\) in panel (a), and \(N=2\times 10^{5}\) in panel (b). where \(\{\ldots\}(t)\) denotes the time varying ensemble average over molecular positions \(\mathbf{r}\) and velocities \(\mathbf{v}\), \(m\) is the molecular mass, \(k_{B}\) is Boltzmann's constant. Details of the calculation that produced this figure are provided below. The dilute regime is characterized by collision rates small compared to the trap frequencies. Hence in this case, pseudotemperature in the warm, \(z\) direction gradually diminishes, while that in the other, cooler directions gradually increases, until the gas equilibrates on the time scale shown. The hydrodynamic gas, by contrast, behaves like a somewhat compressible fluid; excitation initially in the \(z\) direction is distributed almost immediately into the other directions, and the resulting dynamics is more like the irregular flow to and fro of this liquid about its stationary center of mass. The fluid expands sometimes in the radial direction, sometimes in the axial direction, with irregularly varying amplitudes, reminiscent of waves on an unquiet ocean. We therefore refer to this form of collective fluid excitation as _weltering_. [37] In the dilute gas case, the primary response of the gas is to come to thermal equilibrium, whereby its dynamics is largely summarized in a single, density-normalized equilibration rate, whose inverse defines the "number of collisions per rethermalization" [30]. For dipolar gases, this quantity can depend on the orientation of the dipoles relative to the excitation axis [16; 21]. Vice versa, the complex dynamics of the hydrodynamic fluid requires a more complete theoretical description. The purpose of this paper is to provide such a description. We will base full dynamics on a Monte Carlo simulation, to further elaborate the difference between dilute and hydrodynamic regimes. Further, we will develop a simplified formulation based on a Gaussian _ansatz_ for the width of a gas, which semi-empirically reproduces the numerics. Key to this model is the realization that the periphery of a harmonically trapped gas is always dilute [38; 39], which necessitates defining an effective volume inside which hydrodynamics is a good idea. We identify the dependence of this volume on the anisotropy of the trap and of the collision cross section among polarized dipoles. Our theory is also presented in a manner that accommodates arbitrary elastic cross sections, opening its applicability to a broader variety of ultracold molecular gas experiments with far from threshold collisions [40]. The remainder of this paper is organized as follows: In Sec. II, we describe the numerical tools adopted to study trapped hydrodynamic gases, and present notable differences from the dilute limit. We then introduce the equations of motion employed to model a nondegenerate hydrodynamic dipolar gas in Sec. IV, with the assumption of threshold scattering. 
A variational ansatz is employed in Sec. IV.1, to derive effective dynamical equations governing weltering oscillations in a harmonic trap. A comparison of our theory to full numerical solutions is presented in Sec. IV.3, from which we purport several considerations about the hydrodynamic extent of gases in traps. Finally, conclusion are drawn in Sec. V, along with possible extensions of this current work. ## II Numerical method A gas is said to be hydrodynamic when the molecular mean-free path is much smaller than the characteristic length over which fluid flow occurs [41]. The ratio of these scales is given by the Knudsen number \(\mathrm{Kn}\). For a harmonically trapped gas with mean density \(\langle n\rangle=\frac{1}{N}\int n^{2}(\mathbf{r})d^{3}r\) and molecules with total cross section \(\sigma_{\mathrm{coll}}\), the mean-free path is given by \(L=(\langle n\rangle\sigma_{\mathrm{coll}})^{-1}\). With a given geometric mean frequency \(\overline{\omega}\) and temperature \(T\), the thermal width of the gas is \(R_{\mathrm{th}}=\sqrt{k_{B}T/m\overline{\omega}^{2}}\). Alternatively, the Knudsen number can also be written as the ratio of mean trapping frequency over the collision rate \(\gamma_{\mathrm{coll}}=\langle n\rangle\sigma_{\mathrm{coll}}\langle v_{ \mathrm{coll}}\rangle\), where \(\langle v_{\mathrm{coll}}\rangle=\sqrt{16k_{B}T/(\pi m)}\) is the mean collision velocity. Explicitly, these relations are summarized as \[\mathrm{Kn}=\frac{L}{R_{\mathrm{th}}}=\frac{4\,\overline{\omega}}{\pi^{ \nicefrac{{1}}{{2}}}\gamma_{\mathrm{coll}}}=\frac{8\pi^{\nicefrac{{3}}{{2}}}k _{B}T}{Nm\overline{\omega}^{2}\sigma_{\mathrm{coll}}}. \tag{3}\] A trapped gas is said to be hydrodynamic if \(\mathrm{Kn}\ll 1\). The relations above provide an approximate mean Knudsen number. In practice, the thermal width can differ in directions with different trap frequencies, while the cross section, for dipolar scattering, can depend on the direction of the collisions axis. Thus the boundary between hydrodynamic and dilute flow can be anisotropic, a topic to be dealt with below. To compute dynamics in either regime, we utilize the direct simulation Monte Carlo (DSMC) method [42] to obtain numerical solutions to the Boltzmann equation. In doing so, these numerical simulations allow for explorations of hydrodynamic phenomena, while later also serving as a benchmark for our semi-empirical theory. The DSMC implementation we adopt for this work follows very closely that described in Refs. [19; 20], which study similar systems but in the dilute regime. Described briefly, the Boltzmann equation is solved by approximating the phase space distribution with a discrete ensemble of \(N\) molecules \[f(\mathbf{r},\mathbf{v})\approx\sum_{k=1}^{N}\delta^{3}(\mathbf{r}-\mathbf{r}_{k})\delta^{3}( \mathbf{v}-\mathbf{v}_{k}). \tag{4}\] Most crucial to an accurate hydrodynamic simulation is that collisions are handled adequately. The DSMC does so by constructing a discrete spatial grid within the simulation volume, binning particles into each grid cell based on their positions, then sampling their collisional interactions from a probability distribution derived from the differential cross section [19]. Choosing a uniform grid that is appropriate for maintaining accuracy and computational efficiency becomes tricky at large collision rates, so we utilize a locally adaptive discretization scheme instead. At every numerical time step, the locally adaptive grid is built in two phases. 
Phase one constructs a master grid, consisting of uniform volume cells that span the simulation volume. The resolution of the grid is then refined in phase two, with an octree algorithm [43]. The octree algorithm further discretizes the simulation volume by recursively subdividing cells into eight octants, terminating when each cell has at most \(N_{\text{cell}}^{\text{max}}\) particles. The parameter \(N_{\text{cell}}^{\text{max}}\) is initialized at the start of the simulation, and we optimize it for stochastic convergence. ## III Numerical results For our numerical experiments, we envision an ultracold gas of microwave shielded \({}^{23}\)Na\({}^{40}\)K molecules with the parameters in Tab. 1. The initial temperature is chosen such that the gas remains nondegenerate with \(T>T_{F}\) [44] for all values of Kn under consideration, and the trap is assumed cylindrically symmetric with \(\omega_{x}=\omega_{y}\equiv\omega_{\perp}\) but \(\omega_{\perp}\neq\omega_{z}\). Key variables of interest to this study will be: a) the number of molecules \(N\), which affects Kn and therefore how hydrodynamic the gas is; b) the trap anisotropy \(\lambda=(\omega_{z}/\omega_{\perp})^{2}\); and c) the dipole orientation \(\hat{\mathbf{\mathcal{E}}}\). For the sake of illustration, collision cross sections are described by the analytical formulas for point dipoles given in Ref. [16], although at sufficiently high temperature, realistic cross sections may differ from these. For convenience, we only allow \(\hat{\mathbf{\mathcal{E}}}\) to tilt within the \(x,z\)-plane, allowing us to define a dipole tilt angle \(\Theta=\cos^{-1}(\hat{\mathbf{\mathcal{E}}}\cdot\hat{\mathbf{z}})\) that parameterizes the collisional anisotropy. The behavior of the fluid after excitation in the \(z\) direction is shown in Fig. 2. This is done in a prolate (cigar) trap with \(\lambda=0.2\), containing \(N=5\times 10^{5}\) molecules, with Knudsen number \(\text{Kn}\approx 0.04\). This figure plots the separated position and momentum space pseudotemperatures \(\mathcal{T}_{r_{i}}(t)=m\omega_{i}^{2}\{r_{i}^{2}\}(t)/k_{B}\) and \(\mathcal{T}_{v_{i}}(t)=m\{v_{i}^{2}\}(t)/k_{B}\), respectively. The position space time trace shows the clear out-of-phase oscillations between the widths in the radial and axial directions, expected for a weltering fluid. The momentum space time trace has oscillations of considerably smaller magnitude than \(\mathcal{T}_{r_{i}}\), and also shows a phase offset in the oscillations amongst the different \(\mathcal{T}_{v_{i}}\) traces. These observations showcase how large collision rates diminish the effect of out-of-equilibrium thermodynamics on the hydrodynamic welter of the gas. The difference between the dilute and hydrodynamic regimes is sharpened by comparing the dependence of the dynamics on the tilt angle \(\Theta\) of the dipoles. To this end, Fig. 3 plots the three components of pseudotemperature \(\mathcal{T}_{i}\) for the dilute (upper row) and hydrodynamic (lower row) gases, at the 3 different dipole tilt angles \(\Theta=0^{\circ},45^{\circ},90^{\circ}\). As anticipated in Fig. 1, the dilute gas responds to the excitation primarily by relaxing back to thermal equilibrium, while the hydrodynamic gas exhibits radial weltering motion, resulting from oscillating fluid flow toward and away from the trap center. In Fig. 3 a second dramatic difference appears.
For the dilute gas, with the dipoles tilted away from the axis of trap symmetry (\(z\)), the rates of warming of the gas in the \(x\) and \(y\) directions differ, as a consequence of the anisotropic scattering cross section [16; 19; 21]. By contrast, the excitations in the \(x\) and \(y\) directions in the hydrodynamic regime are nearly equal. In this regime, relatively rapid collisions scramble memory of the dipole orientation. Note that a slight difference in \(x\) and \(y\) motions occurs, due to a residual anisotropy of the viscosity tensor, described in the next section. Nevertheless, this anisotropy is not a main driving force in the dynamics. It is true, however, that the overall damping rate of the weltering excitations does depend on the dipole tilt angle, as will be elaborated below.
Figure 2: Plots of the \(\mathcal{T}_{r_{i}}\) (upper panel a) and \(\mathcal{T}_{v_{i}}\) (lower panel b) vs time from a cross-dimensional rethermalization experiment, with excitation along \(z\). The gas is hydrodynamic with \(N=5\times 10^{5}\) (\(\text{Kn}\approx 0.04\)), \(\lambda=0.2\) and the parameters in Tab. 1.
## IV Hydrodynamic formulation The Monte Carlo simulation, while accurate, is nevertheless somewhat cumbersome for calculating the response of the gas. For this reason, in the hydrodynamic regime, it is useful to formulate the fluid's motion directly in terms of hydrodynamics. When hydrodynamic, a non-degenerate gas behaves as a thermoviscous fluid [45, 46, 47] with thermal conductivity \(\kappa_{ij}\) and viscosity \(\mu_{ijk\ell}\), which are, in general, coordinate dependent and formulated as rank-2 and rank-4 tensors respectively [48]. The equations of motion of the fluid are [49]: \[\frac{\partial\rho}{\partial t}+\sum_{j}\partial_{j}\left(\rho U_{j}\right)=0, \tag{5a}\] \[\frac{\partial}{\partial t}\left(\rho U_{i}\right)+\sum_{j}\partial_{j}\left(\rho U_{j}U_{i}\right)=-\partial_{i}\left(nk_{B}T\right)-n\partial_{i}V(\mathbf{r})+\sum_{j,k,\ell}\partial_{j}\left(\mu_{ijk\ell}\partial_{\ell}U_{k}\right), \tag{5b}\] \[\frac{\partial}{\partial t}(\rho T)+\sum_{j}\partial_{j}\left(\rho TU_{j}\right)=-\frac{2}{3}\rho T\sum_{i}\partial_{i}U_{i}+\frac{2m}{3k_{B}}\sum_{i,j,k,\ell}(\partial_{j}U_{i})\mu_{ijk\ell}(\partial_{\ell}U_{k})+\frac{2m}{3k_{B}}\sum_{i,j}\partial_{i}\left(\kappa_{ij}\partial_{j}T\right). \tag{5c}\] These equations govern the dynamics of the velocity averaged field variables of mass density, flow velocity and temperature: \[\rho(\mathbf{r},t)=mn(\mathbf{r},t)=\int d^{3}v\,f(\mathbf{r},\mathbf{v},t)\,m, \tag{6a}\] \[\mathbf{U}(\mathbf{r},t)=\frac{1}{n(\mathbf{r},t)}\int d^{3}v\,f(\mathbf{r},\mathbf{v},t)\,\mathbf{v}, \tag{6b}\] \[T(\mathbf{r},t)=\frac{2}{3n(\mathbf{r},t)k_{B}}\int d^{3}v\,f(\mathbf{r},\mathbf{v},t)\,\frac{1}{2}m\mathbf{u}^{2}, \tag{6c}\] where \(f(\mathbf{r},\mathbf{v},t)\) denotes the phase space distribution of the molecules and \(\mathbf{u}(\mathbf{r})=\mathbf{v}-\mathbf{U}(\mathbf{r})\) is the comoving molecular velocity, relative to the frame of fluid flow. It is worth pointing out that the local fluid kinetic temperature is related to the flow velocity via \[\frac{3}{2}n(\mathbf{r},t)k_{B}T(\mathbf{r},t)=\int d^{3}v\,f(\mathbf{r},\mathbf{v},t)\,\frac{1}{2}m\mathbf{v}^{2}-\frac{1}{2}\rho\,\mathbf{U}(\mathbf{r},t)^{2}, \tag{7}\] where the integral term is the local kinetic energy density.
Figure 3: Pseudotemperature time traces \(\mathcal{T}_{x}(t)\) (solid green curves), \(\mathcal{T}_{y}(t)\) (dashed blue curves) and \(\mathcal{T}_{z}(t)\) (dotted red curves) for 3 values of \(\Theta=0^{\circ},45^{\circ},90^{\circ}\), in subplots (a, d), (b, e) and (c, f) respectively. The 2 rows are differentiated by the number of molecules, with the upper row (subplots a, b, c) having \(N=2\times 10^{3}\) (\(\text{Kn}\approx 11.10\)), while the lower row (subplots d, e, f) has \(N=3\times 10^{5}\) (\(\text{Kn}\approx 0.07\)). The experimental parameters are those in Tab. 1 with \(\lambda=0.2\). Note that the simulation times are different between the upper (\(t=0\) to \(0.1\)s) and lower (\(t=0\) to \(0.04\)s) rows.
This relation emphasizes a central difference between dilute and hydrodynamic trapped gases: temperature, in the sense of equilibrium thermodynamics, is well defined throughout the entire dynamical evolution when hydrodynamic, but only upon global equilibration when dilute. Such a distinction identifies time-of-flight imaging, common to ultracold gas experiments, as an indirect form of thermometry for hydrodynamic gases, one that probes an ensemble-averaged sum of both the local fluid temperature and the mechanical energy of flow. In this work, we assume that the transport tensors arise from two-body collisions with elastic differential cross section \(d\sigma/d\Omega\), as derived with the first-order Chapman-Enskog method [50, 51, 52]. We shall later see that only viscosity is relevant to this work, so we omit further details of the thermal conductivity. At this level of approximation, the anisotropic viscosity tensor for arbitrary \(d\sigma/d\Omega\) works out to be density independent, and is given as [52, 53] \[\mathbf{\mu}=-\frac{2}{\beta}\left(\frac{n}{m\beta}\right)^{2}\left(\int d^{3}u\,\mathbf{W}(\mathbf{u})\otimes C[f_{0}\mathbf{W}]\right)^{-1}, \tag{8}\] where \(\beta=(k_{B}T)^{-1}\) is the usual inverse temperature, \[\mathbf{W}=\mathbf{u}\mathbf{u}^{T}-\frac{1}{3}\mathbf{u}^{2}\mathbf{I}, \tag{9}\] is a rank-2 comoving velocity tensor, and \(\mathbf{I}\) is the identity matrix. The collision integrals \[C[f_{0}\mathbf{W}]=\int d^{3}u_{1}\,|\mathbf{u}-\mathbf{u}_{1}|\,f_{0}(\mathbf{u})f_{0}(\mathbf{u}_{1})\int d\Omega^{\prime}\,\frac{d\sigma}{d\Omega^{\prime}}\,\Delta\mathbf{W}, \tag{10}\] with \(\Delta\mathbf{W}=\mathbf{W}^{\prime}+\mathbf{W}_{1}^{\prime}-\mathbf{W}-\mathbf{W}_{1}\) and primes denoting post-collision quantities, are evaluated with the Maxwell-Boltzmann equilibrium phase space distribution function \(f_{0}(\mathbf{u})\) [54]. The symbol \(\otimes\) denotes a dyadic product, which takes two tensors of rank \(N_{1}\) and \(N_{2}\) and forms a tensor of rank \(N_{1}+N_{2}\) (e.g. \(A_{ij}\otimes B_{k\ell}=C_{ijk\ell}\)). Of interest here is the anisotropic cross section resulting from close-to-threshold scattering [55] between ultracold fermionic polar molecules or dipolar atoms [7, 12, 33, 34]. At low enough temperatures, with electric fields that align the dipoles along \(\hat{\mathbf{\mathcal{E}}}\), dipolar scattering is energy independent and permits the viscosity tensor to be computed analytically [53]. It is this analytic viscosity tensor that we use below. ### Viscous damping of a trapped fluid The fluid equations in (5) are highly nonlinear and, in general, require numerical methods to obtain solutions. For our purposes, we instead adopt a variational ansatz approach to solving these partial differential equations [56].
External confinement from a harmonic potential results in the equilibrium (denoted by subscript 0) density distribution following \[\rho_{0}(\mathbf{r})=\frac{mN}{Z}\exp\left(-\frac{V(\mathbf{r})}{k_{B}T_{0}}\right), \tag{11}\] where \(Z=\int d^{3}r\,\mathrm{e}^{-\frac{V(\mathbf{r})}{k_{B}T_{0}}}\) gives the appropriate normalization and \(N\) is the number of molecules. If we then only consider collective oscillations and damping from long wavelength excitations that do not induce center-of-mass sloshing, Eq. (11) motivates a Gaussian variational ansatz for the local density: \[\rho(\mathbf{r},t)=mN\prod_{i=1}^{3}\frac{1}{\sqrt{2\pi\sigma_{i}^{2}(t)}}\exp\left(-\frac{r_{i}^{2}}{2\sigma_{i}^{2}(t)}\right), \tag{12}\] where \(\sigma_{i}(t)\) are the distribution widths along each axis \(i\) that we allow to vary in time (depicted in Fig. 4).
Figure 4: Cartoon of a density slice along axis \(r_{i}\), through the Gaussian _ansatz_ for \(\rho(\mathbf{r},t)\) with time varying widths \(\sigma_{i}(t)\).
Plugging the ansatz of Eq. (12) into the continuity equation (5a) gives \[\sum_{i=1}^{3}\left[\partial_{i}U_{i}(\mathbf{r})-U_{i}(\mathbf{r})\frac{r_{i}}{\sigma_{i}^{2}(t)}+\left(\frac{r_{i}^{2}}{\sigma_{i}^{2}(t)}-1\right)\frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)}\right]=0, \tag{13}\] which admits the velocity field solution \[U_{i}(\mathbf{r})=\left(\frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)}\right)r_{i}. \tag{14}\] Thus, as expected, the fluid flow vanishes at the trap's center for the excitations we consider. These functional forms for \(\rho\) and \(\mathbf{U}\) then render the Navier-Stokes equation (5b) in the form \[\ddot{\sigma}_{i}(t)+\omega_{i}^{2}\sigma_{i}(t)=\frac{k_{B}}{m}\left(\frac{1}{\sigma_{i}(t)}-\frac{\sigma_{i}(t)}{r_{i}}\partial_{i}\right)T(\mathbf{r},t)+\sigma_{i}\sum_{j,k,\ell}\frac{\partial_{j}\mu_{ijk\ell}(T)}{r_{i}\rho(\mathbf{r})}\delta_{k,\ell}\frac{\dot{\sigma}_{k}}{\sigma_{\ell}}, \tag{15}\] which bears no dependence on the thermal conductivity. Since \(\sigma_{i}(t)\) does not depend on spatial coordinates, consistency requires that we take a spatial average to suppress local fluctuations of the temperature field in Eq. (15). This average is taken by multiplying Eq. (15) and the temperature balance equation (5c) by \(n(\mathbf{r},t)\), then integrating over \(d^{3}r\). App. A gives further details of the spatial averaging procedure, which results in \[\ddot{\sigma}_{i}(t)+\omega_{i}^{2}\sigma_{i}(t)+\frac{1}{3\sigma_{i}(t)}\sum_{j}\left[\omega_{j}^{2}\sigma_{j}^{2}(t)+\dot{\sigma}_{j}^{2}(t)\right]-\frac{2k_{B}T_{0}}{m\sigma_{i}(t)}\approx-\frac{2}{5}\frac{\mathcal{V}_{\rm hy}}{Nm}\sum_{j}\frac{\mu_{iijj}(T(t))}{\sigma_{i}(t)}\frac{\dot{\sigma}_{j}(t)}{\sigma_{j}(t)}. \tag{16}\] The relevant viscosity matrix elements can be recast in terms of a unit-free matrix \[M_{ij}(\Theta)\equiv\frac{\mu_{iijj}(T;\Theta)}{\mu_{0}(T)}=\frac{1}{512}\begin{pmatrix}117\cos(4\Theta)+84\cos(2\Theta)+415&-28(3\cos(2\Theta)+11)&-(117\cos(4\Theta)+107)\\ -28(3\cos(2\Theta)+11)&616&28(3\cos(2\Theta)-11)\\ -(117\cos(4\Theta)+107)&28(3\cos(2\Theta)-11)&117\cos(4\Theta)-84\cos(2\Theta)+415\end{pmatrix}, \tag{17}\] as is taken from Ref. [53], where the isotropic unit-full viscosity coefficient is given by [50] \[\mu_{0}(T)=\frac{5}{16a_{d}^{2}}\sqrt{\frac{mk_{B}T}{\pi}}. \tag{18}\] With the parameters in Tab.
1, the isotropic viscosity has a value of \(\mu_{0}\approx 2.5\times 10^{-15}\) Pa\(\cdot\)s, which is around \(10^{10}\) times less than air at room temperature and pressure [57]. The \(M_{ij}(\Theta)\) matrix elements are plotted in Fig. 5, with components coupled to the \(x\) and \(z\) axes showcasing a significant variation with \(\Theta\). We see in Fig. 5 that the magnitude of off-diagonal matrix elements \(M_{13}=M_{xz}\) and \(M_{23}=M_{yz}\) become maximally separated around \(\Theta\approx 45^{\circ}\), explaining the slight separation of \(\mathcal{T}_{x}(t)\) and \(\mathcal{T}_{y}(t)\) in Fig. 3, otherwise negligible when \(\Theta=0^{\circ},90^{\circ}\). Eq. (16) above treats the temperature field appearing in \(\mu_{ijk}(T)\) to be spatially uniform over the region where the gas is hydrodynamic. Such an approximation follows from the form of collective oscillations implied by the density (12) and flow velocity fields (14) in an initially isothermal gas, disallowing a spatial temperature variation on the order of the gas spatial widths [47; 38]. Hence, temperature as appears in the viscosity is simply treated as \(T\approx T(t)\). In doing so, we were required to define an effective hydrodynamic volume \(\mathcal{V}_{\rm hy}=\int d^{3}r\)[58]. Proper identification of this volume, including its dependence on aspect ratio, density, and dipole tilt, is essential to the performance of the model, and is our main undertaking here. We define this volume to be the spheroidal volume bounded by the outer classical turning radius of the trap, multiplied by an empirical factor \(\eta\). The outer turning radius is obtained by equating \(E_{\rm total}=V(R_{\rm HD},\theta,\phi)\), to give (see App. A) \[R_{\rm HD}^{2}(\theta)=\frac{6k_{B}T(t)}{m\omega_{\perp}^{2}}\left[\sin^{2} \theta+\lambda\cos^{2}\theta\right]^{-1}, \tag{19}\] where \(\lambda=(\omega_{z}/\omega_{\perp})^{2}\) quantifies the trapping anisotropy. The effective hydrodynamic volume is then computed as \[\mathcal{V}_{\rm hy}(\lambda,{\rm Kn}) =\frac{\eta(\lambda,{\rm Kn})}{3}\int R_{\rm HD}^{3}(\Omega)d\Omega\] \[=\frac{4\pi}{3}\left(\frac{6k_{B}T(t)}{m\omega_{\perp}^{2}} \right)^{3/2}\frac{\eta(\lambda,{\rm Kn})}{\sqrt{\lambda}}. \tag{20}\] As written, we have assumed that \(\eta\) could depend on the trapping geometry through \(\lambda\) and on the Knudsen number, which in turn, also implicitly depends on \(N\) and the dipole angle \(\Theta\). These dependencies are addressed later in the paper. Such generality allows \(\eta\) to act as a coarse-graining parameter which accounts for all non-hydrodynamic effects excluded from our current theoretical treatment. Additionally, Eq. (18) implies the temperature dependence of viscosity goes as \(\mu_{iijj}(T)\propto\sqrt{T}\), for which we will simply approximate as \(T\approx T_{0}\) for all times [59]. For the relevance of time-of-flight imaging, we point out that the momentum space temperature, which differs from the local temperature of Eq. (6c), can also be obtained from solutions to Eq. (16) via the relation \[k_{B}T_{p}(t) =\frac{1}{3N}\int d^{3}rd^{3}vf(\mathbf{r},\mathbf{v},t)m\mathbf{v}^{2}\] \[=2k_{B}T_{0}-\frac{1}{3}\sum_{i}m\omega_{i}^{2}\sigma_{i}^{2}(t), \tag{21}\] as follows from Eqs. (7), (14) and (11). Figure 5: \(M_{ij}\) matrix elements as a function of \(\Theta\). The diagonal elements are plotted on the left in subplot (a), whereas the negated (multiplied by a minus sign) off-diagonal elements plotted on the right in subplot (b). 
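To make the viscous damping term of Eq. (16) concrete, the following minimal sketch evaluates the unit-free viscosity matrix of Eq. (17) and the isotropic coefficient of Eq. (18). The dipole length follows the definition \(a_{d}=md^{2}/(8\pi\epsilon_{0}\hbar^{2})\) given in App. B; since Tab. 1 is not reproduced here, the molecular mass of \({}^{23}\)Na\({}^{40}\)K is taken as 63 amu and \(d=0.75\) D, \(T=700\) nK are taken from the caption of Fig. 1, all as assumptions for illustration. The printed \(\mu_{0}\) should be close to the \(2.5\times 10^{-15}\) Pa\(\cdot\)s quoted above.

```python
import numpy as np

# Physical constants (SI units)
hbar, kB = 1.054571817e-34, 1.380649e-23
eps0, amu, debye = 8.8541878128e-12, 1.66053906660e-27, 3.33564e-30

# Assumed 23Na40K parameters (d and T from Fig. 1; mass = 63 amu)
m, d, T = 63 * amu, 0.75 * debye, 700e-9

# Dipole length a_d = m d^2 / (8 pi eps0 hbar^2), as defined in App. B
a_d = m * d**2 / (8 * np.pi * eps0 * hbar**2)

def mu0(T):
    """Isotropic viscosity coefficient of Eq. (18)."""
    return 5.0 / (16.0 * a_d**2) * np.sqrt(m * kB * T / np.pi)

def M(Theta):
    """Unit-free viscosity matrix M_ij(Theta) of Eq. (17)."""
    c2, c4 = np.cos(2 * Theta), np.cos(4 * Theta)
    return np.array([
        [117*c4 + 84*c2 + 415, -28*(3*c2 + 11),  -(117*c4 + 107)],
        [-28*(3*c2 + 11),       616,              28*(3*c2 - 11)],
        [-(117*c4 + 107),       28*(3*c2 - 11),   117*c4 - 84*c2 + 415],
    ]) / 512.0

print(f"a_d  ~ {a_d / 5.29177e-11:.0f} Bohr radii")  # ~5000 a0
print(f"mu_0 ~ {mu0(T):.2e} Pa s")                   # ~2.5e-15 Pa s
print(np.round(M(np.radians(45.0)), 3))              # mu_iijj / mu_0 at 45 degrees
```

The viscosity matrix elements entering Eq. (16) then follow as \(\mu_{iijj}(T;\Theta)=\mu_{0}(T)\,M_{ij}(\Theta)\).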
### Linear analysis Some of the following discussions of collective dynamics are made more accessible in the language of normal modes, motivating a linear analysis of Eq. (16). For a gas taken only perturbatively out of equilibrium, we can consider small deviations away from the equilibrium widths by writing \(\sigma_{i}(t)=\sigma_{0,i}+\delta\sigma_{i}(t)\). Then, expanding to first order in \(\delta\sigma_{i}(t)\), Eq. (16) becomes \[\ddot{\delta\sigma}_{i}(t)+2\sum_{j}\Gamma_{ij}\dot{\delta\sigma}_{j}(t)+\sum_{j}O_{ij}\delta\sigma_{j}(t)\approx 0, \tag{22}\] with squared-frequency and damping matrices \[O_{ij}=2\omega_{i}^{2}\delta_{i,j}+\frac{2}{3}\omega_{i}\omega_{j}, \tag{23a}\] \[\Gamma_{ij}=\frac{\mu_{0}\mathcal{V}_{\text{hy}}}{5Nk_{B}T_{0}}\omega_{i}M_{ij}(\Theta)\omega_{j}. \tag{23b}\] The matrices above encode the anisotropies from both the trap and the anisotropic collisions. A factor 2 multiplies \(\mathbf{\Gamma}\) in Eq. (22), as is conventional for damped harmonic oscillators. With \(\mathbf{\Gamma}\) multiplying the first-order time derivative terms \(\dot{\delta\sigma}_{i}\), it is clear that damping of the weltering oscillations results from the trap-frequency-weighted viscosities within the hydrodynamic volume. Diagonalizing the squared-frequency matrix \(\mathbf{O}\) gives the eigenvalues \[\omega_{0}^{2}=2\omega_{\perp}^{2}, \tag{24a}\] \[\omega_{\pm}^{2}=\frac{1}{3}\left(4\lambda+5\pm\sqrt{16\lambda^{2}-32\lambda+25}\right)\omega_{\perp}^{2}, \tag{24b}\] which are exactly those obtained for inviscid Euler flow in Refs. [45; 47], and correspond to the respective eigenmodes (up to arbitrary normalization) \[\mathbf{o}_{0}=\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}, \tag{25a}\] \[\mathbf{o}_{\pm}=\begin{pmatrix}5-4\lambda\pm\sqrt{25+16\lambda(\lambda-2)}\\ 5-4\lambda\pm\sqrt{25+16\lambda(\lambda-2)}\\ 4\sqrt{\lambda}\end{pmatrix}. \tag{25b}\] The eigenmode \(\mathbf{o}_{0}\) is a strictly radial quadrupole mode, while \(\mathbf{o}_{-}\) and \(\mathbf{o}_{+}\) are 3-dimensional quadrupole and breathing modes respectively. Similarly, \(\mathbf{\Gamma}\) results in two nontrivial eigenvalues \(\gamma_{\pm}\), which constitute the principal damping rates of the system. Although it is tempting to assign one of these principal rates as the overall relaxation rate, the eigenmodes associated with each \(\gamma_{\pm}\) are, in general, not the eigenmodes of \(\mathbf{O}\). Consequently, coupling between the eigenmodes of \(\mathbf{\Gamma}\) is inevitable during dynamical evolution, so accurate relaxation trajectories are best obtained from full solutions to Eq. (22). ### The hydrodynamic volume Returning to the main argument, Eq. (16) is expected to be a reasonable representation of the dynamics, provided the shape of the gas remains nearly Gaussian. To employ these equations, we must establish the value of the effective hydrodynamic volume. A first guess at this volume is given in Eq. (20), which leaves a free parameter \(\eta\) that may depend on \(\lambda\) and \(\mathrm{Kn}\). As noted in Sec. IV.1, \(\mathrm{Kn}\) is implicitly dependent on \(N\) and \(\Theta\), which are taken as the relevant independent variables for this study. To extract \(\eta\), we perform multiple DSMC runs while varying \(\lambda\), \(N\) and \(\Theta\), which provides us with time traces of \(T_{p}(t)\), Eq. (21), for each combination of parameter values.
We then fit \(T_{p}(t)\) as computed from our theory (16) to those from the DSMC simulations while floating \(\eta\), such that it minimizes the relative root-mean-squared error \[\varepsilon(\eta)=\sqrt{\sum_{t}\left(\frac{T_{p}^{\text{DSMC}}(t)-T_{p}^{\text{theory}}(t;\eta)}{T_{p}^{\text{DSMC}}(t)}\right)^{2}}. \tag{26}\] In these numerical experiments, we tune the trap anisotropy in a manner that does not affect \(\mathrm{Kn}\), by setting \(\omega_{\perp}=\overline{\omega}/\lambda^{1/6}\) and \(\omega_{z}=\overline{\omega}\lambda^{1/3}\). This construction ensures that both \(\overline{\omega}\), and therefore \(\mathrm{Kn}\), remain independent of \(\lambda\). The dipoles are taken to point along \(\hat{\mathbf{x}}\) for the data shown. Dependence on the dipole orientation will be included below. Results of several such fits are shown in Fig. 6, which compares the \(T_{p}\) time traces for a series of cross-dimensional rethermalization experiments with \(N=5\times 10^{5}\) (\(\mathrm{Kn}\approx 0.04\)) over a range of \(\lambda=0.13\) to \(8.0\), as obtained from DSMC simulations (solid black curves) and our fitted theory (dashed red curves). Noticeably, there is a clear beating of various modes with different frequencies that our theory is able to describe, showing favorable agreement in both the amplitude and phase of the oscillations. A representative comparison plot of \(\mathcal{T}_{r}(t)\) as obtained from DSMC and Eq. (16) is also provided in Fig. 7, with \(N=5\times 10^{5}\) (\(\mathrm{Kn}\approx 0.04\)) and \(\lambda=0.32\). Good agreement is seen in all \(\mathcal{T}_{r_{i}}(t)\) time traces as well. We note that temperature time traces tend to show better agreement with the DSMC ones for excitation along the long axis of a prolate trap, even for larger Knudsen numbers (\(\mathrm{Kn}\approx 0.1\)), so we adopt this excitation geometry for a more focused study. For a given orientation of the dipoles, it may be expected that \(\eta\) depends on both the trap aspect ratio \(\lambda\) and the number of molecules \(N\). Increasing \(N\), _ceteris paribus_, evidently increases the density and hence likely the hydrodynamic volume. As for the aspect ratio, a tentative \(\lambda\) dependence of \(\mathcal{V}_{\text{hy}}\) is already taken into account by (20), whereby the scaling parameter \(\eta\) may depend only weakly on \(\lambda\). This hypothesis is supported by the numerics as shown in Fig. 8, where we find that \(\eta\) is linearly dependent on \(N\), but largely independent of \(\lambda\) for the range of these parameters we explore. Finally, for a given \(\lambda\) and \(N\), it remains to resolve the dependence of \(\eta\) on the dipole orientation \(\hat{\mathbf{\mathcal{E}}}\). In this context, recall that the dilute and hydrodynamic regimes are distinguished by the Knudsen number, which is inversely proportional to the collision cross section, Eq. (3). We saw in Sec. IV.1 that this cross section results in anisotropic viscosities, which work to bring local thermodynamic fluctuations back to equilibrium. Having accounted for this aspect of differential scattering, we posit that \(\eta\) should only depend on the post-collision averaged cross section \(\sigma_{\text{coll}}=\int d\Omega^{\prime}\frac{d\sigma}{d\Omega^{\prime}}\), which still preserves an incoming-collision-angle dependence [16]. We present the following argument. Prolate traps have a weak trapping axis \(z\), along which the gas has a larger thermal width.
As a result, the mean-free path is smaller relative to the sample size along that axis, and the gas is consequently more hydrodynamic there. Collisions that occur with relative momentum directed along the long axis are then most able to keep molecules behaving collectively as hydrodynamic. The bulk total cross section is therefore most simply taken as \[\sigma_{\text{coll}}=a_{d}^{2}\frac{\pi}{3}\big[3+18\cos^{2}\Theta-13\cos^{4}\Theta\big], \tag{27}\] where \(\Theta=\cos^{-1}(\hat{\mathbf{\mathcal{E}}}\cdot\hat{\mathbf{e}}_{\text{hy}})\) and \(\hat{\mathbf{e}}_{\text{hy}}=\hat{\mathbf{z}}\) denotes the most hydrodynamic axis (weakest trap frequency). We indeed find that \(\eta\) follows a \(\Theta\) dependence very similar to that of Eq. (27), when comparing \(\eta\) as obtained from DSMC experiments to a fitting function of the form \((\sigma_{\text{coll}}/\overline{\sigma}_{\text{coll}})\alpha+\beta\) in Fig. 9, where \(\overline{\sigma}_{\text{coll}}=\frac{1}{4\pi}\int\sigma_{\text{coll}}(\hat{\mathbf{e}}_{\text{hy}})d\hat{\mathbf{e}}_{\text{hy}}=32\pi a_{d}^{2}/15\) is the angular averaged total cross section. The observations above motivate the functional form \[\eta\approx a+b\left(\frac{N}{10^{5}}\right)\left[1+c\left(\frac{\sigma_{\text{coll}}}{\overline{\sigma}_{\text{coll}}}\right)\right], \tag{28}\] for some constants \(a,b\) and \(c\), which we determine from fits to be \(a\approx 2.21\pm 0.017\), \(b\approx 0.67\pm 0.020\) and \(c\approx 0.26\pm 0.015\). See App. C for further details.
Figure 6: Comparison of the momentum space temperature \(T_{p}\) (21) vs time \(t\), obtained from DSMC simulations (black solid curves) and our theory (red dashed curves) with \(N=5\times 10^{5}\) (\(\text{Kn}\approx 0.04\)) and the parameters in Tab. 1. The subplots (a) to (h) correspond to various values of trapping anisotropy with \(\lambda=0.13\) to \(8.0\) as labeled in the subplot headers. The fitted values of \(\eta\) are also provided in the subplot headers with their fitting standard uncertainties.
## V Discussions and Conclusions A trapped gas transitions to one that is hydrodynamic when the molecular mean-free path is far exceeded by the extent of its thermal cloud. Collisional thermalization is then a local and rapid process, for which the collective dynamics becomes akin to that of a fluid. In this work, we have studied the damping and oscillations of hydrodynamic welter in harmonically confined dipolar gases, with cross-dimensional rethermalization experiments. Unlike its dilute counterpart, a hydrodynamic dipolar gas has its distribution width (second moment) dynamics closely follow the symmetries imposed by the confining potential. This adherence to the extrinsic trap symmetry arises from a high frequency of collisions, which suppresses the intrinsic dipolar properties from manifesting on macroscopic scales. But since local thermal equilibration is not truly instantaneous, dipolar collisions still result in anisotropic viscous shearing between fluid layers, damping the macroscopic fluid welter. We have constructed a model to describe such damped weltering dynamics, presented in Eq. (16). Embedded in this model is a semi-empirical quantity \(\mathcal{V}_{\rm hy}\), which quantifies the hydrodynamic extent of the trapped gas and its consequences for damping. Through use of numerical experiments, we obtain a functional form for \(\mathcal{V}_{\rm hy}\) via Eqs. (20) and (28), expected to work in the range of \(\lambda\), \(N\) and \(\Theta\) explored here. Larger Knudsen numbers and trap anisotropies will increase the dilute fraction, requiring more nuanced treatments of the non-hydrodynamic regions. Moreover, the approximation of threshold dipolar scattering made in Sec. IV may not be adequate in hydrodynamic samples of polar molecular gases. Threshold scattering requires that the collision energies be sufficiently low relative to the dipole energy [60], while collision rates must remain high enough for the gas to stay hydrodynamic, as detailed in App. B. This raises issues for Bose gases within the presented formalism, since lowering the temperature to achieve threshold scattering would result in a significant condensate fraction. On the other hand, Fermi gases below \(T_{F}\) still have collective excitations well described by classical kinetic theories, if Pauli blocking effects are included [46].
Lastly, dipolar mean-field effects have been ignored, thermal energies being much larger than the average dipolar mean-field energy per particle [14]. All these considerations, albeit important to current molecular ultracold experiments, are not within the current scope of this work and will be considered in future investigations. ###### Acknowledgements. The authors would like to thank X. Y. Luo, A. Schindewolf and X. Y. Chen for insightful and motivating discussions on ultracold molecular trapped gases in the hydrodynamic regime. This work is supported by the National Science Foundation under Grant Number PHY2110327. Figure 7: Comparison of the position space pseudotemperatures \(\mathcal{T}_{r}\) vs time \(t\), obtained from DSMC simulations (upper subplot a) and our theory (lower subplot b) with the parameters in Tab. 1, \(N=5\times 10^{5}\) (\(\text{Kn}\approx 0.04\)) and \(\lambda=0.32\). Figure 8: Plot of \(\eta\) vs \(N\) for various values of \(\lambda=0.13,0.20,0.32,0.50\), all of which are prolate (cigar) geometries. Also plotted is a linear function ansatz in Eq. (28) (gray dashed line), for comparison with data from DSMC simulations (blue data). Error bars on the DSMC data points denote standard fit uncertainties. ## Appendix A Averaging out spatial coordinates To obtain the spatially averaged equations of motion in Sec. IV.1, we start by defining a notation for spatially averaged quantities: \[\langle\ldots\rangle=\frac{1}{N}\int n(\mathbf{r},t)\left(\ldots\right)d^{3}r. \tag{10}\] This renders the density averaged equation for \(\sigma_{i}(t)\) as \[\frac{\langle r_{i}^{2}T\rangle}{\sigma_{i}^{2}(t)}-\langle r_{i} \partial_{i}T\rangle =\frac{m}{k_{B}}\left(\frac{\ddot{\sigma}_{i}(t)}{\sigma_{i}(t)}+ \omega_{i}^{2}\right)\langle r_{i}^{2}\rangle\] \[\quad-\sum_{j,k,\ell}\frac{\dot{\sigma}_{k}}{\sigma_{\ell}}\delta _{k,\ell}\int\frac{d^{3}r}{Nk_{B}}r_{i}\partial_{j}\mu_{ijk\ell}(T)\] \[=\frac{m}{k_{B}}\left(\frac{\ddot{\sigma}_{i}(t)}{\sigma_{i}(t)}+ \omega_{i}^{2}\right)\sigma_{i}^{2}(t) \tag{11}\] \[\quad-\sum_{j,k,\ell}\frac{\dot{\sigma}_{k}}{\sigma_{\ell}}\int \frac{d^{3}r}{Nk_{B}}r_{i}\partial_{j}\mu_{ijk\ell}(T)\delta_{k,\ell}.\] As for the temperature balance equation: \[\frac{\partial T(\mathbf{r},t)}{\partial t}+\sum_{i}U_{i}\partial_{i} T(\mathbf{r},t)+\frac{2}{3}\sum_{i}\partial_{i}U_{i}T(\mathbf{r},t)\] \[=\frac{2}{3n(\mathbf{r},t)k_{B}}\sum_{i,j,k,\ell}(\partial_{j}U_{i})( \partial_{\ell}U_{k})\mu_{ijk\ell}(T)\] \[\quad+\frac{2}{3n(\mathbf{r},t)k_{B}}\sum_{i,j}\partial_{i}\left[ \kappa_{ij}\partial_{j}T(\mathbf{r},t)\right], \tag{12}\] we first note the relation \[\frac{d\langle T\rangle}{dt} =\int\frac{d^{3}r}{N}\left[n(\mathbf{r},t)\frac{\partial T(\mathbf{r},t) }{\partial t}+T(\mathbf{r},t)\frac{\partial n(\mathbf{r},t)}{\partial t}\right]\] \[=\left\langle\frac{\partial T}{\partial t}\right\rangle+\sum_{i} \frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)}\left(\frac{\langle r_{i}^{2}T\rangle }{\sigma_{i}^{2}(t)}-\langle T\rangle\right), \tag{13}\] where we utilized the continuity equation. 
Then multiplying the temperature balance equation by \(n(\mathbf{r},t)/N\) and integrating over \(d^{3}r\) gives \[\frac{d\langle T\rangle}{dt} +\frac{5}{3}\sum_{i}\frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)} \langle T\rangle-\sum_{i}\frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)}\left(\frac {\langle r_{i}^{2}T\rangle}{\sigma_{i}^{2}(t)}-\langle r_{i}\partial_{i}T \rangle\right)\] \[=\frac{2}{3Nk_{B}}\sum_{i,j,k,\ell}\frac{\dot{\sigma}_{i}(t)}{ \sigma_{i}(t)}\delta_{i,j}\left(\int d^{3}r\mu_{ijk\ell}\right)\delta_{k,\ell} \frac{\dot{\sigma}_{\ell}(t)}{\sigma_{\ell}(t)}\] \[\quad\quad\quad+\frac{2}{3Nk_{B}}\sum_{i,j}\int d^{3}r\left[ \partial_{i}(\kappa_{ij}\partial_{j}T)\right]. \tag{14}\] Combining equations (11) and (14), we get \[\frac{d\langle T\rangle}{dt} +\frac{5}{3}\sum_{i}\frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)} \langle T\rangle\] \[-\frac{m}{k_{B}}\sum_{i}\dot{\sigma}_{i}(t)\left[\ddot{\sigma}_ {i}(t)+\omega_{i}^{2}\sigma_{i}(t)\right]\] \[\approx\frac{2}{3Nk_{B}}\sum_{i,j,k,\ell}\frac{\dot{\sigma}_{i}(t) }{\sigma_{i}(t)}\delta_{i,j}\left(\int d^{3}r\mu_{ijk\ell}\right)\delta_{k,\ell }\frac{\dot{\sigma}_{\ell}(t)}{\sigma_{\ell}(t)}\] \[\quad-\frac{1}{Nk_{B}}\sum_{i,j,k,\ell}\frac{\dot{\sigma}_{i}(t)}{ \sigma_{i}(t)}\left(\int d^{3}rr_{i}\partial_{j}\mu_{ijk\ell}\right)\delta_{k, \ell}\frac{\dot{\sigma}_{k}}{\sigma_{\ell}}\] \[\quad+\frac{2}{3Nk_{B}}\sum_{i,j}\int d^{3}r\left[\partial_{i}( \kappa_{ij}\partial_{j}T)\right]. \tag{15}\] At this point, conservation of energy has that \[E_{\rm total} =\frac{m}{2}\sum_{i}\left(\omega_{i}^{2}\langle r_{i}^{2}\rangle+ \int\frac{d^{3}rd^{3}v}{N}f(\mathbf{r},\mathbf{v},t)v_{i}^{2}\right)\] \[=\frac{m}{2}\sum_{i}\left(\omega_{i}^{2}\sigma_{i}^{2}+\int\frac {d^{3}rd^{3}v}{N}f(\mathbf{r},\mathbf{v},t)v_{i}^{2}\right), \tag{16}\] where \(E_{\rm total}\) is the total energy of the hydrodynamic system. Therefore, the relation above along with Eqs. (7) and (14) motivates the form for \(\langle T\rangle\) as \[\langle T\rangle=\frac{2E_{\rm total}}{3k_{B}}-\frac{m}{3k_{B}}\sum_{i}\left[ \omega_{i}^{2}\sigma_{i}^{2}(t)+\dot{\sigma}_{i}^{2}(t)\right], \tag{17}\] and its time-derivative \[\frac{d\langle T\rangle}{dt}=-\frac{2m}{3k_{B}}\sum_{i}\left[\omega_{i}^{2} \dot{\sigma}_{i}(t)\sigma_{i}(t)+\ddot{\sigma}_{i}(t)\dot{\sigma}_{i}(t)\right]. \tag{18}\] Figure 9: Plot of \(\eta\) vs \(\Theta\) from a cross-dimensional rethermalization experiment. The data points (points with error bars) are obtained from DSMC simulations, which is compared to the fitting function (dashed curves) in Eq. (28). The data is obtained with the parameters in Tab. 1 and \(\lambda=0.2\), for 3 values of \(N=4\times 10^{5}\) (black data, \({\rm Kn}\approx 0.06\)), \(N=3\times 10^{5}\) (gray data, \({\rm Kn}\approx 0.07\)) and \(N=2\times 10^{5}\) (light gray data, \({\rm Kn}\approx 0.11\)) Plugging these relations into Eq. 
(10) and assuming each axis can be solved independently, we obtain \[\dot{\sigma}_{i}(t)\left[\ddot{\sigma}_{i}(t)+\omega_{i}^{2}\sigma_{i }(t)\right]\] \[+\frac{\dot{\sigma}_{i}(t)}{\sigma_{i}(t)}\left[\frac{1}{3}\sum_{j }\left(\omega_{j}^{2}\sigma_{j}^{2}(t)+\dot{\sigma}_{j}^{2}(t)\right)-\frac{2E _{\text{total}}}{3m}\right]\] \[\approx\frac{3}{5Nm}\sum_{j,k,\ell}\frac{\dot{\sigma}_{i}(t)}{ \sigma_{i}(t)}\left(\int d^{3}rr_{i}\partial_{j}\mu_{ijk\ell}\right)\delta_{k, \ell}\frac{\dot{\sigma}_{k}}{\sigma_{\ell}}\] \[\quad-\frac{2}{5Nm}\sum_{j,k,\ell}\frac{\dot{\sigma}_{i}(t)}{ \sigma_{i}(t)}\delta_{i,j}\left(\int d^{3}r\mu_{ijk\ell}\right)\delta_{k,\ell }\frac{\dot{\sigma}_{\ell}(t)}{\sigma_{\ell}(t)}\] \[\quad-\frac{2}{5Nm}\sum_{j}\int d^{3}r\left[\partial_{i}(\kappa_ {ij}\partial_{j}T)\right]. \tag{12}\] Finally, the conserved total energy \(E_{\text{total}}\), is made up of the potential energy and thermal equilibrium temperature \(T_{0}\): \[E_{\text{total}}=\frac{3}{2}k_{B}T_{0}+\frac{m}{2}\sum_{i}\omega_{i}^{2}\sigma _{0,i}^{2}=3k_{B}T_{0}, \tag{13}\] where we utilized that \(\sigma_{0,i}=\sqrt{k_{B}T_{0}/m\omega_{i}^{2}}\). ## Appendix B Considerations for threshold scattering The analytic results obtained for the viscosities in Sec. IV.1 are applicable for close to threshold dipolar scattering, which is energy independent [16]. However this assumption is only appropriate when the collision energy is much smaller than the characteristic dipole energy \(E_{\text{dd}}=16\pi^{2}e_{0}^{2}\hbar^{6}/m^{3}d^{4}\), where \(d\) is the electric dipole moment [60]. At the same time, the transport coefficients are derived with classical kinetic theory that assumes a nondegenerate sample. Implicit in this formulation is, therefore, that the gas temperature remains well above the Fermi temperature \(T_{F}=\hbar\overline{\omega}(6N)^{1/3}/k_{B}\)[44]. The applicability of our current theory requires that temperature lies in the range \(T_{F}<T\ll E_{\text{dd}}/k_{B}\). Furthermore, the derivation above relies on the gas being hydrodynamic, as is characterized by the Knudsen number Kn. The requirements to remain in the regime of validity as formulated in Sec. IV.1 are summarized as \[\frac{\hbar^{2}}{4ma_{d}^{2}} \gg k_{B}T>\hbar\overline{\omega}(6N)^{1/3}, \tag{14a}\] \[N \gg\frac{15\sqrt{\pi}}{4}\frac{k_{B}T}{m\overline{\omega}^{2}a_{ d}^{2}}, \tag{14b}\] which is only ever possible if \(a_{d}/a_{\text{HO}}\ll 0.04\), where \(a_{d}=md^{2}/(8\pi\epsilon_{0}\hbar^{2})\) is the dipole length and \(a_{\text{HO}}=\sqrt{\hbar/(m\overline{\omega})}\) is the harmonic oscillator length. In heteronuclear alkali dimers, these microwave shielded molecules with \(d\sim 1\) D and \(m\sim 50\) amu have dipole lengths on the order of \(a_{d}\sim 5000a_{0}\) to \(10,000a_{0}\), in units of Bohr radius \(a_{0}\). The necessary trap frequencies to permit threshold scattering above \(T_{F}\) would thus need to be of order \(\omega\ll 10\) Hz, which is very weak compared to typical ultracold experiments. For the parameters in Tab. 1, we find that \(k_{B}T/E_{\text{dd}}\approx 28\), implying a more accurate cross section would be that obtained from the semi-classical Eikonal approximation [61; 62; 60]. We opt to proceed with the effective cross section obtained with threshold energy scattering as it still serves to illustrates the effectiveness of our theory, as formulated for arbitrary cross sections. ## Appendix C A simple functional form for the hydrodynamic volume From Fig. 
8, we saw that \(\eta\) is mostly independent of \(\lambda\), which leaves us with \(\eta=\eta(N,\Theta)\). Then, assuming that \(\eta\) is separable in its two arguments, we can write \(\eta(N,\Theta)=\eta_{N}(N)\eta_{\Theta}(\Theta)\). Within the range of \(N\) we explore, we can Taylor expand \(\eta_{N}\) around a number of molecules \(N_{0}\) that is sure to be hydrodynamic, so that \[\eta(N,\Theta)\approx\left(\eta_{N}(N_{0})+(N-N_{0})\left.\frac{\partial\eta_{N}}{\partial N}\right|_{N_{0}}\right)\eta_{\Theta}(\Theta). \tag{15}\] Also assuming that the dependence of \(\eta_{\Theta}\) on \(\Theta\) arises purely through \(\sigma_{\text{coll}}(\Theta)\) (i.e., \(\eta_{\Theta}=\eta_{\Theta}(\sigma_{\text{coll}})\)), we treat \(\xi=\sigma_{\text{coll}}/\overline{\sigma}_{\text{coll}}\) as a small parameter and Taylor expand \(\eta_{\Theta}\) to give \[\eta(N,\Theta)\approx a+b\left(\frac{N}{10^{5}}\right)\left[1+c\left(\frac{\sigma_{\text{coll}}(\Theta)}{\overline{\sigma}_{\text{coll}}}\right)\right], \tag{16}\] as in Eq. (28), where \[a=\eta_{\Theta}(0)\left(\eta_{N}(N_{0})-N_{0}\left.\frac{\partial\eta_{N}}{\partial N}\right|_{N_{0}}\right), \tag{17a}\] \[b=10^{5}\times\eta_{\Theta}(0)\left.\frac{\partial\eta_{N}}{\partial N}\right|_{N_{0}}, \tag{17b}\] \[c=\frac{1}{\eta_{\Theta}(0)}\left.\frac{\partial\eta_{\Theta}}{\partial\xi}\right|_{\xi=0}, \tag{17c}\] having used the notation \(\eta_{\Theta}(0)=\eta_{\Theta}(\xi=0)\).
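For completeness, a minimal sketch of how the semi-empirical functional form above can be evaluated numerically is given below, assuming the same \({}^{23}\)Na\({}^{40}\)K parameters used earlier (mass 63 amu, \(d=0.75\) D, \(T=700\) nK, all assumptions taken from the figure captions rather than Tab. 1) together with the fitted constants quoted with Eq. (28); the trap values \(\omega_{\perp}=2\pi\times 82.5\) Hz and \(\lambda=0.2\) correspond to the prolate configuration discussed in the text.

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
eps0, amu, debye = 8.8541878128e-12, 1.66053906660e-27, 3.33564e-30
m, d, T = 63 * amu, 0.75 * debye, 700e-9         # assumed 23Na40K parameters
a_d = m * d**2 / (8 * np.pi * eps0 * hbar**2)     # dipole length (App. B)

def sigma_coll(Theta):
    """Bulk total cross section of Eq. (27)."""
    c = np.cos(Theta)
    return a_d**2 * np.pi / 3 * (3 + 18 * c**2 - 13 * c**4)

sigma_bar = 32 * np.pi * a_d**2 / 15              # angular averaged cross section

def eta(N, Theta, a=2.21, b=0.67, c=0.26):
    """Empirical scaling factor of Eq. (28) with the fitted constants."""
    return a + b * (N / 1e5) * (1 + c * sigma_coll(Theta) / sigma_bar)

def V_hy(N, Theta, lam, omega_perp):
    """Effective hydrodynamic volume of Eq. (20), approximating T(t) ~ T0."""
    return (4 * np.pi / 3) * (6 * kB * T / (m * omega_perp**2))**1.5 \
        * eta(N, Theta) / np.sqrt(lam)

lam, omega_perp = 0.2, 2 * np.pi * 82.5
print(eta(3e5, np.radians(90.0)))                    # dipoles along x, N = 3e5
print(V_hy(3e5, np.radians(90.0), lam, omega_perp))  # hydrodynamic volume in m^3
```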
2310.20666
StairNet: Visual Recognition of Stairs for Human-Robot Locomotion
Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to create the StairNet initiative to support the development of new deep learning models for visual sensing and recognition of stairs, with an emphasis on lightweight and efficient neural networks for onboard real-time inference. In this study, we present an overview of the development of our large-scale dataset with over 515,000 manually labeled images, as well as our development of different deep learning models (e.g., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks) and training methods (e.g., supervised learning with temporal data and semi-supervised learning with unlabeled images) using our new dataset. We consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference speeds up to 2.8 ms. We also deployed our models on custom-designed CPU-powered smart glasses. However, limitations in the embedded hardware yielded slower inference speeds of 1.5 seconds, presenting a trade-off between human-centered design and performance. Overall, we showed that StairNet can be an effective platform to develop and study new visual perception systems for human-robot locomotion with applications in exoskeleton and prosthetic leg control.
Andrew Garrett Kurbis, Dmytro Kuzmenko, Bogdan Ivanyuk-Skulskiy, Alex Mihailidis, Brokoslaw Laschowski
2023-10-31T17:30:57Z
http://arxiv.org/abs/2310.20666v1
# StairNet: Visual Recognition of Stairs for Human-Robot Locomotion ###### Abstract Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to create the StairNet initiative to support the development of new deep learning models for visual sensing and recognition of stairs, with an emphasis on lightweight and efficient neural networks for onboard real-time inference. In this study, we present an overview of the development of our large-scale dataset with over 515,000 manually labeled images, as well as our development of different deep learning models (e.g., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks) and training methods (e.g., supervised learning with temporal data and semi-supervised learning with unlabeled images) using our new dataset. We consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference speeds up to 2.8 ms. We also deployed our models on custom-designed CPU-powered smart glasses. However, limitations in the embedded hardware yielded slower inference speeds of 1.5 seconds, presenting a trade-off between human-centered design and performance. Overall, we showed that StairNet can be an effective platform to develop and study new visual perception systems for human-robot locomotion with applications in exoskeleton and prosthetic leg control. computer vision, deep learning, wearable robotics, prosthetics, exoskeletons ## 1 Introduction Robotic leg prostheses and exoskeletons can provide locomotor assistance to individuals affected by impairments due to aging and/or physical disabilities such as stroke [1]. Most control systems for human-robot walking use a hierarchical strategy with high, mid [2], and low [3] level controls. Robotic leg control requires continuous assessment of locomotor states for seamless transitions between different operating modes. Previous high-level controllers relied on mechanical, inertial, and/or electromyographic (EMG) sensors for state estimation, which are generally limited to the current state, analogous to walking blind. Inspired by the human vision system [4, 5], egocentric vision can uniquely detect environmental states prior to physical interaction and thus aid in smooth and accurate transitions. However, the classification of walking terrains such as stairs presents additional challenges because of the complex nature of real-world environments, which can vary significantly in style, material, and geometry. The classification of stairs is particularly important because of the increased risk of severe injury from falls if the environment is misclassified. Previous vision systems have been developed to recognize stairs for robotic leg control using hand-designed feature extractors [6, 7, 8, 9, 10] or automated feature engineering via convolutional neural networks (CNNs) [11, 12, 13, 14, 15, 16, 17, 18, 19]. However, these systems have inherent limitations in terms of performance and generalizability to new environments because of suboptimal hand engineering and/or training on relatively small image datasets. 
Recent studies have significantly expanded the number of labeled images [20] and presented the opportunity to use deep learning models to increase performance and generalizability. The purpose of this study is to provide an overview of our StairNet initiative, which we created to support the development of new deep learning models for visual sensing and perception of stair environments for human-robot walking. The initiative emphasizes lightweight and efficient neural networks for onboard real-time deployment on mobile and embedded devices. We discuss the development of our large-scale dataset with over 515,000 manually labeled images, as well as our development of different deep learning models and training methods using our new dataset. Building on this work, the StairNet initiative can support the development of next-generation environment-adaptive control systems for robotic leg prostheses, exoskeletons, and other assistive technologies for human locomotion. ## 2 StairNet Dataset Our StairNet dataset contains over 515,000 RGB images that we manually annotated using hierarchical class labels for environments encountered during level-ground and stair locomotion. To our knowledge, this dataset is the largest and most diverse dataset of egocentric images of stair environments published to date. We made the dataset open source at [https://ieee-dataport.org/documents/stairnet-computer-vision-dataset-stair-recognition](https://ieee-dataport.org/documents/stairnet-computer-vision-dataset-stair-recognition) to support the research community and allow for direct comparisons between different machine learning models. The dataset includes annotated class labels to reduce class overlap and maximize the theoretical performance for model development. We developed the StairNet dataset using images from ExoNet [20], which were captured using a chest-mounted wearable camera (iPhone XS Max) in indoor and outdoor environments. The images were saved at 5 frames/s with a resolution of 1280x720 pixels with multiple users with varying heights and camera pitch angles. In our initial study, we found that the ExoNet labels contained many overlapping classes, which resulted in limited performance for models trained using these annotations [12]. Therefore, we developed new class definitions to manually re-label the images and to increase the precision of the cut-off points used between the different walking environments. We defined four new classes, including level-ground (LG), level-ground transition to incline stairs (LG-IS), incline stairs (IS), and inclined stairs transition to level-ground (IS-LG). We performed three manual labeling pass-throughs to increase annotation accuracy and precision. We removed images that did not contain either level-ground terrain or incline stairs or had significant camera obstructions. Since our dataset is designed for stair recognition, there is no loss of characteristics related to the intended application by removing these images, as any classifications made outside of these classes are considered out of scope and would require additional models for classification. Our dataset repository also includes information related to the class distribution and definitions. The dataset mainly comprises images of level-ground terrain (86% of samples) and incline stairs (9%), with two minority classes, IS-LG and LG-IS, which contain approximately 2% and 3% of the samples, respectively. This imbalance is important to consider when selecting classification and resampling methods. 
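As one illustration of handling this imbalance, the sketch below oversamples the minority transition classes with tf.data. It is a simplified, equal-weight resampling example rather than the exact pipeline used later in this paper (which instead enforces a minimum image count per class), and the dataset loading and augmentation choices are assumptions.

```python
import tensorflow as tf

CLASS_NAMES = ("LG", "LG-IS", "IS", "IS-LG")  # StairNet classes

def balance(dataset, weights=(0.25, 0.25, 0.25, 0.25), seed=42):
    """Resample an (image, label) dataset so each class is drawn with the
    requested probability; integer labels 0..3 follow CLASS_NAMES order."""
    per_class = [
        dataset.filter(lambda img, lab, c=c: tf.equal(lab, c)).repeat()
        for c in range(len(CLASS_NAMES))
    ]
    return tf.data.experimental.sample_from_datasets(
        per_class, weights=list(weights), seed=seed)

def augment(image, label):
    """Light augmentation for resampled images (an assumption; the exact
    transforms used in the paper are not specified here)."""
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

# Hypothetical usage, assuming `train_ds` yields (image, label) pairs:
# train_ds = balance(train_ds).map(augment).batch(128).prefetch(tf.data.AUTOTUNE)
```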
When using our dataset for model development, we suggest using a video-based train-validation-test split, as outlined in [20]. This method assigns all frames within a video episode (i.e., group of neighboring frames) to one of the dataset splits to prevent data leakage and to provide a better estimation of real-world performance and generalizability [21]. Scripts for this validation approach and data preprocessing can be found on our GitHub. Using the StairNet dataset, we developed and tested a number of different deep learning models and training methods to directly evaluate and compare their advantages and disadvantages on a common platform, as subsequently discussed. ## 3 Deep Learning Models ### Baseline Model The first StairNet model [12], also known as our baseline model, was developed using supervised learning, which predicted each frame independently, as shown in Figure 1. We developed an efficient 2D CNN based on the architecture of MobileNetV2 for image classification, which was designed by Google for mobile and embedded vision applications [22], [23]. MobileNetV2 uses depth-wise separable convolutions with width and resolution multipliers to create a lightweight framework with the trade-off of slightly lower accuracy for significant reductions in computational requirements, which is suitable for onboard real-time inference for robot control. We developed the baseline model using TensorFlow 2.7 [24], starting with the default parameter values from [18], [25], [26]. We used a Google Cloud Tensor Processing Unit (TPU) to help efficiently optimize model parameters. A global average pooling 2D layer and softmax dense prediction layer were added for transfer learning with pretrained weights from ImageNet [27]. Five freeze layer hyperparameters were tested: 141, 100, 50, 25, and 5, with each variation trained for 60 epochs. Five frozen layers with 2.2 million trainable parameters resulted in the highest validation accuracy and lowest validation loss. A grid search found an optimal combination of a batch size of 256 and a learning rate of 0.0001. Using these hyperparameters, pretrained weights were compared with randomly initialized weights. After 60 epochs, both validation accuracy curves plateaued, with the pretrained model outperforming the randomly initialized model with validation accuracies of 98% and 97%, respectively. However, characteristics of overfitting were observed for both models. To address this, additional regularization was implemented via a dropout layer with dropout rates between 0.1 and 0.5, and L2 weight regularization. A dropout rate of 0.2 resulted in reduced overfitting and an increase in validation performance, while additional L2 weight regularization was removed with no impact on model performance. To further reduce overfitting, we oversampled the underrepresented transition classes (IS-LG and LG-IS). Images were randomly resampled and augmented during training with five oversampling values (i.e., 25,000, 40,000, 60,000, 200,000, and 400,000) to control the minimum number of images per class. Our experiment showed that a higher minimum value per class decreased the overall validation accuracy. However, the categorical accuracy for the underrepresented classes increased, creating a more even categorical accuracy distribution across the different walking environments. 
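The baseline configuration described above can be summarized in a short Keras sketch, assuming TensorFlow 2.x: a MobileNetV2 backbone with ImageNet weights, a small number of frozen layers, global average pooling, a dropout rate of 0.2, and a four-class softmax head. The loss function and the exact training schedule (cosine weight decay, oversampling) are not shown and are assumptions here.

```python
import tensorflow as tf

NUM_CLASSES = 4            # LG, LG-IS, IS, IS-LG
IMG_SHAPE = (224, 224, 3)  # StairNet input resolution

def build_baseline(freeze_up_to=5, dropout=0.2):
    """Minimal sketch of the baseline classifier described above."""
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
    for layer in backbone.layers[:freeze_up_to]:
        layer.trainable = False  # freeze the first few layers

    inputs = tf.keras.Input(shape=IMG_SHAPE)
    x = backbone(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(dropout)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
        loss="sparse_categorical_crossentropy",  # assumed loss choice
        metrics=["accuracy"])
    return model

model = build_baseline()
model.summary()
```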
Given that more significant consequences could result from a false negative than a false positive for human-robot locomotion, a minimum value of 400,000 images per class was used to minimize the probability of false negatives, as seen in the increased accuracy in the IS and IS-LG classes, with increases of 0.3% and 2.2%, respectively. The baseline model underwent a final round of hyperparameter optimization for batch size and learning rate in a high epoch run. After multiple iterations, we finalized the model using a reduced base learning rate of 0.00001, a batch size of 128, and a cosine weight decay learning policy. The model included pretrained weights, five frozen layers, 2.3 million parameters, and a minimum categorical image count of 400,000 images. We also added a dropout layer with a dropout rate of 0.2. The final model was trained for 100 epochs with early stopping. The model was evaluated using the train, validation, and test sets of the StairNet dataset described in Section 2. The model achieved 99.3% and 98.5% accuracies on the training and validation sets, respectively. When evaluated on the test set, the model achieved an overall classification accuracy of 98.4%, correctly classifying 35,507 of the 36,085 images. Additionally, the model achieved an F1 score of 98.4%, a weighted precision of 98.5%, and a weighted recall of 98.4%. The model achieved this performance with 2.3 million parameters and 6.1 GFLOPs. The classification accuracy on the test set varied between different environments, with categorical accuracies of 99.0% for LG, 91.7% for LG-IS, 96.9% for IS, and 90.5% for IS-LG. The two transition classes (i.e., LG-IS and IS-LG), comprising only 3.1% and 1.8% of the total number of images, respectively, achieved the lowest categorical accuracies. Our baseline model had failure cases such as incorrectly predicting a transition to incline stairs when level-ground images contained strong horizontal lines in the top section of the image. Images with strong horizontal lines throughout the image (e.g., brick flooring or tiles) also presented difficulties for the model and led to incorrect classification of incline stairs. False negatives were less common but occurred when encountering unique stair characteristics such as unusual materials (e.g., a stair under repair with a wood plank over the base material) or viewing angles (e.g., looking to the left or right while walking upstairs). We used this baseline model as a reference and benchmark for the subsequent models that we developed and studied. ### Mobile Deployment To evaluate the real-world performance of our baseline model, we custom-designed a mobile app using TensorFlow Lite (TFLite) [28] with Swift 5 and Xcode 13.4.1 [29] for on-device inference [13]. The mobile app prepares an image from the camera feed and scales the input resolution using a square crop to match the resolution of our deep learning model input size (i.e., 224x224). The model then runs on-device inference, outputting the tensor results in a float-array format containing the confidence values for the four walking environments for each image. The mobile interface displays the output information with the class predictions, along with the onboard inference speed (ms) for the last image. We use a TFLite interpreter for the on-device computation, which has several advantages over other deployment methods such as cloud computing.
It allows offline execution and inference on edge devices without requiring an internet connection or the need to communicate with a machine learning server. Performing offline inference can significantly reduce power requirements and privacy concerns, particularly in clinical applications, as no data is required to leave the device. TFLite also has a small binary size and supports highly efficient models for low inference times, with minimal impact on accuracy during compression. For mobile deployment, the baseline model was converted from its original h5 format to a TFLite flat buffer format. This conversion allows for onboard processing and inference via the on-device interpreter and built-in TFLite infrastructure (see Figure 2), which supports multiple backend processing options such as central processing units (CPUs), graphics processing units (GPUs), and neural processing units (NPUs). We experimented with five different conversion methods with varying degrees of compression, which can increase inference speed at the expense of accuracy. These compression formats included: 1) Float32 compression, the default format for general TFLite deployment; 2) post-training float16 quantization, which reduces the model size and boosts its performance on hardware with optimized float16 computation; 3) post-training int8 weight quantization, which reduces the model size and improves performance on CPU hardware; 4) post-training quantization with int16 activations to reduce the model size and make it compatible with integer-only accelerators; and 5) post-training int8 full model quantization (i.e., model weights, biases, and activations), which reduces the model size and increases processor compatibility. Each compression format was evaluated using the StairNet test set to determine the effect of model compression on accuracy. We tested the inference speeds of our baseline model on four different mobile devices (i.e., iPhone 8+, iPhone X, iPhone 11, and iPhone 13) with four different backend processing options, including a single-threaded CPU, a multithreaded CPU, GPU, and a combination of CPU, GPU, and NPU. We developed these backend processing options using APIs with access to hardware accelerators. These APIs included the Apple Metal delegate for direct GPU compute and the Apple CoreML delegate, which uses the three iOS processing options to maximize performance while minimizing memory usage and power consumption. An offline test was performed on each device and backend processing option using a pre-recorded video, eliminating variation in camera input on the inference speed test. The pre-recorded video contained stair ascent in indoor and outdoor environments and was loaded to the mobile app to mimic the camera feed. The average inference time was calculated using inference times sampled at 5-second intervals during the video for each experiment. When compressed for mobile deployment, our baseline model had accuracy reductions between 0.001-0.111% compared to the full-sized model. The compressed model formats of float32 and float16 quantization resulted in the highest accuracy post-conversion (98.4%). In contrast, the int8 quantization format with both int8 and int16 activations had the lowest post-conversion accuracies of 98.3% and 98.3%, respectively. The model achieved an inference speed of 2.75 ms on our mobile app using the CoreML delegate and float32 model. 
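The compression variants above correspond to standard TFLite post-training quantization options. A minimal sketch of the conversions using the public converter API is shown below; the calibration generator is a hypothetical stand-in for real training images, and the int16-activation variant is omitted.

```python
import tensorflow as tf

def convert_to_tflite(model, mode="float32", representative_dataset=None):
    """Convert a Keras model to a TFLite flat buffer with one of the
    post-training quantization options compared above."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)

    if mode == "float16":
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_types = [tf.float16]
    elif mode == "int8_weights":
        # Weight-only (dynamic-range) quantization.
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
    elif mode == "int8_full":
        # Full integer quantization of weights, biases, and activations;
        # requires a representative dataset for calibration.
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.representative_dataset = representative_dataset
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.inference_input_type = tf.int8
        converter.inference_output_type = tf.int8
    # mode == "float32" keeps the default (uncompressed) conversion.

    return converter.convert()

def representative_images(num_samples=100):
    # Stand-in calibration generator; in practice, yield preprocessed
    # training images one at a time.
    for _ in range(num_samples):
        yield [tf.random.uniform((1, 224, 224, 3))]
```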
The Core ML and Metal delegates, which use parallel processing of CPU, GPU, and NPU, and direct GPU compute, performed best on newer devices such as the iPhone 11 and iPhone 13. The inference times for these devices were 2.75 ms and 3.58 ms, respectively. In contrast, CPU processing resulted in slower inference times of 9.20 ms and 5.56 ms when using single and multithreaded CPUs. On older devices such as iPhone 8+ and iPhone X, multithreaded CPU achieved faster inference times when compared to single-threaded CPU and GPU processing. When using the CoreML delegate, the float32 compression format delivered the fastest inference speed across all devices. Similarly, the float32 format achieved the fastest inference speeds when running on a GPU with the Metal delegate. For mobile CPU performance, int8 quantization with int16 model activations resulted in the fastest inference time for single and multithreaded processing, with average speeds of up to 9.20 ms and 5.56 ms, respectively. Accordingly, we developed a baseline model using the StairNet dataset and deployed the model on our custom-designed mobile app for stair recognition, achieving high classification accuracy and low latency. However, this research was limited to standard supervised learning and did not take into consideration the temporal nature of human-robot walking, which motivated our subsequent research. ### Temporal Neural Networks To study the effect of sequential inputs on classification performance compared to our baseline model, which used independent frames, we developed several temporal neural networks [30] to exploit information from neighboring frames in the StairNet dataset (see Figure 3). We experimented with different deep learning models, including the new lightweight 3D CNN architecture called MoViNet [31], and a number of hybrid encoder architectures, including VGG-19 [32], EfficientNet-B0 [33], MobileNetV2 [23], MobileViT [34], and ViT-B16 [35], each paired with a temporal long short-term memory (LSTM) backbone [36], and a transformer encoder [37]. We performed focused testing on the 3D MoViNet model, MobileViT with LSTM, and MobileNetV2 with LSTM, which were selected based on their potential to accurately recognize images of stairs and capture temporal dynamics. We first experimented with MoViNet, a modified version of MobileNetV3 designed for videos. We used MoViNet's neural architecture search (NAS) to optimize the model parameters such as the number of layers, convolutional filter width, and number of feature map channels. To reduce the growth of model memory, we implemented a stream buffer to act as a cache applied at the boundaries of the video subsequences. The cache was zero-initialized. To compute the feature map, we applied a temporal operation (i.e., 3D convolution) over the concatenation of the buffer and the subsequence. The buffer was updated in subsequent feature maps by concatenating the current buffer and new sequence using the following formula: \[B_{i+1}=\left(B_{i}\oplus x_{i}^{clip}\right)_{[-b:]}\quad(1)\] where \(B_{i}\) is the buffer, \(x_{i}^{clip}\) is the original input sequence, \(\oplus\) denotes concatenation along the temporal dimension, and \([-b:]\) is the selection of the last \(b\) frames of the concatenated feature sequence. We used a stream buffer to reduce the memory use of the MoViNet model at the expense of a small reduction in accuracy. However, we mitigated this loss in accuracy by using an ensemble of models with two identical MoViNet architectures at half the frame rate.
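A minimal sketch of the stream-buffer update in Eq. (1) follows, assuming tensors shaped (batch, time, height, width, channels) and a plain 3D convolution as the temporal operation; the shapes and layer sizes are illustrative rather than the values used in MoViNet.

```python
import tensorflow as tf

def stream_buffer_step(buffer, clip, temporal_op, b):
    """One stream-buffer update following Eq. (1): concatenate the cached
    buffer with the incoming subsequence along the temporal axis, apply the
    temporal operation, and cache the last b frames for the next step."""
    seq = tf.concat([buffer, clip], axis=1)   # B_i concatenated with x_i^clip
    features = temporal_op(seq)               # temporal operation on the concatenation
    next_buffer = seq[:, -b:]                 # (...)_{[-b:]}
    return features, next_buffer

# Illustrative shapes: (batch, time, height, width, channels).
b = 2
buffer = tf.zeros((1, b, 56, 56, 8))          # zero-initialized cache
clip = tf.random.uniform((1, 4, 56, 56, 8))   # incoming subsequence
conv3d = tf.keras.layers.Conv3D(8, kernel_size=3, padding="same")
features, buffer = stream_buffer_step(buffer, clip, conv3d, b)
```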
During inference, the input sequence was fed to both networks and the mean values of the two models were obtained and passed through the softmax activation function. We also experimented with MobileNetV2 combined with LSTM. Similar to our baseline model, the MobileNetV2 architecture was chosen for its efficient model design, optimized for mobile and embedded devices. MobileNetV2 was applied to each frame of the sequence, resulting in a stack of feature maps, which was then fed into an LSTM layer to capture temporal dynamics. The output of the LSTM layer was a sequence of labels for sequence-to-sequence classification or the last predicted label of the LSTM recurrence operation for sequence-to-one classification. Lastly, we experimented with MobileViT, a hybrid encoder model that combines local information from convolutional layers and global information using MobileViT blocks. The model begins with a strided 3x3 convolution on the input, followed by several MobileNetV2 blocks. The MobileViT blocks in the later layers extract feature maps that are used to encode global information. Standard convolutions are applied to encode the local spatial information, and point-wise convolutions are used to project to a high-dimensional space. These high-dimensional projections were then unfolded into non-overlapping flattened patches and encoded using transformer blocks. The transformer outputs were projected back to the original low-dimensional space and fused with the original feature maps. Similar to MobileNetV2, the MobileViT model was applied to each frame of the sequence. This resulted in a sequence of feature maps, with each map corresponding to one frame. These feature maps were then passed through the transformer layer to capture temporal dynamics of the feature maps of each sequence. In sequence-to-sequence classification, the output of the last transformer block was passed through a linear classification head. In sequence-to-one classification, we flattened the transformer layer output before the classification head. We performed hyperparameter optimization using KerasTuner. The hyperparameter space for each group of models was selected based on the experimental setup and architecture. Once the optimal hyperparameters were determined, each model was trained for 20 epochs using an NVIDIA Tesla V100 32GB GPU. The Adam optimizer [38] was used with a learning rate of 0.0001, along with a cosine annealing learning rate scheduler. We used NetScore [39] to compare the optimized models, which balances the network performance with efficiency and is represented by the following equation: \[\Omega(N)=20\log\frac{acc(N)^{\alpha}}{param(N)^{\beta}\ flops(N)^{\gamma}} \tag{2}\] where \(acc(N)\) is the classification accuracy (%), \(param(N)\) is the number of model parameters, which is indicative of the memory storage requirements, \(flops(N)\) is the number of floating point operations, which is indicative of the computational requirements, and \(\alpha,\beta,\gamma\) are coefficients that control the influence of each parameter on the overall NetScore. We assessed the sequence-to-one models as regular classification models, in which the predicted label of the sequence was compared to the ground truth. Sequence-to-sequence models were evaluated in two ways. The first method was sequence-to-sequence evaluation, in which the predicted sequence of labels was compared to the ground truth of the sequence of labels.
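For reference, a minimal sketch of the hybrid encoder pattern described above (a 2D CNN applied to each frame, followed by an LSTM over the per-frame features) is given below. The sequence length of five frames follows the text, while the LSTM width and other details are illustrative assumptions.

```python
import tensorflow as tf

SEQ_LEN, NUM_CLASSES = 5, 4

def build_hybrid(sequence_to_sequence=True):
    """Hybrid encoder: a 2D CNN applied to each frame, then an LSTM over
    the resulting per-frame feature vectors."""
    frames = tf.keras.Input(shape=(SEQ_LEN, 224, 224, 3))

    cnn = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False,
        weights="imagenet", pooling="avg")

    # Apply the CNN to every frame, producing one feature vector per frame.
    features = tf.keras.layers.TimeDistributed(cnn)(frames)

    # return_sequences=True gives one prediction per frame (sequence-to-sequence);
    # False keeps only the last step (sequence-to-one).
    states = tf.keras.layers.LSTM(128, return_sequences=sequence_to_sequence)(features)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(states)
    return tf.keras.Model(frames, outputs)
```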
The second method was sequence-to-one evaluation using the anchor frame label, which allowed us to compare the performance of the sequence-to-sequence models with the sequence-to-one models. Of the temporal neural networks that we studied, the 3D MoViNet model achieved the highest classification performance on the StairNet test set, with 98.3% accuracy and an F1-score of 98.2%. The hybrid models, which contain a 2D-CNN encoder and temporal blocks (i.e., MobileNetV2 with LSTM and MobileViT with LSTM), struggled to capture inter-frame dependencies with minimal sequences (i.e., five frames per sample) [40] and thus achieved lower classification performance compared to our 3D model. The 3D model had the highest NetScore of 167.4, outperforming the 2D encoder models with scores of 155.0 and 132.1 for MobileViT with LSTM and MobileNetV2 with LSTM, respectively. We calculated a NetScore of 186.8 for our baseline model, outperforming all temporal neural networks that we studied in terms of efficiency due to its relatively low number of parameters and numerical operations. Among the hybrid models, MobileViT with LSTM had slightly lower classification performance compared to MobileNetV2 with LSTM with F1-scores of 96.8% and 97.0%, respectively. However, the hybrid MobileViT model had a much higher NetScore, with disproportionately fewer parameters (3.4 million) and operations (9.8 billion FLOPS) compared to 6.1 million parameters and 54 billion FLOPS for hybrid MobileNetV2. We also showed an increase in performance when applying sequence-to-one evaluation to sequence-to-sequence models compared to the standard sequence-to-sequence evaluation, with accuracies of 97.3% and 70.7%, respectively, for the same sequence-to-sequence model. In summary, of the temporal neural networks that we studied using sequential images for stair recognition, we showed that the 3D model outperformed the 2D models with temporal backbones in terms of classification accuracy and efficiency, which takes into consideration the computational and memory storage requirements. We also showed that the 3D video model achieved a higher classification accuracy (98.3%) compared to our 2D baseline model when retested on the video-based StairNet test set (97.2%). However, the 3D model had a lower NetScore (i.e., less efficient) due to disproportionately more parameters and operations. ### Semi-Supervised Learning Compared to the aforementioned research, all of which relied on standard supervised learning, we wanted to study the use of semi-supervised learning [41] to improve training efficiency by using large amounts of unlabeled data. Our acquisition and manual labeling of hundreds of thousands of images to develop the StairNet dataset in Section 2 was time-consuming, labour-intensive, and a significant bottleneck in the development of our initial deep learning models. Using large amounts of publicly available unlabeled data [20] is a viable option to increase training efficiency. The purpose of this work was to show the potential to improve efficiency by reducing the number of labeled images required for stair recognition while maintaining performance compared to our baseline model. We used unlabeled images from ExoNet that were not included in the StairNet dataset. However, using unlabeled data can present challenges, including lack of information about the class distributions and viability of the images.
We performed a visual search of the images and found that the unlabeled data had limitations similar to those in the StairNet dataset [12, 13], with images containing environments that were not relevant to stair recognition (i.e., outside of the four StairNet classes) or had significant camera obstructions. We used the FixMatch semi-supervised learning algorithm [42] due to its intuitive and feasible implementation compared to more complex algorithms such as self-training with noisy student [43], meta pseudo-labels [44], AdaMatch [45], and contrastive learning for visual representation [46]. Our semi-supervised pipeline consisted of three major steps (Figure 4): 1) labeled and unlabeled raw images were loaded and oversampled from the labeled dataset with augmentations to help mitigate false positives during training; 2) unlabeled image logits were retrieved using a supervised pretrained model to preprocess the unlabeled dataset. The most probable pseudo-labels were then selected if they surpassed the cutoff parameter \(\tau\). Weak augmentations (i.e., horizontal flips) and strong augmentations (i.e., color intensity, saturation, small rotations, and horizontal flips) were applied to the images. The batch size ratio parameter \(\mu\) is the ratio between the unlabeled and labeled batch sizes. During training, the unlabeled data required a larger batch size than the labeled dataset. The labeled and unlabeled batches were used as inputs, inferred using weakly augmented images, and the received logits were then thresholded using a pseudo-label cut-off parameter; 3) the models were trained using a supervised loss (i.e., cross-entropy loss) and unsupervised loss (i.e., cross-entropy loss of the thresholded pseudo-label logits calculated against strongly augmented images). The weight of the unsupervised loss on training was adjusted using the parameter \(\lambda\). These semi-supervised parameters (\(\tau\), \(\lambda\), and \(\mu\)) were tuned to provide a high degree of model flexibility. As previous research has shown that automated feature extractors are superior to handcrafted features, particularly on large-scale image datasets [21], we used convolutional and transformer-based architectures for our model development. We first developed a vision transformer model with the base architecture of MobileViT [34], which uses automated feature engineering similar to standard CNNs. MobileViT, which was also used in Section 3.3, is a transformer-based model that employs mechanisms of attention and depth-wise dilated convolution. The model uses low-level convolution and transformer blocks, allowing for high efficiency and inference speed similar to the lightweight CNN used in our baseline model [12, 13]. We tested three different backbones for MobileViT (i.e., XXS, XS, and S), which varied in terms of the number of transformer layers, feature extraction capacity, and parameter count, allowing for an optimal trade-off between model size and performance. We developed our model using TensorFlow 2.0 and trained it using a high-performance Google Cloud TPU. Using the same StairNet dataset split distribution as our baseline model [12, 13], we reduced the labeled training data from 461,328 to 200,000 images to study the impact of reduced annotations.
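A minimal sketch of the FixMatch-style objective described in steps 2 and 3 is shown below, assuming the model outputs softmax probabilities; `weak_augment` and `strong_augment` are placeholder augmentation functions supplied by the caller, and the default parameter values are illustrative.

```python
import tensorflow as tf

def fixmatch_loss(model, x_labeled, y_labeled, x_unlabeled,
                  weak_augment, strong_augment, tau=0.9, lam=1.0):
    """Supervised cross-entropy on the labeled batch plus pseudo-label
    cross-entropy on confident unlabeled images."""
    cce = tf.keras.losses.CategoricalCrossentropy()

    # Supervised term.
    supervised = cce(y_labeled, model(x_labeled, training=True))

    # Pseudo-labels from weakly augmented unlabeled images.
    probs = model(weak_augment(x_unlabeled), training=False)
    pseudo_labels = tf.argmax(probs, axis=-1)
    confident = tf.cast(tf.reduce_max(probs, axis=-1) >= tau, tf.float32)

    # Unsupervised term: predictions on strongly augmented images are trained
    # to match the pseudo-labels, counted only where confidence exceeds tau.
    strong_probs = model(strong_augment(x_unlabeled), training=True)
    per_example = tf.keras.losses.sparse_categorical_crossentropy(
        pseudo_labels, strong_probs)
    unsupervised = tf.reduce_mean(confident * per_example)

    return supervised + lam * unsupervised
```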
To address the issue of unknown class distribution and image quality of the unlabeled data, we used our supervised baseline model to retrieve the logits of the 4.5 million unlabeled images from ExoNet, which were thresholded using the FixMatch approach. After processing the unlabeled dataset, 1.2 million images surpassed the \(\tau\) = 0.9 cut-off threshold. The resulting subset of images within the threshold had a pseudo-label distribution that closely resembled the original StairNet dataset [12, 13] (i.e., 5.5% for IS, 1% for IS-LG, 90.1% for LG, and 3.4% for LG-IS). The lightest MobileViT XXS model was the fastest to train and infer among the three variants but had low accuracy during training. The balanced MobileViT XS backbone provided the best trade-off between model compactness and performance. The largest MobileViT S with 4.9 million parameters had the slowest training and inference times, while having worse overall performance likely due to overfitting. Our best semi-supervised learning model was MobileViT XS, pretrained on ImageNet. The model was trained using stochastic gradient descent (SGD) with 0.9 momentum, a batch size of 64, Nesterov acceleration, randomly initialized, and with FixMatch parameters of \(\tau\) = 0.98 and a loss weight of \(\lambda\) = 1. During training, the data imbalance of both the labeled and unlabeled data was handled by replacing standard cross-entropy with a focal loss class weight penalization of \(\nu=3\) to penalize hard negatives. We also tested the exponential moving average, which averaged the parameters and produced significantly better results than the final weight matrices. The resulting model showed good convergence, but the overall image validation accuracy was inferior to that of the previous vanilla cross-entropy loss experiments. To reduce the number of false positives, augmentations were implemented on the labeled training set, including minor translations, rotations, contrast, and saturation. Variations were tested in the L2 parameter loss and decoupled weight decay [47]. Our best models did not include weight decay regularization. We experimented with both cosine weight decay, as suggested by FixMatch [42], and cosine decay with restarts [48]. The former was found to be more resilient and consistent and thus was implemented in our final model. Several experiments were conducted to determine the optimal ratio of labeled to unlabeled data (\(\mu\)) and the unsupervised loss weight parameter (\(\lambda\)). The final model was trained using 300,000 labeled images (i.e., approximately 65% of the original training set) and 900,000 unlabeled images. The model had a MobileViT XS backbone and was optimized using SGD with Nesterov. The final set of hyperparameters was a learning rate of 0.045, pseudo-label cut-off \(\tau\) of 0.9, a supervised batch size of 64, a batch size ratio \(\mu\) of 3, an unsupervised loss weight \(\lambda\) of 1.03, and a cosine decay learning rate schedule. To address class balancing, focal loss was replaced with categorical cross-entropy loss. The final model was trained for 42 epochs. Our semi-supervised learning model achieved classification accuracies of 99.2% and 98.9% on the StairNet training and validation sets, respectively. When evaluated on the test set, the model achieved an overall image classification accuracy of 98.8%, a weighted F1-score of 98.9%, a weighted precision value of 98.9%, and a weighted recall value of 98.8%.
Similar to our baseline model, the two transition classes (LG-IS and IS-LG) achieved the lowest categorical accuracies (90.6% and 90.4%), which can be attributed to having the smallest class sizes. Overall, our semi-supervised learning model achieved a similar image classification performance on the StairNet dataset as our baseline model [12], [13] but used 35% fewer labeled images, thereby improving the training efficiency. ### Embedded Deployment Lastly, building on the aforementioned studies, we developed an integrated smart glasses solution to move towards a more human-centred design [49]. One of the limitations of our previous deep learning models was their use of images from a chest-mounted smartphone camera. These images do not necessarily coincide with the user's visual field, making it more difficult to infer intent, and are susceptible to obstructions such as the user's arms. However, previous head-mounted cameras [50]-[52] have mainly been limited to off-device inference using desktop computers and cloud computing. An integrated visual perception system has yet to be designed, prototyped, and evaluated on edge devices with low inference times. This gap could be explained by limitations in embedded computing, which have only recently been alleviated by advances in hardware and deep learning model compression methods. Therefore, the purpose of this work was to develop a novel pair of AI-powered smart glasses that uniquely integrate both sensing and computation for visual perception of human-robot walking environments while achieving high accuracy and low latency. We integrated all of our mechatronic components within a single device, which is lightweight and has a small form factor so as not to obstruct mobility or compromise user comfort. Computationally, it has sufficient memory and processing power for real-time inference with a live video stream. We custom-designed 3D-printed mounts to allow our system to attach and be transferable to a wide range of eyeglass frames. The main mechatronic components are a lightweight camera to sense the walking environment and a microcontroller to process the images. Inspired by commercial smart glasses such as Google Glass [50] and Ray-Ban Stories [51], our design features a forward-facing camera aligned with the user's field of view (i.e., egocentric), with the computational processing on the side of the glasses. This design allows for a slightly larger processor to support onboard inference without obstructing the visual field. We used the ArduCam HM0360 VGA SPI camera due to its relatively high resolution, fast frame rate, and low power consumption (i.e., under 19.6 mW [53]). The low power consumption allows for an "always-on" operating mode for continuous visual detection and assessment of the walking environment. The camera frame rate of 60 fps can support environment-adaptive control of human-robot locomotion. The camera resolution (640 x 480) is larger than the input size of most deep learning models (e.g., MobileNetV2 has a default input of 224x224), while providing enough information to portray the environmental state. We used the Raspberry Pi Pico W microcontroller for the onboard computational processing. This newly developed board offers increased memory and CPU power compared to smaller boards. Its enhanced processing power, large memory, small form factor, and wireless communication make it suitable for our design.
The Pico contains dual ARM processors running at 133 MHz, outperforming microcontrollers of comparable sizes, such as the Arduino Nano 33 BLE with a 64 MHz processing speed. This added processing power allows for increased speed and parallel processing of the video stream and model inference. The Pico also has 264 kB SRAM and 2 MB QSPI flash memory, which is important as deep learning models must be stored on the embedded system to perform on-device inference. The Pico has a small form factor of 21 mm x 51.3 mm, which can be more easily integrated into eyeglass frames. The microcontroller can also wirelessly communicate and interface with external robotic devices and computers via a single-band 2.4 GHz Wi-Fi connection or through Bluetooth 5.2. We developed a deep learning model using a similar approach as our baseline model in Section 3.1. However, fine-tuning was required to convert the model from the chest-mounted domain to an eye-level domain. To do this, the baseline model was retrained using 7,250 images from the Meta Ego4D dataset [54] that we manually annotated, which contained walking environments that matched the StairNet classes (i.e., LG, LG-IS, IS, and IS-LG), with an input size of 96x96. We used the lightweight MobileNetV1 architecture to reduce the model size for embedded deployment compared to larger architectures like MobileNetV2. We performed hyperparameter optimization for batch size and learning rate with optimal values of 32 and 0.0001, respectively. The final model contained 219,300 parameters and was converted to a TensorFlow Lite model using int8 quantization and further reduced to a TensorFlow Lite Micro model for deployment (Figures 5 and 6). We measured the onboard inference time as the loop of loading the most recent image captured and running the model inference directly on the microcontroller. The average onboard inference speed was 1.47 seconds from reading the image to outputting the predicted label. Prior to domain fine-tuning, the model achieved a similar performance to our baseline model on the StairNet test set, with 98.3% accuracy. Once fine-tuned with the Ego4D images from head-mounted cameras, the model could classify complex stair environments with 98.2% accuracy. To our knowledge, these AI-powered smart glasses are the first to integrate both sensing and computation for visual perception of human-robot walking environments. ## 4 Discussion In this study, we present a comprehensive overview of our StairNet initiative, which was created to support the development of new deep learning models for visual perception of stair environments for human-robot locomotion. The initiative places emphasis on efficient neural networks for onboard real-time inference on mobile and embedded devices. First, we outlined our development of the StairNet dataset with over 515,000 manually labeled images, followed by our development of different state-of-the-art deep learning models (e.g., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks) and training methods (e.g., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images) using our new dataset. Our models consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model size and performance. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference speeds up to 2.8 ms.
When deployed on our CPU-powered smart glasses, which account for human-computer interaction, the inference speed was slower (i.e., 1.5 s). Overall, we showed that StairNet can serve as an effective platform to develop and study new visual perception systems for human-robot locomotion, with applications in environment-adaptive control of prosthetic legs, exoskeletons, and other mobility assistive technologies. Our models offer several benefits over other stair recognition systems [6, 7, 8, 9, 10, 11, 14, 15, 16, 17, 18, 25, 26]. Many previous studies have been limited to statistical pattern recognition and machine learning algorithms that require manual feature engineering. In contrast, our models use multilayer deep neural networks for automatic feature extraction, which has been shown to be superior to handcrafted features [21]. Additionally, our models benefit from the high quantity and quality of the StairNet dataset, with over 515,000 manually annotated images, allowing for more generalizable systems. Previous research has mainly been limited to smaller datasets (see Table 1). These differences have important practical implications as the performance and use of machine learning models require large amounts of diverse data. The increased generalization potential of our models also eliminates the need for explicit requirements for the pose or angle of our camera, as observed in past studies that relied on meticulous rule-based thresholds for the dimensions of the user and environments [10]. Our system only offers general suggestions for the type of camera and mount location, which provides greater flexibility for future research. We studied a wide variety of deep learning models and training methods (Table 2), each of which offers unique advantages and trade-offs. For example, the MoViNet 3D CNN using temporal data [30] achieved the highest classification accuracy on our StairNet test set compared to our baseline 2D CNN model, with a performance increase of 1.1%, demonstrating the benefit of temporal data for human-robot walking. However, the model contains a relatively large number of parameters (4.03 million) and numerical operations (2.5 GFLOPs), which could hinder deployment and real-time inference on mobile and embedded devices with limited computational resources; such models might be better suited for use cases with access to reliable cloud computing. For model efficiency, our MobileViT XS model trained using semi-supervised learning achieved the highest NetScore of 202.4 [41], demonstrating the benefit of using lightweight vision transformers to reduce model parameter count compared to standard convolutional neural networks. Additionally, our semi-supervised learning model showed the ability to use unlabeled data to improve training efficiency by reducing the number of annotated images required by 35% while maintaining the classification performance. The high efficiency of the MobileViT XS model makes it well-suited for our computer vision application. We also studied different state-of-the-art edge devices through our development of a new mobile app [13] and smart glasses [49]. The mobile app uses a TFLite interpreter and on-device GPU and NPU accelerators to maximize inference speed. The app achieved fast inference speeds of 2.75 ms when running our baseline model. However, the mobile app used a chest-mounted smartphone with images that do not necessarily coincide with the user's visual field of view and are susceptible to obstructions such as the user's arms.
This motivated our development of the smart glasses to improve human-computer interaction with an integrated design that takes into account the head orientation, thus having greater potential to infer the user's locomotor intent. However, limitations in the embedded system yielded slower inference speeds of 1.5 s, presenting a trade-off between human-centred design and performance. These slower inference speeds could affect the feasibility for real-time environment-adaptive control. Future work will thus focus on improving the onboard inference speed and further reducing the size of the hardware. Despite these developments, our research has several limitations. First, to evaluate performance, we used the StairNet test set. Although test sets are common practice in deep learning [21], the true real-world performance and generalizability of our models were not analyzed in a deployed environment. Also, during the development of our temporal models, we identified a limitation of the training method used for our baseline and semi-supervised models as the train/validation/test splits were performed randomly between images. This caused data leakage between the different data subsets, with unintentionally higher classification performances for our baseline and semi-supervised models. Retesting revealed an updated baseline accuracy of 97.2% when using dataset splits with randomly sorted videos without neighboring frames in multiple data subsets. To address this, the performance evaluations were based on the change in accuracy relative to our baseline model on the respective test set. For future development using our StairNet dataset, we suggest using these video-based training/validation/test splits. It is also important to mention that state-of-the-art models and methods are continuously being developed. For example, during the course of our development of the temporal models, research on transformers [55] and multilayer perceptrons [56] showed the ability to eliminate the need to process each frame for the encoder and temporal blocks separately by adapting the models to take 3D sequence inputs by modifying the patch-embedding block, which can significantly improve the efficiency in processing and inference. For our semi-supervised learning research, many other algorithms besides FixMatch [42] could have been used to further reduce the number of labeled images required for stair recognition, such as invariant semantic information clustering [57] and cross-level discrimination for unsupervised feature learning [58]. Our visual perception systems, especially the smart glasses, could also be extended to other applications such as providing sensory feedback to persons with visual impairments by leveraging some of the recent advances in large vision-language models [59]. Lastly, we want to emphasize that our visual perception systems as part of the StairNet initiative are meant to supplement, not replace, the existing intent recognition systems for human-robot walking that use mechanical, inertial, and/or EMG data. Our lab views computer vision as a means to improve the speed and accuracy of locomotion mode recognition by minimizing the search space of potential solutions based on the perceived walking environment. In future work, we plan to focus on sensor fusion of vision with EMG and/or inertial data to determine if and when vision can improve performance.
In conclusion, we showed that StairNet can be an effective platform to develop and study new visual perception systems for human-robot locomotion with applications in control of prosthetic legs, exoskeletons, and other mobility assistive technologies. ## 5 Acknowledgements We want to thank members of the Bionics Lab, a part of the Artificial Intelligence and Robotics in Rehabilitation Team at the KITE Research Institute, Toronto Rehabilitation Institute, for their help. This study was supported by the Walter and Maria Schroeder Institute for Brain Innovation and Recovery and the AGE-WELL Networks of Centres of Excellence program. We dedicate this study to the people of Ukraine in response to the 2022 Russian invasion and war. ## References * [1] A. J. Young and D. P. Ferris, "State of the Art and Future Directions for Lower Limb Robotic Exoskeletons," _IEEE Transactions on Neural Systems and Rehabilitation Engineering_, Feb. 2017. * [2] A. Dashkovets and B. Laschowski, "Reinforcement Learning for Control of Human Locomotion in Physics Simulation," _bioRxiv_, 2023. * [3] K. Zhang, C. W. de Silva, and C. Fu, "Sensor Fusion for Predictive Control of Human-Prosthesis-Environment Dynamics in Assistive Walking: A Survey," _arXiv_, Mar. 2019. * [4] M. R. Tucker _et al._, "Control strategies for active lower extremity prosthetics and orthotics: a review," _Journal of NeuroEngineering and Rehabilitation_, Jan. 2015. * [5] A. E. Patla, "Understanding the Roles of Vision in the Control of Human Locomotion," _Gait & Posture_, Feb. 1997. * [6] A. H. A. Al-Dabbagh and R. Ronsse, "Depth Vision-Based Terrain Detection Algorithm During Human Locomotion," _IEEE Transactions on Medical Robotics and Bionics_, Nov. 2022. * [7] N. E. Krausz and L. J. Hargrove, "Recognition of Ascending Stairs from 2D Images for Control of Powered Lower Limb Prostheses," _IEEE International Conference on Neural Engineering_, Apr. 2015. * [8] Y. Massalin, M. Abdrakhmanova, and H. A. Varol, "User-Independent Intent Recognition for Lower Limb Prostheses Using Depth Sensing," _IEEE Transactions on Biomedical Engineering_, Aug. 2018. * [9] H. A. Varol and Y. Massalin, "A Feasibility Study of Depth Image based Intent Recognition for Lower Limb Prostheses," _IEEE Engineering in Medicine and Biology Society_, Aug. 2016. * [10] N. E. Krausz, T. Lenzi, and L. J. Hargrove, "Depth Sensing for Improved Control of Lower Limb Prostheses," _IEEE Transactions on Biomedical Engineering_, Nov. 2015. * [11] G. Khademi and D. Simon, "Convolutional Neural Networks for Environmentally Aware Locomotion Mode Recognition of Lower-Limb Amputees," _ASME Dynamic Systems and Control Conference_, Nov. 2019. * [12] A. G. Kurbis, B. Laschowski, and A. Mihailidis, "Stair Recognition for Robotic Exoskeleton Control using Computer Vision and Deep Learning," _IEEE International Conference on Rehabilitation Robotics_, Jul. 2022. * [13] A. G. Kurbis, A. Mihailidis, and B. Laschowski, "Development and Mobile Deployment of a Stair Recognition System for Human-Robot Locomotion." _bioRxiv_, Apr. 2023. * [14] B. Laschowski, W. McNally, A. Wong, and J. McPhee, "Preliminary Design of an Environment Recognition System for Controlling Robotic Lower-Limb Prostheses and Exoskeletons," _IEEE International Conference on Rehabilitation Robotics_, Jun. 2019. * [15] B. Zhong, R. L. da Silva, M. Li, H. Huang, and E. Lobaton, "Environmental Context Prediction for Lower Limb Prostheses With Uncertainty Quantification," _IEEE Transactions on Automation Science and Engineering_, Apr. 
2021. * [16] B. Zhong, R. L. da Silva, M. Tran, H. Huang, and E. Lobaton, "Efficient Environmental Context Prediction for Lower Limb Prostheses," _IEEE Transactions on Systems, Man, and Cybernetics: Systems_, Jun. 2022. * [17] K. Zhang _et al._, "A Subvision System for Enhancing the Environmental Adaptability of the Powered Transfemoral Prosthesis," _IEEE Transactions on Cybernetics_, Jun. 2021. * [18] B. Laschowski, W. McNally, A. Wong, and J. McPhee, "Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons," _International Conference of the IEEE Engineering in Medicine & Biology Society_, Nov. 2021. * [19] C. Wang, Z. Pei, S. Qiu, and Z. Tang, "Deep Leaning-Based Ultra-Fast Stair Detection," _arXiv_, Sep. 2022. * [20] B. Laschowski, W. McNally, A. Wong, and J. McPhee, "ExoNet Database: Wearable Camera Images of Human Locomotion Environments," _Frontiers in Robotics and AI_, Dec. 2020. * [21] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," _Nature_, May 2015. * [22] A. G. Howard _et al._, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," _arXiv_, Apr. 2017. * [23] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," _arXiv_, Mar. 2019. * [24] M. Abadi _et al._, "TensorFlow: A System for Large-Scale Machine Learning," _arXiv_, May. 2016. * [25] B. Laschowski, W. McNally, A. Wong, and J. McPhee, "Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks," _Frontiers in Neurorobotics_, Feb. 2022. * [26] B. Laschowski, "Energy Regeneration and Environment Sensing for Robotic Leg Prostheses and Exoskeletons," Doctoral Thesis, University of Waterloo, 2021. * [27] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A Large-Scale Hierarchical Image Database," _IEEE Conference on Computer Vision and Pattern Recognition_, Jun. 2009. * [28] "TensorFlow Lite | ML for Mobile and Edge Devices," TensorFlow. [Online]. Available: [https://www.tensorflow.org/lite](https://www.tensorflow.org/lite) * [29] A. Inc, "Apple Developer," Apple Developer. [Online]. Available: [https://developer.apple.com/](https://developer.apple.com/) * [30] B. Ivanyuk-Skulskiy, A. G. Kurbis, A. Mihailidis, and B. Laschowski "Sequential Image Classification of Human-Robot Walking Environments using Temporal Neural Networks," _bioRxiv_, 2023. * [31] D. Kondratyuk _et al._, "MoViNets: Mobile Video Networks for Efficient Video Recognition," _IEEE Conference on Computer Vision and Pattern Recognition_, 2021. * [32] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," _arXiv_, Apr. 2015. * [33] M. Tan and Q. V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," _arXiv_, Sep. 2020. * [34] S. Mehta and M. Rastegari, "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer," _arXiv_, Mar. 2022. * [35] A. Dosovitskiy _et al._, "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," _arXiv_, Jun. 2021. * [36] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," _Neural Computation_, Nov. 1997. * [37] A. Vaswani _et al._, "Attention is All you Need," _Advances in Neural Information Processing Systems_, Dec. 2017. * [38] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," _arXiv_, Jan. 2017 * [39] A. 
Wong, "NetScore: Towards Universal Metrics for Large-scale Performance Analysis of Deep Neural Networks for Practical On-Device Edge Usage," _arXiv_, Aug. 2018. * [40] J. Carreira and A. Zisserman, "Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset," _arXiv_, Feb. 2018. * [41] D. Kuzmenko, O. Tsepa, A. G. Kurbis, A. Mihailidis, and B. Laschowski, "Efficient Visual Perception of Human-Robot Walking Environments using Semi-Supervised Learning," _bioRxiv_, Jun. 2023. * [42] K. Sohn _et al._, "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence," _arXiv_, Nov. 2020. * [43] Q. Xie, M.-T. Luong, E. Hovy, and Q. V. Le, "Self-Training with Noisy Student Improves ImageNet Classification," _arXiv_, Jun. 2020. * [44] H. Pham, Z. Dai, Q. Xie, M.-T. Luong, and Q. V. Le, "Meta Pseudo Labels," _arXiv_, Mar. 2021. * [45] D. Berthelot, R. Roelofs, K. Sohn, N. Carlini, and A. Kurakin, "AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation," _arXiv_, Mar. 2022. * [46] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A Simple Framework for Contrastive Learning of Visual Representations," _arXiv_, Jun. 2020. * [47] I. Loshchilov and F. Hutter, "Decoupled Weight Decay Regularization," _arXiv_, Jan. 2019. * [48] I. Loshchilov and F. Hutter, "SGDR: Stochastic Gradient Descent with Warm Restarts," _arXiv_, May 2017. * [49] D. Rossos, A. Mihailidis, and B. Laschowski, "Al-Powered Smart Glasses for Sensing and Recognition of Human-Robot Walking Environments," _bioRxiv_, Oct. 2023. * [50] "Google Glass Teardown." [Online]. [http://www.catwig.com/google-glass-teardown/](http://www.catwig.com/google-glass-teardown/) * [51] "Discover Ray-Ban(r) Stories Features" [Online]. [https://www.ray-ban.com/canada/en/discover-rayban-stories/clp](https://www.ray-ban.com/canada/en/discover-rayban-stories/clp) * [52] O. Tsepa, R. Burakov, B. Laschowski, and A. Mihailidis, "Continuous Prediction of Leg Kinematics during Walking using Inertial Sensors, Smart Glasses, and Embedded Computing," _bioRxiv_, Feb. 2023. * [53] "Arducam HM0360 VGA SPI Camera Module for Raspberry Pi Pico," [Online]. [https://www.arducam.com/product/arducam-hm0360-vga-spi-camera-module-for-raspberry-pi-pico-2/](https://www.arducam.com/product/arducam-hm0360-vga-spi-camera-module-for-raspberry-pi-pico-2/) * [54] K. Grauman _et al._, "Ego4D: Around the World in 3,000 Hours of Egocentric Video," _IEEE Conference on Computer Vision and Pattern Recognition_, Jun. 2022. * [55] Z. Liu _et al._, "Video Swin Transformer," _IEEE Conference on Computer Vision and Pattern Recognition_, Jun. 2022. * [56] D. J. Zhang _et al._, "MorphMLP: An Efficient MLP-Like Backbone for Spatial-Temporal Representation Learning," _arXiv_, Aug. 2022. * [57] X. Ji, J. F. Henriques, and A. Vedaldi, "Invariant Information Clustering for Unsupervised Image Classification and Segmentation," _arXiv_, Aug. 2019. * [58] X. Wang, Z. Liu, and S. X. Yu, "Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination," _IEEE Conference on Computer Vision and Pattern Recognition_, Jun. 2021. * [59] H. Tan, A. Mihailidis, and B. Laschowski, "A Sensory Feedback System for Persons with Visual Impairments Powered by Vision-language Models," _bioRxiv_, 2023. 
\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Reference & Camera & Position & Dataset Size & Classifier & Computing Device & Test Accuracy \\ \hline [11] & RGB & Waist & 7,284 & Convolutional neural network & NVIDIA Titan X & 99.6\% \\ \hline [10] & Depth & Chest & 170 & Heuristic thresholding and edge detector & Intel Core i5 & 98.8\% \\ \hline [9] & Depth & Leg & 8,455 & Support vector machine & Intel Core i7-2640M & 98.5\% \\ \hline StairNet & RGB & Chest & 515,452 & Convolutional neural network & Google Cloud TPU & 98.4\% \\ \hline [17] & Depth & Leg & 3,000 & Convolutional neural network & NVIDIA Quadro P400 & 96.8\% \\ \hline [8] & Depth & Leg & 109,699 & Cubic kernel support vector machine & Intel Core i7-2640M & 95.6\% \\ \hline [14] & RGB & Chest & 34,254 & Convolutional neural network & NVIDIA Titan Xp & 94.9\% \\ \hline [15] & RGB & Head & 123,979 & Bayesian deep neural network & NVIDIA Jetson TX2 & 93.2\% \\ \hline [16] & RGB & Leg & 123,954 & Bayesian deep neural network & NVIDIA Titan X2 & 92.4\% \\ \hline [18] & RGB & Chest & 542,868 & Convolutional neural network & Google Cloud TPU & 70.8\% \\ \hline \end{tabular} \end{table} Table 1: Summary of previous vision-based stair recognition systems for robotic leg prostheses and exoskeletons. The dataset size (i.e., the number of images) and test accuracy are only for the environment classes relating to level-ground walking and stair ascent. The systems are organized in terms of the test accuracy (%). \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Type & Dataset size & Training approach & Architecture & Change in accuracy compared to baseline & NetScore & Model Parameters (millions) \\ \hline Baseline Network & 515,452 labeled & SL - Single frame & MobileNetV2 & 0\% & 186.8 & 2.3 \\ \hline Temporal Neural Networks* & 515,452 labeled & SL - M1 & MoViNet & +1.1\% & 167.4 & 4.0 \\ & & SL - M1 & MobileNetV2 + LSTM & +0.1\% & 132.1 & 6.1 \\ & & SL - M1 & MobileViT-XXS + LSTM & -0.2\% & 155.0 & 3.4 \\ & & SL - MM & MobileNetV2 + LSTM & -26.5\% & 120.1 & 6.0 \\ \hline Semi-Supervised Network & 300,000 labeled, 1.8M unlabeled & SSL - FixMatch & MobileViT-XS & +0.4\% & 202.4 & 1.9 \\ & & SSL - FixMatch & MobileViT-XXS & -0.7\% & 186.5 & 0.9 \\ & & SSL - FixMatch & MobileViT-S & -1.2\% & 169.7 & 4.9 \\ \hline \multicolumn{7}{|p{56.9pt}|}{*Evaluated using the video-based train/validation/test split as described in Section 3.3} \\ \end{tabular} \end{table} Table 2: Summary of our stair recognition systems (StairNet). The models were evaluated based on image classification accuracy and efficiency (i.e., NetScore – higher is better). The systems are organized by model type. We tested supervised learning (SL) and semi-supervised learning (SSL) methods, and many-to-one (M1) and many-to-many (MM) temporal neural networks. The dataset sizes for our baseline and temporal neural networks were 515,452 labeled images, and 300,000 labeled images and 1.8 million unlabeled images for our semi-supervised learning networks. Figure 1: The inference and development pipelines for our baseline StairNet model [12] trained using supervised learning and single images for stair recognition. We developed this model as a reference and benchmark for our other deep learning models.
2309.15966
Noise Reduction Methods for Large-scale Intensity-mapping Measurements with Infrared Detector Arrays
Intensity mapping observations measure galaxy clustering fluctuations from spectral-spatial maps, requiring stable noise properties on large angular scales. We have developed specialized readouts and analysis methods for achieving large-scale noise stability with Teledyne 2048$\times$2048 H2RG infrared detector arrays. We designed and fabricated a room-temperature low-noise ASIC Video8 amplifier to sample each of the 32 detector outputs continuously in sample-up-the-ramp mode with interleaved measurements of a stable reference voltage that remove current offsets and $1/f$ noise from the amplifier. The amplifier addresses rows in an order different from their physical arrangement on the array, modulating temporal $1/f$ noise in the H2RG to high spatial frequencies. Finally, we remove constant signal offsets in each of the 32 channels using reference pixels. These methods will be employed in the upcoming SPHEREx orbital mission that will carry out intensity mapping observations in near-infrared spectral maps in deep fields located near the ecliptic poles. We also developed a noise model for the H2RG and Video8 to optimize the choice of parameters. Our analysis indicates that these methods hold residual $1/f$ noise near the level of SPHEREx photon noise on angular scales smaller than $\sim30$ arcminutes.
Grigory Heaton, Walter Cook, James Bock, Jill Burnham, Sam Condon, Viktor Hristov, Howard Hui, Branislav Kecman, Phillip Korngut, Hiromasa Miyasaka, Chi Nguyen, Stephen Padin, Marco Viero
2023-09-27T19:30:10Z
http://arxiv.org/abs/2309.15966v1
Noise Reduction Methods for Large-Scale Intensity Mapping Measurements with Infrared Detector Arrays ###### Abstract Intensity mapping observations measure galaxy clustering fluctuations from spectral-spatial maps, requiring stable noise properties on large angular scales. We have developed specialized readouts and analysis methods for achieving large-scale noise stability with Teledyne 2048\(\times\)2048 H2RG infrared detector arrays. We designed and fabricated a room-temperature low-noise ASIC Video8 amplifier to sample each of the 32 detector outputs continuously in sample-up-the-ramp mode with interleaved measurements of a stable reference voltage that remove current offsets and \(1/f\) noise from the amplifier. The amplifier addresses rows in an order different from their physical arrangement on the array, modulating temporal \(1/f\) noise in the H2RG to high spatial frequencies. Finally, we remove constant signal offsets in each of the 32 channels using reference pixels. These methods will be employed in the upcoming SPHEREx orbital mission that will carry out intensity mapping observations in near-infrared spectral maps in deep fields located near the ecliptic poles. We also developed a noise model for the H2RG and Video8 to optimize the choice of parameters. Our analysis indicates that these methods hold residual \(1/f\) noise near the level of SPHEREx photon noise on angular scales smaller than \(\sim 30\) arcminutes. ## 1 Introduction The history of galaxy formation can be studied using intensity mapping measurements of the near-infrared extragalactic background light. Galaxy clustering produces large-scale structure on the order of tens of arcminutes (Cooray et al. (2004), Kashlinsky et al. (2004)). The sources that comprise the background can be accessed using fluctuation measurements rather than galaxy-by-galaxy photometry (e.g. Zemcov et al. (2014)). Such intensity-mapping measurements promise to trace total light production, including diffuse and faint sources of emission, over the history of galaxy formation back to the Epoch of Reionization (Kovetz et al. (2017)). In order to optimize measurements of these large-scale background fluctuations, it is critical to minimize readout noise on tens of arcminute scales in astrophysical images. The Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) mission is an all-sky infrared (0.75-5.0 \(\mu\)m) survey satellite (Dore et al. (2014), Crill et al. (2020)) selected under NASA's Medium Explorer (MIDEX) program. Alongside other mission objectives, in deep maps, SPHEREx will study large-scale infrared extragalactic background structure. SPHEREx uses 6 Teledyne HAWAII-2RG (H2RG) HgCdTe sensors (Blank et al. (2012)), each with a 2048\(\times\)2048 array of pixels. Included in this count is a frame of reference pixels which are not sensitive to light, consisting of the outermost 4 pixels surrounding the sensor. These reference pixels are designed to emulate the electrical response of a typical optical pixel, allowing them to be used to correct for dark current and electrical noise in an image. The array is divided into 32 2048\(\times\)64 pixel channels, each of which is read out simultaneously row by row. Noise from the readout electronics can manifest itself as spatial \(1/f\) noise when the H2RG array is read. 
In order to minimize spatial noise on angular scale \(\theta\approx 30^{\prime}\) in intensity mapping observations, we present several new methods for reducing the effects of electronics \(1/f\) noise. "Row-chopping" reduces the effect of temporal \(1/f\) noise by reading rows non-sequentially. Combined with multiple H2RG reference pixel samplings, we obtain significant noise reduction on the spatial scales of interest, optimizing large-scale background fluctuation measurements. We use a phantom pixel correction scheme based on measuring a stable reference voltage to remove dark current offsets produced by our custom ASIC amplifier. We validated these noise reduction methods first in a noise simulator and later in laboratory tests with a physical H2RG array. In this paper, spatial noise power is expressed in terms of the spatial power spectrum \(P(k)\), which represents the azimuthally-averaged spatial power as a function of angular wave number. For SPHEREx detectors, each array pixel corresponds to 6.2 arcseconds of sky angle, resulting in a wave number \(k=2\pi(6.2^{\prime\prime}/\theta)\) in pix\({}^{-1}\) units for an angular period \(\theta\). This paper is structured as follows: In Section 2 we describe the SPHEREx readout electronics and Video8 amplifier, and outline various noise reduction techniques. In Section 3 we describe a full noise simulator to model the electronics noise, followed by optimization of noise reduction techniques in Section 4. Section 5 compares the simulation outputs and modeled noise reduction with measured laboratory images. Finally, Section 6 summarizes our conclusions. ## 2 Spherex Readout Electronics ### Readout Boards The SPHEREx readout electronics consist of 6 identical readout boards (see Figure 1), a central electronics board (CEB), and a low voltage power supply. The boards are housed in a card cage and communicate with each other and the spacecraft via a backplane. Each readout board services a single H2RG array, providing bias, control signals, and readout via low-noise Video8 amplifiers (see Section 2.2) operated in 32-channel 100kHz mode. The channel blocks are 64 pixels wide (see Figure 2). The readout board logic and processor are embedded in an RTG4 FPGA, with a 3 Gbit SDRAM device providing on-board sample-up-the-ramp processing for all \(\sim 4\) million H2RG pixels. The processor, aided by dedicated logic, compresses and formats the sample up the ramp data before sending it to the spacecraft via the CEB. Each readout board also has a housekeeping system to monitor various voltages and temperatures. The CEB interfaces with the spacecraft, communicates with the 6 readout boards, monitors 25 Cernox temperature sensors mounted on the telescope, and controls the temperature of the focal plane arrays via precision readout of 8 Cernox sensors and 16 bit control of 8 heaters. An RTG4 FPGA with an embedded processor orchestrates the CEB activity, which includes the generation of low-speed (38.4 kbaud) housekeeping telemetry and the booting of all the embedded processors using code stored in MRAM. The H2RG bias supply is designed for stability and is housed in an oven-controlled region of the readout board to minimize thermal drifts. Each of the 32 H2RG channels sees a common bias voltage from this supply, resulting in any bias noise fluctuations or temperature offsets being repeated across all channels. 
Figure 1: (a) Overview of the SPHEREx readout electronics and (b) internal schematic of a single Video8 preamplifier channel, one of eight on the chip. Differential input lines are capactively coupled to a low-noise integrator, and the output is read with one of two sample and hold circuits. The Video8 input can be switched to a reference voltage intermittently to track amplifier drift. ### Video8 Amplifier The Video8 amplifier is an Application-Specific Integrated Circuit (ASIC) designed at Caltech for SPHEREx. It interfaces preamplifiers and integrators between SPHEREx H2RG arrays and external RT2378-20 20 bit ADCs. The Video8 amplifiers are designed for stable operation over a wide temperature range from -50\({}^{\circ}\)C to +50\({}^{\circ}\)C. For SPHEREx, the Video8 devices are located within the instrument electronics box and do not require special cooling. A cryo-harness connects the H2RG detectors, operating at 50-80K, to the Video8 inputs. In order to minimize spatially correlated noise, the Video8 was designed for low \(1/f\) noise and low channel-to-channel crosstalk. Table 1 summarizes the performance characteristics of the Video8 architecture. The Video8 incorporates 8 fully differential readout channels, each composed of a preamplifier and a pair of integrators (Figure 1). The 8 channels are grouped into two banks of 4, with each bank being serviced by a single off-chip 20 bit ADC. The integrators box-car filter the signal for optimum noise performance, then perform a sample and hold. The two integrators associated with a given preamplifier operate in ping-pong fashion, with one performing an integration while the other holds a prior integration result for multiplexed output to one of the two off-chip ADCs. Multiplexing four channels to each ADC results in ADC operation at 400 kHz. The amplifiers and other internal circuitry of Video8 operate with a 5V supply voltage, while a separate 3.5V supply voltage determines the digital input signal voltage range. Analog switches at the inputs to the preamplifiers provide flexibility for the input of reference and test levels. The inputs to the preamplifier are capacitively coupled and the feedback is purely capacitive, with a reset switch in parallel. Closing the reset switch nominally brings the preamplifier outputs to 2.5V. Additional capacitor/switch networks at the preamplifier inputs allow the preamplifier output levels to be adjusted, post-reset, in preparation for operation. The outputs of each preamplifier are connected through off-chip resistors to the inputs of a pair of integrators (see Figure 1). The feedback for the integrators is again purely capacitive with an associated reset switch in parallel. Switches at the integrator outputs multiplex the integrator signals for off-chip analog-to-digital conversion. The Video8 switches may be sequenced by external logic for optimal performance. Using off-chip resistors allows for some adjustment of the overall channel gain. With the use of low temperature coefficient (TC) resistors, the feedback capacitors (\(\sim\) 20ppm/\({}^{\circ}\)C) dominate the temperature susceptibility. The Video8 devices are manufactured using the C5N 0.5 \(\mu\)m CMOS process at Onsemi (ON Semiconductor), the same process used for manufacturing several ASICs designed at Caltech for prior space missions. Guard rings around groups of NMOS and PMOS FETs provide a "rad-hard by design" feature that boosts the latch-up threshold to \(>\)80 MeV/(mg-cm2), verified in test with heavy ions. 
The total dose tolerance has been verified up to 20 krad, well beyond the 7.5 krad SPHEREx lifetime requirement. The Video8 die size is 3.3\(\times\)3.3 mm, and each device is packaged in a standard 84 pin ceramic quad flat pack. Caltech and the NASA Jet Propulsion Laboratory worked closely on the Video8 space qualification, following the process for prior Caltech-designed space ASICs.

\begin{table} \begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ Parameter} & Value \\ \hline Amplifier bandwidth & 20 MHz \\ Read noise & 1-2 e \\ Gain stability & 20 ppm \\ Offset stability & 3 \(\mu\)V/C \\ Integral non-linearity & \(<\)0.07\% \\ Channel-to-channel cross-talk & \(<\)2e-4 \\ Room temperature preamp output drift rate & \(<\)1 mV/s \\ \hline \end{tabular} \end{table} Table 1: Video8 Performance Characteristics

### Row-chopping

Intensity mapping measurements of the extragalactic background require stable noise performance on tens of arcminute spatial scales where linear clustering peaks. However, noise stability is less important on small scales due to prominent Poissonian fluctuations from undetected galaxies. A new method for reading out channel rows, "row-chopping", transfers correlated spatial noise from low to high spatial frequencies. Because noise at small spatial scales is largely subdominant to photon noise, row-chopping improves stability on large spatial scales. Row-chopping reads out the rows of an image in a pre-defined non-sequential order. Rows can then be re-ordered after an exposure has been taken to recover the sky image. The result transfers \(1/f\) noise power produced by the readout electronics to smaller spatial scales. The row-chopping algorithm we employ incorporates an integer skip parameter \(S\) which controls the scale of the noise reduction provided (Figure 3): 1. Read the first row in the image. 2. Skip over \(S-1\) rows without reading them. If this results in skipping over rows that are beyond the last physical row in the image, wrap around to the beginning of the image and continue counting, counting one additional time each time you wrap back to the first row. 3. Read the current row. 4. Repeat steps 2 and 3 with the same \(S\) until every physical row on the array has been read once. The SPHEREx readout electronics execute row-chopping on each of the 32 channels simultaneously during readout. The parameter \(S\) controls the spatial scale to which noise is transferred, allowing for the selection of an \(S\) that minimizes noise power at low spatial frequencies (see Section 4.2). Row-chopping effectively reduces spatial \(1/f\) noise along the vertical direction of the array, perpendicular to the rows. \(1/f\) noise along each row, unaffected by row-chopping, is already small due to the short time required to read each row.
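The read order defined by steps 1-4 is simply a permutation of the physical row indices. The following Python sketch is our own illustration of that ordering (not the flight firmware); it builds the read order for a given skip parameter \(S\) and restores sky order after an exposure:

```python
import numpy as np

def chopped_row_order(n_rows=2048, skip=24):
    """Return the physical row indices in the order they are read.

    Implements steps 1-4 above: read a row, advance by the skip parameter S,
    and count one additional row each time the sequence wraps past the end.
    """
    order = []
    row = 0
    for _ in range(n_rows):
        order.append(row)
        row += skip                  # skip S-1 rows, land on the next read
        while row >= n_rows:         # wrap around, counting one extra row
            row = row - n_rows + 1
    assert len(set(order)) == n_rows, "S must give full coverage of the array"
    return np.asarray(order)

def unchop(chopped_frame, order):
    """Re-order the rows of a frame recorded in chopped order back to sky order."""
    sky_frame = np.empty_like(chopped_frame)
    sky_frame[order] = chopped_frame
    return sky_frame
```

For \(S=24\) and 2048 rows the sequence steps through all 24 row phases in turn, so every physical row is read exactly once per frame, as required by step 4.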
### Reference Pixels

The largest remaining large-scale effect after row-chopping is an overall offset present within each channel due to common \(1/f\) noise in the channel readout. We can use reference pixels to remove these per-channel offsets. Each H2RG array contains a 4 pixel-wide frame of the outermost pixels in the overall 2048\(\times\)2048 grid which are not sensitive to light. These reference pixels are meant to reproduce the electrical behavior of an optical pixel without any photoresponse or photon noise. We use these pixels to correct for channel-level offsets from residual \(1/f\) noise common to a channel readout. These reference pixels are most easily read out in physical order, leaving 8 rows of reference pixels for each channel. There are an additional 4 columns of reference pixels on the right and left sides of the frame (only in channels 1 and 32).

Figure 2: Illustration of the H2RG readout operated in 32-channel mode, truncated to 4 channels. Each channel is read simultaneously, row-by-row, progressing downward. The array has a surrounding border of reference pixels 4 pixels wide. We vary the sequence of row order for improved noise stability.

To optimize offset correction, the SPHEREx readout system can sample reference rows additional times per image, interspersing them evenly through an exposure (see Section 4.3). These additional reads are intended to increase the accuracy of the reference pixels through statistical averaging. In some cases we also note a separate offset between even and odd columns within a channel (see Section 3.2). While this even/odd offset only enters at high spatial frequencies and is therefore not relevant to measurements at large spatial scales, it can be corrected by using even/odd reference pixels.

### Phantom Pixel Correction

Each Video8 periodically samples a DC reference from the bias supply, injected into the data stream as a "phantom pixel". The phantom pixel data are processed by the same sample-up-the-ramp algorithm as the H2RG pixels, correcting preamplifier \(1/f\) noise, leakage current, and temperature drift. The Video8 leakage current is temperature-dependent and significant, typically a few electrons per second. Phantom pixel samples accurately remove these effects because only the low amplifier noise is present when sampling the reference. During readout, we collect \(c\) phantom measurements at the output of the Video8 with the input connected to a stable low-noise reference voltage. These phantom measurements are then averaged together into a single phantom pixel per row. Following this, we read the whole row of 64 pixels. We correct rows of the output image by subtracting the subsequent phantom pixel from each pixel in a row, resulting in 32\(\times\)2048 corrections, as each H2RG channel has its own set of independent phantom pixels.
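As a concrete illustration of the correction just described, the sketch below subtracts an averaged phantom-pixel value from every pixel of its row in each 64-pixel-wide channel. The array layout and argument names are our own assumptions rather than the flight data format:

```python
import numpy as np

def phantom_correct(image, phantom, n_channels=32, chan_width=64):
    """Subtract the per-row phantom-pixel value from every pixel of its row.

    image   : (n_rows, n_channels * chan_width) sample-up-the-ramp image
    phantom : (n_channels, n_rows) phantom value per channel and row, already
              averaged over the c reference samples taken for that row
    """
    corrected = image.astype(float).copy()
    for ch in range(n_channels):
        cols = slice(ch * chan_width, (ch + 1) * chan_width)
        # One correction per (channel, row): 32 x 2048 values in total.
        corrected[:, cols] -= phantom[ch][:, None]
    return corrected
```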
## 3 Noise Simulation

### Simulation Inputs

In order to assess various correction parameters and to provide simulated noise images for the SPHEREx sky simulator package, we developed a Python simulation to generate unique noise images from predefined noise sources. We first characterize separately the noise power spectra produced by the various components of the readout electronics. Once power spectra are defined, we can generate an arbitrary quantity of simulated noise timestreams for a simulated image array. We generate an array of frames in this way for a simulated exposure and fit slopes for each pixel to produce a sample-up-the-ramp image.

Figure 3: Illustration of row-chopping on an example grid for \(S=4\), with black arrows representing the first pass through the array and red arrows representing the second. A pre-defined number of rows (here, 4) are skipped over after reading each row, with a return to the first unread row every time the bottom of the array is reached.

We found that the SPHEREx readout noise timestream can be modeled by a noise power spectrum function \(p\) of the form: \[p=\alpha+\beta/f+\gamma/f^{1.3}. \tag{1}\] The factors \(\alpha\) and \(\beta\) correspond to the white and \(1/f\) noise coefficients of the system, respectively. Here, \(\gamma\) is only used to model a component of excess noise in the readout integrated circuit (ROIC) amplifier, with the additional power of 1.3 being found to best match the measured noise data.

### Generating Simulated Images

We determined the noise coefficients for each of the readout components by fitting \(\alpha\), \(\beta\), and \(\gamma\) to a measured noise power spectrum (Table 2).

\begin{table} \begin{tabular}{l c c c} \hline \hline Noise Source & \(\alpha\) & \(\beta\) & \(\gamma\) \\ & e\({}^{2}\)/Hz & e\({}^{2}\) & e\({}^{2}\ast\mathrm{Hz}^{0.3}\) \\ \hline Video8 & 3.71e-5 & 8.40e-3 & 0 \\ ROIC amplifiers & 1.97e-3 & 0 & 1e-1 \\ Phantom correlated bias & 0 & 8.80e-5 & 0 \\ Phantom uncorrelated bias, visible to phantom pixels & 0 & 3.64e-6 & 0 \\ Phantom uncorrelated bias, invisible to phantom pixels & 2.91e-6 & 3.07e-7 & 0 \\ \hline \hline \end{tabular} Note. – Coefficients are calculated assuming a one-sided power spectrum convention \end{table} Table 2: Measured best-fit noise components for SPHEREx readout electronics

Power spectra corresponding to the different noise components are shown in Figure 4. We measured the ROIC amplifier white noise by correlated double sampling due to this noise component overwhelming other sources of noise at high sampling rates. For the Video8 and ROIC noise sources, we generate independent unique noise timestreams for each of the 32 H2RG channels. Since the noise components of the bias supply act simultaneously across all channels, we generate a single bias timestream for the entire 32-channel image at the start of each simulation. We further split bias contributions into three components to model how they affect phantom pixel measurements (see Section 2.5). These consist of a "phantom uncorrelated" bias term which is only visible to the phantom pixel reads, a "phantom uncorrelated" bias term which is only visible to the array and not to the phantom pixels, and a "phantom correlated" bias term that is visible to both. These terms account for the fact that the phantom reference and the detector bias are partially correlated, with independent and common-mode noise. The magnitude of the independent components largely determines how effectively the phantom pixel correction method in Section 2.5 corrects for bias fluctuations in addition to its primary function of removing Video8 amplifier leakage current. With noise coefficients for all sources defined, we produce unique timestream realizations. Let \(R\) represent a random number with a uniform distribution on (0, 1) and let \(f_{s}\) be the sampling frequency of the system (100 kHz). Given an arbitrary \(n\times 1\) two-sided power spectrum array \(\vec{p}\), \(n\) odd, we produce a timestream realization \(\vec{x}\) corresponding to \(\vec{p}\) by applying random phases to the non-DC components of the two-sided power spectrum components and taking an inverse Fourier transform as follows: \[\vec{A}=[p_{0}^{1/2},p_{1}^{1/2},...,p_{n}^{1/2}], \tag{2}\] \[\vec{\phi}=[e^{2\pi iR_{0}},e^{2\pi iR_{1}},...,e^{2\pi iR_{\rm{floor(n/2)}}}], \tag{3}\] \[\vec{a}=\vec{A}\cdot[1,\vec{\phi},\vec{\phi}^{*}], \tag{4}\] \[\vec{x}=\Re(\mathcal{F}^{-1}[\sqrt{nf_{s}}*\vec{a}]). \tag{5}\] Since this calculation depends on floor(\(n/2\)) random numbers, we can generate a large number of relevant timestreams given enough computation time.
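The noise model of Eq. (1) and the random-phase realization of Eqs. (2)-(5) translate directly into NumPy. The sketch below is a minimal re-expression of those formulas; the coefficient values are left as arguments rather than hard-coded from Table 2, and the ordering of the conjugate phases is our interpretation of Eq. (4):

```python
import numpy as np

def noise_psd(f, alpha, beta, gamma):
    """Noise power spectrum model of Eq. (1): p = alpha + beta/f + gamma/f**1.3."""
    return alpha + beta / f + gamma / f**1.3

def timestream_realization(p, fs=1.0e5, rng=None):
    """Draw one timestream x from a two-sided power spectrum p (Eqs. 2-5).

    p  : length-n array with n odd; p[0] is the DC term
    fs : sampling frequency in Hz (100 kHz for the SPHEREx readout)
    """
    rng = np.random.default_rng(rng)
    n = len(p)
    assert n % 2 == 1, "n is taken to be odd, as in the text"
    amplitude = np.sqrt(p)                                     # Eq. (2)
    phases = np.exp(2j * np.pi * rng.random(n // 2))           # Eq. (3)
    # Eq. (4); the negative-frequency half is reversed so that the
    # spectrum is Hermitian and the inverse transform is real.
    a = amplitude * np.concatenate(([1.0], phases, np.conj(phases)[::-1]))
    return np.real(np.fft.ifft(np.sqrt(n * fs) * a))           # Eq. (5)
```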
Once we generate a unique timestream for each channel and combine it with the bias timestream component, we can produce a unique frame corresponding to the noise power present on a perfectly dark exposure with no fixed dark current. For a sequence of \(N\) frames, we fit a slope to each pixel to produce a full noise image realization. An additional source of H2RG noise is present in the form of an offset between the even- and odd-numbered columns of each channel, referred to by Rauscher (2015) as alternating column noise (ACN). ACN effectively results in two common offsets for each channel. ACN primarily produces spatial noise at scales significantly smaller than those relevant to large-scale background measurements, but we include it for completeness. We model ACN by splitting all noise components (except for the common bias components) into two realizations and applying one to each column parity, producing two independent offsets per channel. ### Unmodeled Noise Our measurements (see Section 5) also indicate the presence of a low level of low-frequency per-pixel telegraph noise. Such a noise component increases the effective white noise level in long integrations. This same noise also reduces the efficiency of multiple reference sampling once the readout noise falls below the per-pixel noise. We have chosen not to model this per-pixel noise, but point out its effects in long integrations, where it causes an offset in noise levels between simulated and measured images. ## 4 Noise Reduction and Optimization After producing simulated noise images as detailed in Section 3, we apply the following noise reduction techniques on simulated images, calculate their spatial power spectra, and compare the post-processed power spectra with the spectra of the original images. Figure 4: Simulated spatial power spectra realizations corresponding to each noise component power spectrum applied to a 2048\(\times\)2048 image obtained in readout order, with no dark current. The ROIC amplifier noise dominates at all spatial frequencies. The prominent peak at high \(k\) for the Video8 and ROIC spectra is caused by alternating column noise, and is not relevant for large-scale measurements. ### Phantom Pixel Correction Performance For the particular noise sources monitored by the phantom pixel reads, phantom pixel corrections are very effective at reducing spatial \(1/f\) noise due to the low Video8 noise. For simulated images generated using only noise sources visible to the phantom pixels and using \(c=4\) co-adds, the spatial noise is significantly reduced at low spatial frequencies (see Figure 5). For a full SPHEREx readout simulation with all noise sources active, the noise power contributed by the Video8 and bias is subdominant to the ROIC noise. However, this correction remains critical to remove the large leakage current from the Video8 amplifiers. ### Row-chopping Optimization The spatial scale for noise reduction in the row-chopping operation is determined by the parameter \(S\), the number of rows skipped during each row read. This value is selected during system development and is not changed during flight. For \(S\) much less than the total number of rows in the image (in this case, 2048), increasing \(S\) generally pushes noise to smaller and smaller spatial scales. As \(S\) increases, some noise wraps around to larger spatial scales rather than continuing the initial downward trend. Thus there is an optimal \(S\) that depends on project requirements. 
In this paper, we selected a skip parameter of \(S=24\) to minimize noise at tens of arcminute scales (see Figure 6 and Figure 7). We selected this value by applying row-chopping to a full simulated image with the parameters described in Table 2. Increasing \(S\) beyond a few dozen rows results in a slight increase in noise at large spatial scales due to noise wrapping.

Figure 5: Power spectra for a 32-channel simulated image with all noise sources active except the ROIC amplifier terms, before and after \(c=4\) phantom pixel correction. The \(1/f\) component here is from Video8 and bias noise. While subdominant to the ROIC noise, these sources of \(1/f\) noise are mitigated by phantom pixel corrections.

Figure 6: Average noise power on \(\theta=5-20\) arcminute scales (\(0.03<k<0.13\) pix\({}^{-1}\)) with varying skip parameter \(S\) for a 32-channel \(N=74\) simulated reference-corrected image with all noise sources active. Here, we calculate noise after excluding the horizontal axis in Fourier space (see Figure 7) to isolate the effect of row-chopping from reference correction limitations.

Figure 7: 2D FFT magnitudes in arbitrary units of a 32-channel simulated image before and after \(S=24\) row-chopping, cropped to show the region near the origin. Noise power is transferred from the horizontal axis to discrete packets above and below. Significant power remains on the horizontal axis, demonstrating the need for reference correction. We excluded the horizontal axis in Figure 6 to optimize the choice of skip parameter.

### Reference Correction

Row-chopping mitigates \(1/f\) noise in the vertical direction across the array but not across channels due to current offsets in individual H2RG channels, which are created by independent \(1/f\) noise in each channel over an integration. Additional current offsets between even/odd columns at smaller spatial scales are produced by ACN. Both of these offsets can be eliminated by filtering the horizontal axis of the image centered in Fourier space, but at the cost of losing some astrophysical information. We can use reference pixels to approximate these corrections without this downside. The most basic method for reference correction is to subtract a single average of all of the reference pixels in each channel. Because the H2RG array uses a 4-pixel-wide frame of reference pixels, this results in 8\(\times\)64 = 512 reference pixels available for each channel, with an additional 8176 reference pixels for the first and last channels distributed along the side of the channel. More complex methods for reference correction have also been developed (see Rauscher et al. (2017) and Kubik et al. (2014)). We improved on the default reference correction method with two strategies: (1) reading the top and bottom rows repeatedly in place, and (2) spreading the multiple reads throughout the image. As shown in Figure 8, distributing the reference reads improves the performance of the correction. This improvement is due to low-frequency components in the noise timestreams, which are mitigated by reducing the average time interval between reference samples. The overall reference correction algorithm operates as follows, utilizing only the reference rows physically at the top and bottom of each channel: 1. Define a desired number of reference row reads per channel \(r\). 2.
Define positions for the reference rows such that they are evenly distributed throughout the channel and insert these rows into the readout order by re-reading a physical reference row at each position. Physical reference rows are read cyclically, resulting in each being read an average of \(r/8\) times per frame. 3. After producing a full image, average together all reference pixels for each channel, in even and odd columns (resulting in two corrections per channel) if desired, and subtract those values from the respective channel. A \(\sigma\)-clip operation should be performed prior to averaging reference pixels to eliminate any outlier hot pixels. By default, we subtract a single average from each channel and do not correct for ACN in order to increase the statistical weight of the large spatial scale correction. Since the lowest frequency variations in the channel timestreams are effectively eliminated by \(S=24\) row-chopping, subtracting a single average from each channel is essentially identical to subtracting a best-fit average to the reference pixels.

Figure 8: Average noise power on \(\theta=5-20\) arcminute scales as a function of the number of row samples \(r\) for both reference correction methods with \(S=24\) row-chopping, averaged over 5 simulated 32-channel images. The distributed method provides an improvement over repeated sampling of the default reference rows, at little cost to readout time from row shifts.

## 5 Model Predictions Compared With Measurement

### Simulation Output with Optimized Parameters

The described simulation generates noise image realizations that approximate the noise components present in dark images. Table 3 shows the simulation inputs for a standard SPHEREx exposure. Figure 9 shows an example of an image produced by the simulation and its corresponding spatial power spectrum.

\begin{table} \begin{tabular}{c c} \hline \hline \multicolumn{1}{c}{ Parameter} & Value \\ \hline Number of Frames \(N\) & 74 \\ Time Per Frame \(T_{int}\) (s) & 1.51 \\ \(S\) (rows) & 24 \\ \(r\) (rows) & 32 \\ \(c\) (pixels) & 4 \\ \hline \end{tabular} \end{table} Table 3: Simulation Input

Figure 9: (a) Example output 32-channel \(N=74\) noise realization generated by the described simulation and (b) corresponding spatial power spectrum, with phantom pixel correction but without reference pixel correction. Large channel offsets due to low-frequency \(1/f\) noise are visible prior to reference correction.

### Row-chopping and Reference Correction Model Predictions

While phantom pixel correction is largely negligible on simulated images that include all noise sources due to the already extremely low Video8 noise components, row-chopping and reference correction applied together have a very significant effect on correlated spatial noise. Using the selected \(r=32\) reference correction alongside \(S=24\) row-chopping, the average spatial noise on \(\theta=5-20\) arcminute scales is brought well below the original level on simulated image realizations (see Figure 10).

### End-to-end Measurements

We compared simulated images with dark images produced by a representative H2RG array and readout electronics. Measured dark images were collected at an array temperature of 40K. Row-chopping was implemented at \(S=24\) rows during readout. The detector has a small level of optical response during these tests due to a combination of multiplexer glow and residual light leaks. We used phantom correction at \(c=4\) to remove Video8 leakage and amplifier \(1/f\) noise.
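For reference, the spatial power spectra shown in the following figures can be computed from an image with a short azimuthal-averaging routine. This is a minimal sketch of the \(P(k)\) convention defined in Section 1; the binning and normalization are our own assumptions rather than those of the SPHEREx pipeline:

```python
import numpy as np

def spatial_power_spectrum(image, nbins=64):
    """Azimuthally averaged spatial power spectrum P(k), k in pix^-1 (k = 2*pi/period)."""
    ny, nx = image.shape
    power2d = np.abs(np.fft.fft2(image - image.mean()))**2 / (nx * ny)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)[:, None]
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)[None, :]
    k = np.hypot(kx, ky).ravel()
    power = power2d.ravel()
    bins = np.linspace(k[k > 0].min(), k.max(), nbins + 1)
    idx = np.digitize(k, bins)
    pk = np.array([power[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk
```

In this convention, bins with \(0.03<k<0.13\) pix\({}^{-1}\) correspond approximately to the \(\theta=5-20\) arcminute range quoted in the text, using \(\theta=2\pi(6.2^{\prime\prime})/k\).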
Figure 11: Side-by-side comparison of a measured (left) and simulated (right) \(N=74\) 32-channel image pair difference. Each image has \(S=24\) row-chopping and \(r=32\) distributed reference correction. Figure 10: Spatial power spectrum from a 32-channel \(N=74\) simulated image realization with \(S=24\) row-chopping and \(r=32\) reference correction. We obtain a significant improvement at low \(k\) compared with the uncorrected image. We compare images at \(N=74\) frames, the frame count planned for flight. Measured images were processed prior to reference correction by subtracting a second \(N=74\) exposure to remove small constant dark current offsets in the testbed, followed by iterative \(\sigma\)-clipping to 5 standard deviations to eliminate hot pixels and an additional factor of \(1/\sqrt{2}\) to account for the image difference. A side-by-side comparison of a measured and simulated image is shown in Figure 11. The simulated images model the measured images well at low \(N\) before deviating as \(N\) increases. At \(N=74\), the apparent white noise level of the measured images is approximately 30% higher than in the simulated image after dark current is added (see Figure 12). We attribute this departure to per-pixel \(1/f\) telegraph noise (see Section 3.3). The behavior at low \(k\) is otherwise well modeled by the simulation before applying reference correction. ### System Performance at Large Spatial Scales Implementing \(S=24\) on-board row-chopping and \(r=32\) reference correction with distributed reference row reads on the physical array and readout electronics results in a significant reduction of noise power on large spatial scales. For the specific application to SPHEREx, a relevant comparison can be made with the photon noise level caused by the bright Zodiacal light foreground, which contributes a Poissonian noise spectrum. While the SPHEREx instrument noise power requirement is not based on photon noise, the level of the ZL photon noise spectrum acts as a lower limit on possible instrument observed noise power. Figure 13 demonstrates the reduction in spatial power on \(\theta=5-20\) arcminute scales resulting from the described noise reduction methods. The noise reduction on measured images is significantly less than the reduction seen in simulated images (see Figures 10 and 13). Our testbed dark image measurements at various row-chopping \(S\) values also confirm the optimal \(S\) of a few dozen rows indicated by the simulation (see Figure 14). Measured images also show that increasing the number of reference row visits does not completely eliminate the excess channel offsets, with less noise reduction for higher \(r\) than simulated images would suggest. We note that the reference pixels do not completely follow the behavior of the optical pixels, even in the absence of photon noise in testbed images. Figure 13 includes an example of an alternative post-processing method for reference correction subtracting the median of all pixels (optical and reference) in each channel instead of the average of reference pixels, which we briefly mention here for the sake of comparison. Median correction in this way results in improved noise performance, with resulting spectra dropping below the SPHEREx band 1-4 photon noise levels at all presented spatial scales. However, this improved performance over the use of built-in reference pixel correction comes at the cost of loss of astrophysical information on real sky images, as each channel will necessarily include sky features. 
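A schematic version of the two per-channel offset corrections compared here (the clipped reference-pixel average and the median of all pixels in a channel) is given below; it is our own condensed illustration, not the SPHEREx pipeline code:

```python
import numpy as np

def sigma_clipped_mean(values, nsigma=5.0, n_iter=3):
    """Iteratively sigma-clip outliers (e.g. hot pixels) before averaging."""
    vals = np.asarray(values, dtype=float).ravel()
    for _ in range(n_iter):
        mu, sigma = vals.mean(), vals.std()
        vals = vals[np.abs(vals - mu) < nsigma * sigma]
    return vals.mean()

def correct_channel_offsets(image, ref_mask, n_channels=32, use_median=False):
    """Remove one offset per channel, from reference pixels or the channel median.

    ref_mask   : boolean array with the same shape as image, True for reference pixels
    use_median : subtract the median of all pixels in the channel instead of the
                 clipped mean of its reference pixels
    """
    out = image.astype(float).copy()
    width = image.shape[1] // n_channels
    for ch in range(n_channels):
        cols = slice(ch * width, (ch + 1) * width)
        if use_median:
            offset = np.median(out[:, cols])
        else:
            offset = sigma_clipped_mean(out[:, cols][ref_mask[:, cols]])
        out[:, cols] -= offset
    return out
```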
We further explore the use of median correction and the efficacy of SPHEREx reference pixel offset correction, as well as provide a more detailed overview of the testbed, in a future publication.

Figure 12: (a) Pixel standard deviation \(\sigma\) as a function of frame count \(N\), after \(\sigma\)-clipping, for measured and simulated images produced using a single Video8 (8 channels) with \(r=32\) ACN reference correction and (b) spatial power spectra for uncorrected 32-channel image pairs at \(N=74\), showing an offset between output spectra, particularly at high \(k\). For (a), measured channels are processed by subtracting a low-noise \(N=386\) ACN reference-corrected long exposure. Unmodeled per-pixel noise increases the deviation between measured and simulated channels at large \(N\), even after accounting for dark current variation (see Section 3.3).

Figure 13: Spatial power spectrum of a 32-channel \(N=74\) image pair difference obtained with a physical H2RG array and the readout electronics, before and after \(r=32\) reference correction with distributed rows. \(S=24\) row-chopping is applied during readout. The range of \(\theta=5-20\) arcminutes is shown in red, corresponding to \(0.03<k<0.13\) pix\({}^{-1}\). An estimate of the photon noise spectra due to the expected Zodiacal light incident photocurrents in the first 4 SPHEREx bands (spanning 0.75-3.8\(\mu\)m) for the SPHEREx deep fields at the ecliptic poles is also included. Reference correction and row-chopping provide a significant noise reduction at large spatial scales on measured images, with spatial noise brought near the level of the ZL photocurrent photon noise.

## 6 Conclusions

We developed new algorithms for improving large-scale noise properties in H2RG detector arrays for intensity mapping applications. A combination of reference correction and row-chopping substantially reduces the noise power present at large spatial scales. Row-chopping provides excellent low-frequency noise stability along the vertical direction of the array without appreciably increasing readout time. The custom Video8 preamplifier developed for array readout produces extremely low \(1/f\) noise power using a phantom pixel correction scheme. As these methods are implemented during array readout rather than post-processing of sky exposures, no significant additional wavelength-dependence or effect on point sources is expected. While this work focuses specifically on applications for SPHEREx and H2RG arrays, the described row-chopping technique could be applied to any row-by-row infrared imaging system where reduced noise power at large spatial scales is desired. The methods we developed reduce spatial readout noise to near the level of photon noise due to Zodiacal light photocurrent on large spatial scales for SPHEREx, with a \(\sim 1\) e/s photocurrent and a 112s integration time. The combined large-scale statistical noise seen by the instrument resulting from the described noise-reduction methods is expected to be well below predicted EOR signals (Dore et al. (2014), especially Figure 24). Noise stability at large scales is limited by per-pixel telegraph noise in the H2RG built-in reference pixels, which limits the effectiveness of repeatedly sampling reference pixels. Our simulated noise realizations generated from constituent noise components closely approximate measured images for low frame count \(N\), but deviate at high \(N\) (particularly at small spatial scales).
We plan to optimize the readout parameters in a future work using measured array data. ## 7 Acknowledgements A portion of the research described here was conducted at the Jet Propulsion Laboratory (JPL), California Institute of Technology. This article and analysis made use of the Astropy (Astropy Collaboration et al. (2022)), Numpy (Harris et al. (2020)), and Matplotlib (Hunter (2007)) Python packages. We acknowledge support from the SPHEREx project under a contract from the NASA/GODDARD Space Flight Center to the California Institute of Technology.
2310.20672
Evaluating the reconstruction of individual haloes in constrained cosmological simulations
Constrained cosmological simulations play an important role in modelling the local Universe, enabling investigation of the dark matter content of local structures and their formation histories. We introduce a method for determining the extent to which individual haloes are reliably reconstructed between constrained simulations, and apply it to the Constrained Simulations in BORG (CSiBORG) suite of $101$ high-resolution realisations across the posterior probability distribution of initial conditions from the Bayesian Origin Reconstruction from Galaxies (BORG) algorithm. The method is based on the overlap of the initial Lagrangian patch of a halo in one simulation with those in another, and therefore measures the degree to which the haloes' particles are initially coincident. By this metric we find consistent reconstructions of $M\gtrsim10^{14}~M_\odot / h$ haloes across the CSiBORG simulations, indicating that the constraints from the BORG algorithm are sufficient to pin down the masses, positions and peculiar velocities of clusters to high precision. The effect of the constraints tapers off towards lower mass however, and the halo spins and concentrations are largely unconstrained at all masses. We document the advantages of evaluating halo consistency in the initial conditions, describe how the method may be used to quantify our knowledge of the halo field given galaxy survey data analysed through the lens of probabilistic inference machines such as BORG, and describe applications to matched but unconstrained simulations.
Richard Stiskalek, Harry Desmond, Julien Devriendt, Adrianne Slyz
2023-10-31T17:35:29Z
http://arxiv.org/abs/2310.20672v1
# Evaluating the reconstruction of individual haloes in constrained cosmological simulations ###### Abstract Constrained cosmological simulations play an important role in modelling the local Universe, enabling investigation of the dark matter content of local structures and their formation histories. We introduce a method for determining the extent to which individual haloes are reliably reconstructed between constrained simulations, and apply it to the _Constrained Simulations in BORG_ (CS1BORG) suite of 101 high-resolution realisations across the posterior probability distribution of initial conditions from the _Bayesian Origin Reconstruction from Galaxies_ (BORG) algorithm. The method is based on the overlap of the initial Lagrangian patch of a halo in one simulation with those in another, and therefore measures the degree to which the haloes' particles are initially coincident. By this metric we find consistent reconstructions of \(M\gtrsim 10^{14}\ M_{\odot}/h\) haloes across the CS1BORG simulations, indicating that the constraints from the BORG algorithm are sufficient to pin down the masses, positions and peculiar velocities of clusters to high precision. The effect of the constraints tapers off towards lower mass however, and the halo spins and concentrations are largely unconstrained at all masses. We document the advantages of evaluating halo consistency in the initial conditions, describe how the method may be used to quantify our knowledge of the halo field given galaxy survey data analysed through the lens of probabilistic inference machines such as BORG, and describe applications to matched but unconstrained simulations. keywords: large-scale structure of the universe - dark matter - galaxies: halos - galaxies: statistics - software: simulations ## 1 Introduction The dynamics of the Universe are largely governed by dark matter (DM), constituting the majority of the matter content of the Universe. Over the past decades, cosmological simulations have emerged as the paramount instrument to elucidate its nonlinear dynamics and interplay with baryons (Wechsler and Tinker, 2018; Vogelsberger et al., 2020; Angulo and Hahn, 2022). Simulations typically employ initial conditions (ICs) based on a fixed power spectrum and random phases of the primordial matter field (Press and Schechter, 1974; Davis et al., 1985; Lacey and Cole, 1993; Eisenstein and Hut, 1998; Tinker et al., 2008). However, such ICs produce universes that resembles the real Universe only statistically, but cannot be linked object-by-object. The alternative is the "_constrained simulation_", in which not only the amplitudes but also the phases of the primordial density perturbations are encoded. The beginnings of this endeavour can be traced back to Hoffman and Ribak (1991), who laid down the foundation for simulating constrained realisations of Gaussian random fields. Local Universe constraints were subsequently derived from galaxy counts (Kolatt et al., 1996; Bistolas and Hoffman, 1998), peculiar velocity measurements (van de Weygaert and Hoffman, 2000; Klypin et al., 2003; Kravtsov et al., 2022) and galaxy groups (Wang et al., 2014), which, together with advances in simulation resolution, gravity modelling, IC generation or galaxy bias modelling, have led to local Universe simulation becoming a mature field. Deriving the ICs that generated the structure we see around us is an inference problem, and, as we have only one Universe, is best formulated in a Bayesian framework. 
This realisation led to the development of a Bayesian forward-modelling approach now known as the _Bayesian Origin Reconstruction from Galaxies_ (BORG) algorithm (Jasche and Wandelt, 2013; Jasche et al., 2015; Lavaux and Jasche, 2016; Jasche and Lavaux, 2019). BORG leverages an efficient Hamiltonian Markov Chain Monte Carlo algorithm to sample the initial matter field along with parameters associated with observational selection effects, galaxy bias and cosmology. The BORG posterior encapsulates all realisations of the local Universe that are compatible with the observational constraints used to derive them. The flagship application of BORG--and the one that we will use in this work--targeted the 2M++ galaxy catalogue, a whole sky redshift compilation of \(69,160\) galaxies (Lavaux and Jasche, 2016; Jasche and Lavaux, 2019) based on the Two-Micron-All-Sky Extended Source Catalog (Skrutskie et al., 2006). Work is in progress to augment the constraints with information from cosmic shear (Porqueres et al., 2021) and peculiar velocities (Prideaux-Ghee et al., 2023). In this work, we use the _Constrained Simulations in_ BORG (CSiBORG) suite of constrained cosmological simulations (Bartlett et al., 2021; Desmond et al., 2022), which are based on the BORG 2M++ ICs. Each CSiBORG box is a resimulation of ICs from a single BORG posterior sample inferred from the 2M++ galaxy catalogue (Lavaux and Jasche, 2016; Jasche and Lavaux, 2019), so that differences between realisations quantify the reconstruction uncertainty associated with our incomplete knowledge of the galaxy field and galaxy-halo connection. Hutt et al. (2022) used CSiBORG to study the effect of the reduced cosmic variance on the halo mass function (HMF) and clustering of haloes, and developed a method to assess the consistency of halo reconstruction from the final conditions. CSiBORG has also previously been used to create catalogues of local voids-as-antihalos (Desmond et al., 2022), and search for modified gravity (Bartlett et al., 2021) and dark matter annihilation and decay (Bartlett et al., 2022; Kostic et al., 2023). Beyond the CSiBORG suite, a focus of constrained simulations has been used to study the Local Group and its assembly history, e.g. within the CLUES (Gottloeber et al., 2010; Sorce et al., 2016) and SIRELIUS (Sawala et al., 2022; McAlpine et al., 2022) projects. They have also been used to study the connection between Sloan Digital Sky Survey galaxies and their haloes (Wang et al., 2016; Yang et al., 2018; Zhang et al., 2022; Xu et al., 2023), quantify the compatibility of the local Universe with \(\Lambda\)CDM (Stoypra et al., 2021, 2023) and model the local Universe in modified gravity (Naidoo et al., 2023). Due to their Bayesian setup, BORG and CSiBORG afford quantification of the effects of data and model uncertainties on the dark matter distribution produced in the simulations. An alternative method leverages the Wiener filter, which allows reconstruction of the mean field but cannot quantify the reconstruction uncertainty (Hoffman and Ribak, 1991; Zaroubi et al., 1995, 1999; Doumler et al., 2013, 2013). In a recent study, Valade et al. (2023) compared a Wiener filter- and Bayesian-based (HAMLET; Valade et al., 2022) approach to reconstruction from a mock peculiar velocity catalogue. They found that the two to agree for nearby structures (\(\lesssim 40\) Mpc/\(h\)) but that at greater distances the Bayesian method outperforms the Wiener filter reconstruction. 
The objective of our study is to investigate the robustness of the reconstructions of _individual_ haloes in constrained cosmological simulations. We develop a framework to assess whether a halo present in one box is also present in another, and hence what properties of the haloes are reliably reconstructed across the suite. This is achieved by means of a novel metric, the overlap of haloes' initial Lagrangian patches. While the method is agnostic as to the way in which the simulations of the suite are linked, a natural application (and the one on which we focus) is to suites that sample the IC posterior of a prior inference. We showcase the method by application to CSiBORG, where we quantify the consistency of halo reconstruction as a function of various halo properties. The significance of the results is established by contrast with Quijote, an unconstrained suite. Other applications of our method include matching haloes between DM-only simulations and their hydrodynamical counterparts, as well as simulations using different cosmological models, e.g. \(\Lambda\)CDM and modified gravity. While traditional methods for matching haloes between different runs often depend on a consistent DM particle ordering between the runs (e.g. Butsky et al., 2016; Desmond et al., 2017; Mitchell et al., 2018; Cataldi et al., 2021), ours does not. In both above-mentioned scenarios, our approach can quantify how either the hydrodynamics or the cosmology impacts the properties of individual haloes, instead of relying solely on population statistics conditioned on properties such as mass (e.g. Palero et al., 2023). The structure of the paper is as follows. In Section 2 we introduce the two sets of simulations employed in our work, in Section 3 we introduce the overlap metric and its interpretation, Section 4 contains our results and in Section 5 we discuss the results. Lastly, we conclude in Section 6. All logarithms in this work are base-10.

## 2 Simulated Data

In this section we describe the two sets of simulations that we use, CSiBORG and Quijote, and their halo catalogues.

Figure 1: The friends-of-friends (FOF) HMF in CSiBORG and Quijote. The thin background lines show the realisations of CSiBORG and Quijote and the bold lines show their mean. The CSiBORG HMF undershoots and overshoots the Quijote HMF at low and high masses, respectively. The lower limit is approximately given by Quijote haloes containing 100 particles.

### CSiBORG

The CSiBORG suite, first presented in Bartlett et al. (2021), consists of 101 DM-only \(N\)-body simulations in a 677.7 Mpc/\(h\) box centred on the Milky Way (MW), with ICs sampled from the BORG reconstruction of the 2M++ galaxy survey (Lavaux and Jasche, 2016). This reconstruction covers the same volume as each CSiBORG box, discretised into \(256^{3}\) cells for a spatial resolution of \(2.65\) Mpc/\(h\) (Jasche and Lavaux, 2019). The BORG density field is constrained within a spherical volume of radius \(\sim 155\) Mpc/\(h\) around the MW, where the 2M++ catalogue has high completeness. In CSiBORG, the ICs are propagated linearly to \(z=69\) and augmented with white noise on a \(2048^{3}\) grid in the central high-completeness region, corresponding to a spatial resolution of 0.33 Mpc/\(h\) and a DM particle mass of \(3.09\times 10^{9}\) M\({}_{\odot}\)/\(h\). To ensure a smooth transition to the remainder of the box, a buffer region of approximately 10 Mpc/\(h\) is added at the edge of the high-resolution region.
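The quoted resolution numbers follow directly from the box size, the grid and the mean matter density. The following back-of-the-envelope check is our own (it uses \(\Omega_{\rm m}=0.307\), listed with the other cosmological parameters in the next paragraph):

```python
# Back-of-the-envelope check of the CSiBORG resolution quoted above
# (our own estimate; Omega_m = 0.307 as given in the following paragraph).
rho_crit = 2.775e11          # critical density in (Msun/h) / (Mpc/h)^3
omega_m = 0.307
box_size = 677.7             # box side in Mpc/h
n_grid = 2048                # white-noise grid in the high-resolution region

cell_size = box_size / n_grid                      # ~0.33 Mpc/h
particle_mass = omega_m * rho_crit * cell_size**3  # ~3.1e9 Msun/h
print(f"cell = {cell_size:.2f} Mpc/h, particle mass = {particle_mass:.2e} Msun/h")
```

This reproduces the 0.33 Mpc/\(h\) cell size and the \(3.09\times 10^{9}\) M\({}_{\odot}\)/\(h\) particle mass quoted above.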
Both BORG and CSiBORG adopt the cosmological parameters from the Planck Collaboration et al. (2014) best fit results, including the Wilkinson Microwave Anisotropy Probe (WMAP) polarisation, high multipole moment, and baryonic acoustic oscillation data, except \(H_{0}\) which is taken from the 5-year WMAP results combined with Type Ia supernovae and baryonic acoustic oscillation data (Hinshaw et al., 2009) (\(T_{\rm CMB}=2.728\) K, \(\Omega_{\rm m}=0.307\), \(\Omega_{\Lambda}=0.693\), \(\Omega_{\rm b}=0.04825\), \(H_{0}=70.5\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\sigma_{8}=0.8288\), \(n=0.9611\)). The DM density field is evolved to \(z=0\) using the adaptive mesh refinement code RAMSES (Teyssier, 2002), where only the central high-resolution region is refined (reaching level 18 by \(z=0\) with a spatial resolution of 2.6 kpc/\(h\)). As we will be studying the reconstruction of individual objects, whose initial Lagrangian patches are constrained in BORG, it is illustrative to consider how many BORG cells constitute the Lagrangian patch of a halo. We find this to be approximately \(N\approx 7\)\(M_{\rm tot}/(10^{13}M_{\odot}/h)\). BORG constrains the average field value in each cell with physical size 2.65 Mpc/\(h\), and we do not a priori expect haloes with Lagrangian patches spanning only a few BORG cells to be consistently reconstructed in CSiBORG since such haloes likely vary strongly across the BORG posterior. However, haloes above \(\sim 10^{14}\) M\({}_{\odot}\)/\(h\) comprise initially \(\gtrsim 100\) cells and are therefore likely well constrained.

### Quijote

We compare the CSiBORG results to the publicly available Quijote simulations\({}^{1}\) (Villaescusa-Navarro et al., 2020). Quijote is a suite of unconstrained simulations evolved from \(z=127\) to \(z=0\) using the GADGET-III code (Springel et al., 2008). We use 10 realisations of the Quijote DM-only simulations with randomly drawn IC phases, each with a volume of \((1\ {\rm Gpc}/h)^{3}\) and a particle mass of \(8.72\times 10^{10}\) M\({}_{\odot}\)/\(h\) in a fiducial cosmology: \(\Omega_{\rm m}=0.3175\), \(\Omega_{\Lambda}=0.6825\), \(\Omega_{\rm b}=0.049\), \(H_{0}=67.11\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\sigma_{8}=0.834\) and \(n=0.9624\). Besides the constraints, the most significant difference with CSiBORG is the volume, so for an approximately fair comparison when calculating any extrinsic quantity we mimic the CSiBORG high-resolution region by splitting each Quijote box into 27 non-overlapping spherical sub-volumes of radius 155 Mpc/\(h\) centred at \(n\times 155\) Mpc/\(h\) for \(n=1,3,5\) along each axis. Footnote 1: [https://quijote-simulations.readthedocs.io/](https://quijote-simulations.readthedocs.io/)

### Halo catalogues

We use the friends-of-friends halo finder (FOF; Davis et al., 1985) in both CSiBORG and Quijote, with a linking length parameter of \(b=0.2\). FOF connects particles within a distance \(b\) times the mean inter-particle separation. FOF can create artificially large structures by connecting extraneous particles to haloes along nearby filaments of the density field, particularly for merging haloes at \(z=0\) (Eisenstein & Hut, 1998). Warren et al. (2006) and Lukic et al. (2009) proposed corrections to this based on particle number and halo concentration respectively, but as we do not require high-precision halo masses we do not apply them here.
To reduce numerical resolution errors of recovered haloes and their properties, authors have suggested that haloes must contain at least \(50-100\) particles (Springel et al., 2008; Onions et al., 2012; Knebe et al., 2013; van den Bosch & Jiang, 2016; Griffen et al., 2016), though Diemer & Kravtsov (2015) suggested stricter criteria for measuring e.g. concentration to avoid having only a few particles in the inner parts of the halo, especially if the inner region spans only a few force resolution lengths. Recently, van den Bosch & Ogiya (2018) proposed a more stringent criterion for determining the numerical convergence of infalling subhaloes. However, in this work we are not concerned with substructure or halo profiles, and thus adopt the simpler and less restrictive criterion of 100 particles for all haloes, corresponding to a minimum halo mass of \(3.09\times 10^{11}\) M\({}_{\odot}\)/\(h\) and \(8.72\times 10^{12}\) M\({}_{\odot}\)/\(h\) for CSiBORG and Quijote, respectively. In CSiBORG we identify haloes only inside the high-resolution region. As CSiBORG has a mass-resolution nearly two orders of magnitude better than Quijote, when comparing the two simulations we only consider haloes above the mass-resolution of Quijote. CSiBORG and Quijote assume different cosmological parameters, yielding a difference in both the large-scale structure and halo population. Hutt et al. (2022) study how this affects the HMF, finding insignificant disparities due to the different cosmologies. They do however quantify the reduced spread of the HMFs of individual CSiBORG realisations relative to Quijote due to the suppression of cosmic variance by the IC constraints. We show the HMFs in Fig. 1. It is found that the CSiBORG and Quijote HMFs have some systematic disagreement, with the former predicting higher bright-end abundances (see also McAlpine et al., 2022; Desmond et al., 2022; Hutt et al., 2022). However, this becomes statistically significant only above \(M\sim 10^{15}M_{\odot}\)/\(h\). On the other hand, CSiBORG systematically undershoots the Quijote HMF below \(\sim 10^{14.2}\)\(M_{\odot}\)/\(h\). These discrepancies have recently been investigated by Stopyra et al. (2023) who showed that replacing the 10-step particle mesh solver used in the BORG forward model with a 20-step COLA solver (Tassev et al., 2013) corrects for them. ## 3 Methodology We aim to develop a metric to quantify the extent to which halos are reliably reconstructed between boxes, e.g. across the simulations of constrained suites. We do so by evaluating the similarity of proto-haloes at high redshift, measured by their initial Lagrangian patches. This is a priori a sensible approach as it is the initial conditions that are constrained: focusing on the initial snapshot facilitates the establishment of a more causally coherent framework, circumventing reliance on interpretations deduced purely from the final conditions as in e.g. Hutt et al. (2022). Most of our work is intended to interpret and show it to be useful a posteriori as well. Similar approaches are used to associate haloes between DM-only and hydrodynamical simulations sharing the same ICs (e.g. Butsky et al., 2016; Desmond et al., 2017; Mitchell et al., 2018; Cataldi et al., 2021). For instance, Desmond et al. 
leverage the ability to match DM particle IDs between DM-only and hydrodynamical runs to match haloes as follows: if halo \(a\) from the DM-only run shares the most particles with halo \(b\) from the hydrodynamical run, and conversely \(b\) shares the most particles with \(a\), they are identified as a match. This bears a strong resemblance to the method we develop because the particle IDs are assigned based on their position in the initial snapshot. In CSiBORG the particle IDs are not consistent between realisations, so this method cannot be used directly.

We cross-correlate two IC realisations as follows. We denote the first simulation as the "reference" and the second as the "crossing" simulation and calculate the overlap of each halo in the reference simulation with all haloes in the crossing simulation. We identify FOF haloes in the final snapshot of both the reference and crossing IC realisations (yielding the halo sets \(\mathcal{A}\) and \(\mathcal{B}\), respectively) and trace their constituent particles back to the initial snapshot. To calculate the intersecting mass of a halo \(a\in\mathcal{A}\) and \(b\in\mathcal{B}\), we use the nearest grid point (NGP) scheme to assign the halo particles to a \(2048^{3}\) grid in the initial snapshot, matching the initial refinement of the high-resolution region. We denote the mass assigned to a single cell \(m_{a}\) and \(m_{b}\), respectively, for the two haloes. On the same grid we also calculate the background mass field of all particles assigned to a halo in the final snapshot in the respective simulation, \(\widehat{m}_{\mathcal{A}}\) and \(\widehat{m}_{\mathcal{B}}\). In the NGP scheme, a particle located at \(x_{i}\in[0,1]\) along the \(i^{\rm th}\) axis is assigned to the \(\lfloor x_{i}N_{\rm cells}\rfloor^{\rm th}\) cell, where \(N_{\rm cells}\) is the total number of cells along an axis. We then apply a Gaussian smoothing to the NGP field using a kernel width of one cell and define the intersecting mass of haloes \(a\) and \(b\) as \[X_{ab}\equiv\sum_{n}\frac{2m_{a}^{(n)}m_{b}^{(n)}}{\widehat{m}_{\mathcal{A}}^{(n)}+\widehat{m}_{\mathcal{B}}^{(n)}}, \tag{1}\] where \(n=1,\ldots,2048^{3}\) is the grid index. This definition accounts for the fact that particles of more than one halo may contribute to a single cell and may be motivated as \[X_{ab}=\sum_{n}\left[\left(\frac{m_{b}^{(n)}}{\widehat{m}_{\mathcal{A}}^{(n)}+\widehat{m}_{\mathcal{B}}^{(n)}}\right)m_{a}^{(n)}+\left(\frac{m_{a}^{(n)}}{\widehat{m}_{\mathcal{A}}^{(n)}+\widehat{m}_{\mathcal{B}}^{(n)}}\right)m_{b}^{(n)}\right], \tag{2}\] i.e. the contribution of \(a\) is weighted by the mass of \(b\) in that cell normalised by the total mass in that cell, and vice versa. Next we define the IC overlap between the haloes \(a\) and \(b\) as \[\mathcal{O}_{ab}=\frac{X_{ab}}{M_{a}+M_{b}-X_{ab}}, \tag{3}\] where \(M_{a}=\sum_{n}m_{a}^{(n)}\) is the total particle mass of the \(a^{\rm th}\) halo, and similarly for the \(b^{\rm th}\) halo. This ensures that \(\mathcal{O}_{ab}\in[0,1]\), and it can be interpreted simply as the mass of the \(a^{\rm th}\) halo that overlaps with the \(b^{\rm th}\) halo normalised by their total mass. If the \(a^{\rm th}\) halo is entirely enclosed within the \(b^{\rm th}\) halo, and assuming one particle per grid cell without any additional smoothing, then the overlap fraction can be expressed as \(\mathcal{O}_{ab}\sim M_{a}/M_{b}\), i.e. the ratio of their masses. The overlap measures the similarity of two haloes in their Lagrangian patches.
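A minimal sketch of Eqs. (1) and (3) is given below. It assumes each halo's Lagrangian patch is stored as unique flat cell indices with the (smoothed) NGP mass per cell as NumPy arrays, and that `bg_A`/`bg_B` are dictionaries mapping a cell index to the background fields \(\widehat{m}_{\mathcal{A}}\), \(\widehat{m}_{\mathcal{B}}\); names and data layout are illustrative, not the actual pipeline:

```python
import numpy as np


def intersecting_mass(cells_a, m_a, cells_b, m_b, bg_A, bg_B):
    """Eq. (1): X_ab = sum_n 2 m_a m_b / (mhat_A + mhat_B) over shared cells."""
    shared, ia, ib = np.intersect1d(cells_a, cells_b, return_indices=True)
    if shared.size == 0:
        return 0.0
    denom = np.array([bg_A[c] + bg_B[c] for c in shared])
    return float(np.sum(2.0 * m_a[ia] * m_b[ib] / denom))


def overlap(cells_a, m_a, cells_b, m_b, bg_A, bg_B):
    """Eq. (3): O_ab = X_ab / (M_a + M_b - X_ab), which lies in [0, 1]."""
    x = intersecting_mass(cells_a, m_a, cells_b, m_b, bg_A, bg_B)
    return x / (m_a.sum() + m_b.sum() - x)
```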
However, a halo may overlap with many smaller haloes, producing a large set of small overlaps with none of the overlapping haloes being similar to the reference halo in mass or size. Therefore, we also calculate the maximum overlap that a halo \(a\) has with any halo in the crossing simulation, \(\max_{b\in\mathcal{B}}\mathcal{O}_{ab}\). If this quantity is sufficiently high across all crossing simulations, it implies that the halo is being consistently reconstructed across the IC realisations. While the overlap between a pair of haloes is symmetric, if a halo \(a_{0}\in\mathcal{A}\) has a maximum overlap \(\mathcal{O}_{a_{0}b_{0}}=\max_{b\in\mathcal{B}}\mathcal{O}_{a_{0}b}\) with some halo \(b_{0}\in\mathcal{B}\), this does not imply that \(b_{0}\) also has a maximum overlap with \(a_{0}\), since its maximum overlap is defined as \(\max_{a\in\mathcal{A}}\mathcal{O}_{b_{0}a}\) and the two sets of haloes over which the overlaps are maximised are not the same.

The properties of the overlap \(\mathcal{O}_{ab}\) lend it a natural interpretation as the probability of a match between haloes in two simulations. That a reference halo can have a non-zero overlap with multiple haloes that themselves may overlap in their initial Lagrangian patches is already accounted for in the definition of the overlap through the denominator of the intersecting mass in Eq. (1), which implies that the overlap of a pair of haloes is modified by the presence of other overlapping haloes. Therefore, the probability of a reference halo \(a\) being matched to _some_ halo in the crossing simulation is simply the sum of the overlaps with all haloes in the crossing simulation: \[P(a\in\mathcal{A}\text{ matched in }\mathcal{B})=\sum_{b\in\mathcal{B}}\mathcal{O}_{ab}, \tag{4}\] and the probability of not being matched to any halo in the crossing simulation is \[P(a\in\mathcal{A}\text{ not matched in }\mathcal{B})=1-\sum_{b\in\mathcal{B}}\mathcal{O}_{ab}. \tag{5}\] The definitions of the intersecting mass and overlap in Eqs. (1) and (3) ensure not only that the overlap between a pair of haloes is always \(\leqslant 1\), but also that the sum of overlaps with a reference halo such as in Eq. (4) is always \(\leqslant 1\). In the denominator of Eq. (1), the intersecting mass is weighted by the fraction of a cell occupied by the pair of haloes, taking into account contributions from all haloes in both simulations. This ensures that the intersecting mass is not counted multiple times when summing over crossing haloes.

We calculate the most likely value of a matched halo property \(\mathcal{H}\) (for instance, mass) based on haloes that overlap with the reference halo. For each crossing simulation indexed \(i\) we select \(\mathcal{H}_{i}\) of the halo with the highest overlap \(w_{i}\) with the reference halo. We then calculate the most likely matched property as the mode of the distribution of \(\mathcal{H}_{i}\) weighted by \(w_{i}\). To identify it we employ the shrinking sphere method, commonly utilised for finding the centre of DM haloes (Power et al., 2003). We iteratively shrink a search radius around the weighted average of the enclosed \(\mathcal{H}_{i}\) samples until fewer than 5 samples are enclosed, at which point we take their average as the mode of the distribution.

## 4 Results

We showcase the framework on the CSiBORG suite. We first calculate the overlaps of halo initial Lagrangian patches defined in Eq. (3).
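Since the matched-property estimates presented below rely on the weighted-mode ("shrinking interval") procedure described at the end of Section 3, a minimal sketch may be helpful; it is illustrative only, and the shrink factor and fallback behaviour are assumptions:

```python
import numpy as np


def weighted_mode(values, weights, min_samples=5, shrink=0.9):
    """1D analogue of the shrinking-sphere estimator: repeatedly shrink an
    interval around the weighted mean of the enclosed samples until fewer
    than `min_samples` remain, then return their weighted average."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mask = np.ones(values.size, dtype=bool)
    radius = values.max() - values.min()

    while mask.sum() >= min_samples and radius > 0:
        centre = np.average(values[mask], weights=weights[mask])
        radius *= shrink
        mask = np.abs(values - centre) < radius

    if not mask.any():               # fall back to all samples if we overshoot
        mask[:] = True
    return np.average(values[mask], weights=weights[mask])
```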
We then calculate the halo mass, spin and concentration of overlapping haloes and compare them to the reference halo. At each step we compare our findings to the unconstrained Quijote suite to assess the significance of the results. We calculate the overlaps for haloes whose total mass exceeds \(10^{13.25}\)\(M_{\odot}/h\) and for pairs of haloes closer in mass than 2 dex. This is simply to reduce computational expense, since halo pairs with larger differences in mass have negligible overlaps (\(\lesssim 1\) per cent).

### Overlaps

We begin by using a single reference simulation and assessing how consistently its haloes are present in the remaining IC realisations. We calculate the _non-zero_ overlaps between its haloes and those of another simulation following the approach of Section 3. This yields for each halo in the reference simulation a set of overlaps (which could potentially be empty), whose sum never exceeds 1 because of the denominator of Eq. (1). Retaining the same reference, the process is reiterated across the remaining IC realisations.

#### 4.1.1 Pair overlap

We present the overlaps between haloes in Fig. 2a where, for every reference halo, we concatenate all sets of overlaps with the remaining IC realisations. As a comparison, in Fig. 2a we also show the overlaps in Quijote where we again use a single reference simulation. The hex-bin shows the CSiBORG results and the lines indicate binned medians and \(2\sigma\) spread. It is evident that in both CSiBORG and Quijote the most likely overlap value is close to 0, which follows from the predominance of non-matched haloes in both simulation suites. However, a notable distinction emerges in CSiBORG: there is a pronounced tail of high overlaps, especially evident for haloes above \(10^{14}\)\(M_{\odot}/h\). These massive haloes have counterparts that occupy the same part of the initial snapshot in the majority of the IC realisations. This is not the case in Quijote, where in fact the high-overlap tail becomes less significant for more massive haloes due to their rarity.

#### 4.1.2 Maximum pair overlap

Next, we keep the same reference simulation, but instead calculate the maximum overlap of a reference halo in each other IC realisation. In Fig. 2b we present the median of the maximum overlaps for each reference halo across the remaining IC realisations. Unlike before, in CSiBORG the mean trend of the maximum overlaps as a function of mass is no longer close to 0. On the other hand, in Quijote the mean trend of the maximum overlaps remains close to 0. This indicates that there are haloes in any pair of simulations that match well in CSiBORG, especially at high mass, while there are not in Quijote. In Fig. 3 we show the fraction of CSiBORG haloes in a reference simulation that have a median maximum overlap with other simulations over 1, 2, \(3\sigma\)-level thresholds calculated in Quijote (84.1, 97.7, 99.9 per cent).

Figure 3: Fraction of CSiBORG reference haloes as a function of \(\log M_{\mathrm{tot}}\) with median maximum overlap with the remaining IC realisations exceeding the \(1,\,2,\,3\sigma\) thresholds in Quijote (84.1, 97.7, 99.9 per cent). The bands show the \(1\sigma\) spread among the CSiBORG realisations. Even at the lower mass threshold a large proportion of CSiBORG haloes has maximum overlaps more significant than in Quijote.

Figure 2: Comparison of Lagrangian patch overlaps in CSiBORG and Quijote simulations. In both cases, a single reference simulation is assumed and for each halo a list of overlaps is calculated per crossing simulation. The \(x\)-axis denotes the mass of the reference halo, the hex bins illustrate CSiBORG overlaps, with the lines delineating overlaps in CSiBORG and Quijote, each accompanied by \(1\sigma\) error bars showing the spread among points. The differentiation in overlap behaviour between the two simulations becomes evident at high mass, where the constraints of the CSiBORG suite become significant.
This figure again highlights that more massive haloes are more clearly constrained and that already at \(10^{14}M_{\odot}/h\) about 75 per cent of CSiBORG haloes have median maximum overlaps more significant than the \(1\sigma\) level in Quijote. In Fig. 4, we show in CSiBORG for every halo from a single reference simulation the number of simulations in which a reference halo \(a\) and a halo \(b\) from a crossing simulation _both_ have maximum overlaps with each other (\(f_{\rm sym}\)). We plot \(f_{\rm sym}\) against both the reference halo mass and the average maximum overlap of a halo. On average, haloes around \(\sim 10^{14}M_{\odot}/h\) have symmetric maximum overlaps in 50 per cent of the IC realisations. In contrast, haloes around \(\sim 10^{15}M_{\odot}/h\) have symmetric maximum overlaps in nearly all IC realisations. On the other hand, the relationship between \(f_{\rm sym}\) and the average maximum overlap has less scatter than its relationship with mass. For example, haloes with an average maximum overlap of \(\sim 0.2\) have symmetric maximum overlaps in approximately 50 per cent of the realisations.

#### 4.1.3 Probability of being matched

Lastly, we calculate the probability of a reference halo having a counterpart in _any_ other IC realisation, as given by Eq. (4). In Fig. 5 we plot the mean and standard deviation of this probability, averaged over the remaining IC realisations. If a halo has no potential match in another simulation, then the sum of its overlaps in that simulation is simply 0. Mirroring previous results, there is a clear distinction between CSiBORG and Quijote above \(\sim 10^{14}\ M_{\odot}/h\), in that the majority of CSiBORG haloes are matched while the majority of Quijote haloes are not. Below this mass, the distinction weakens though it remains significant. The uncertainty on the probability of a match shown in Fig. 5b peaks at \(10^{14}\ M_{\odot}/h\). Below this mass, and particularly near the lower mass threshold of \(10^{13.25}M_{\odot}/h\), there are only haloes with a mass above this threshold to overlap with, and not below it, which underestimates the uncertainty. This is complementary to the results of Fig. 2b, which was instead sensitive to the maximum overlap a reference halo has with another simulation. On the other hand, in Fig. 5 a high probability of a match can also be due to adding up many small overlaps. In both CSiBORG and Quijote there is a trend that more massive haloes have a higher sum of overlaps, although this is more pronounced in CSiBORG. However, this is not surprising since the most massive haloes have initial Lagrangian patches of \(\sim 10\ {\rm Mpc}/h\) and thus naturally overlap with more objects. For comparison, see the trend of Fig. 2b where the mean trend of the maximum overlaps in Quijote never rises--while the large haloes have overlaps with many small ones such that the sum of overlaps may increase, the maximum overlap a given reference halo has with any one halo in a crossing simulation does not increase with mass.
In Fig. 6 we show in CSiBORG the relation between the probability of being matched and the maximum overlap averaged over the IC realisations. The two quantities are strongly correlated, though the most massive haloes deviate from the \(1-1\) line, as smaller overlaps start to contribute more significantly to the sum over overlaps. Nevertheless, even for the most massive haloes this shows that the sum over overlaps is typically dominated by one object that has a large overlap with the reference halo.

### Halo properties of overlapping haloes

Now that we have explored the properties of the overlap statistic and its behaviour across the CSiBORG suite, we turn our attention to investigating the properties of the haloes that it matches. This includes their separation (Section 4.2.1), mass (Section 4.2.2), peculiar velocity (Section 4.2.3) and concentration and spin (Section 4.2.4).

#### 4.2.1 Final snapshot separation of overlapping haloes

For a reference halo that has a significant overlap with a halo in another IC realisation, our anticipation is that their proximity in the ICs will make the pair unusually close in the final snapshot as well. Because there are nearby haloes between boxes even in the absence of any IC constraints, we juxtapose our results with those from the unconstrained Quijote suite. Any suppression in distance observed in CSiBORG relative to Quijote can then be ascribed to the constraints. We show the results in Fig. 7. We cross-match two IC realisations and compute \(\langle\Delta R\rangle\), the overlap-weighted mean separation of haloes in the final snapshot averaged over all IC realisations. In CSiBORG haloes are consistently more likely to remain close in the final snapshot if they originate from the same Lagrangian patch. The lowest mass matched objects in CSiBORG show a variation of \(\sim 10\) Mpc/\(h\). For individual reference haloes and their maximum-overlap matches from the remaining IC realisations we find, as expected, a strong negative correlation between the overlap value and their \(z=0\) separation.

Figure 4: Number of simulations in which a reference halo \(a\) and a halo \(b\) from a crossing simulation _both_ have maximum overlaps with each other, \(f_{\rm sym}\), shown for every halo from a single reference simulation in CSiBORG. The red line is an arithmetic average in a bin along with \(1\sigma\) spread. The highest mass haloes have symmetric maximum overlaps in nearly all IC realisations.

#### 4.2.2 Mass of matched haloes

The next quantity that we look at is the most probable mass of the matched haloes. We calculate this following the approach outlined at the end of Section 3.

Figure 5: Probability of a reference halo having a match in the remaining IC realisations calculated as the sum of overlaps. The hex bins show the CSiBORG data and the lines are the mean trends in CSiBORG and Quijote, with \(1\sigma\) error bars characterising the spread among points. The probability of a match is typically significantly higher in CSiBORG except for the lower mass threshold where the constraints weaken. The truncation at low masses in Fig. 5a, and reduction in uncertainty towards very low masses in Fig. 5b, are due to the lower mass threshold of \(10^{13.25}M_{\odot}/h\) when matching haloes.

Figure 6: Relation between the mean summed overlap and mean maximum overlap of reference haloes, both of which are averaged over the remaining CSiBORG IC realisations.
Although the most massive haloes deviate from the \(1-1\) line due to contributions of smaller overlaps, it is clear that the summed overlaps are typically dominated by the overlap to a single object, making the match relatively unambiguous.

Figure 7: The overlap-weighted mean separation of haloes in the final snapshot \(\Delta R\) of CSiBORG realisation 744 averaged over all remaining IC realisations. The hex bins are the CSiBORG overlaps and the lines are the CSiBORG and Quijote mean trends, respectively, with \(1\sigma\) spread among points. Although the halo positions are constrained across the entire mass range when compared to Quijote, there is a significant variation in the positions of the lower mass objects.

From each crossing simulation we find the mass of the maximally overlapping halo and construct a weighted histogram, where the weights are the maximum overlaps. We then define the most probable mass as the mode of this distribution, with its uncertainty given by the square root of the overlap-weighted average square of residuals around the mode. We plot an example of this in the left panel of Fig. 8 for the most massive halo in a single CSiBORG realisation, finding the most likely mass to be within 0.2 dex of the reference halo mass. This is in part by construction, since the overlap itself is preferentially higher for haloes that have a similar mass. However, while the overlap is preferentially higher for haloes of a similar mass, it does not guarantee that the objects are a "good" match. If that were the case, then we would find that even in Quijote the mass of matched haloes is on average close to the mass of the reference halo, which it is not. By analysing histograms like Fig. 8, we calculate and show in Fig. 9 the most likely mass of all reference haloes in CSiBORG above \(10^{13.25}\ M_{\odot}/h\). In this figure, we show three panels: the comparison between the reference and most likely halo mass, the uncertainty of the most likely mass as a function of the reference mass, and the ratio of the most likely mass to the reference mass as a function of the median probability of being matched. The reference and most likely matched halo masses agree well for the most massive objects, with deviations from the \(1-1\) line increasing towards lower mass. However, these objects also have a consistently smaller probability of being matched (right panel of Fig. 9) and therefore this is not unexpected. Below the scale of \(\sim 10^{14}\ M_{\odot}/h\) the matching is less reliable: we only cross-match haloes with mass above \(10^{13.25}\ M_{\odot}/h\) since below this scale the IC constraints weaken as indicated by Fig. 2b and Fig. 5a. Therefore, a reference halo near the threshold will only have potential matches that are above this threshold, thus biasing the matching procedure.

#### 4.2.3 Peculiar velocity alignment

In Fig. 10, we show the alignment and ratio of magnitudes of peculiar velocities in the final snapshot of CSiBORG. We define the alignment angle \(\theta\) between the peculiar velocity vectors of haloes \(a\) and \(b\) with peculiar velocities \(\mathbf{v}_{a}\) and \(\mathbf{v}_{b}\), respectively, as: \[\cos\theta=\frac{\mathbf{v}_{a}\cdot\mathbf{v}_{b}}{|\mathbf{v}_{a}|\ |\mathbf{v}_{b}|}. \tag{6}\] For each reference halo we calculate the alignment of its peculiar velocity vector with the corresponding vector of the highest overlapping halo from another IC realisation.
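A minimal sketch of this per-pair calculation (illustrative only; the overlap weighting and the averaging over realisations are handled separately):

```python
import numpy as np


def velocity_alignment(v_a, v_b):
    """Eq. (6): cosine of the angle between two peculiar velocity vectors,
    together with the ratio of their magnitudes."""
    v_a, v_b = np.asarray(v_a, dtype=float), np.asarray(v_b, dtype=float)
    norm_a, norm_b = np.linalg.norm(v_a), np.linalg.norm(v_b)
    cos_theta = np.dot(v_a, v_b) / (norm_a * norm_b)
    return cos_theta, norm_a / norm_b
```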
We find the alignment of a reference halo and a single maximum overlap crossing halo to be strongly correlated with the magnitude of this overlap across the IC realisations. The alignment and ratio of magnitudes plotted in Fig. 10 are averaged over the crossing IC realisations. While most matched cluster-mass haloes show strong alignment in the final snapshot, there are exceptions. These often coincide with a lower mean maximum overlap. Nevertheless, even plotting the alignment as a function of the mean maximum overlap there remains a significant scatter. We also compare the magnitudes of peculiar velocities, finding their ratios to be on average \(\sim 1\), however with significantly larger scatter towards smaller overlaps.

#### 4.2.4 Spin and concentration constraint

Lastly, we turn our attention to the spin and concentration of these haloes. We use the Bullock spin definition \[\lambda_{200c}=\frac{J_{200c}}{\sqrt{2}M_{200c}V_{200c}R_{200c}}, \tag{7}\] where \(J_{200c}\) is the angular momentum magnitude of particles within \(R_{200c}\) and \(V_{200c}^{2}=GM_{200c}/R_{200c}\) (Bullock et al., 2001). We define the concentration as the ratio of the virial radius to the scale radius of the Navarro-Frenk-White (NFW) profile, \(c=R_{200c}/R_{*}\) (Navarro et al., 1996). We calculate the most likely spin and concentration of the matched haloes following the approach outlined in Section 3.

In the middle panel of Fig. 8 we show the comparison of the spins of overlapping haloes to the spin of a reference halo, which we take to be the most massive halo in one IC realisation of CSiBORG. However, unlike previously, we do not find any preference for the spin of overlapping haloes to be similar to the reference halo spin, regardless of whether we weight the matched haloes by overlap or not. The matched distribution in Fig. 8 has a secondary peak near the reference spin; however, it is not statistically significant. In fact, the distribution of matched spins is in good agreement with the simulation average, which is approximately a mass-independent Gaussian distribution in \(\log\lambda_{200c}\). We find similar conclusions to hold for all haloes, regardless of their mass.

Next, we investigate the concentration of the matched haloes. In the right panel of Fig. 8 we show the comparison of the concentrations of overlapping haloes to the concentration of a reference halo, which we again take to be the most massive halo in one IC realisation of CSiBORG. We find that in this particular example the mode of the weighted distribution agrees well with the reference halo concentration, but the width of this distribution is still similar to the expectation from the simulated mass-concentration relation. To delve deeper, we assess the matched concentrations for all haloes with a mass exceeding \(10^{15}\ M_{\odot}/h\), which we previously identified as being consistently reconstructed. We find no discernible correlation between the most likely and reference concentrations. We also find that the concentration of the highest overlap halo from each IC realisation is within 10 (30) per cent of the reference value in only 20 (50) per cent of realisations, without any clear dependence on the reference halo mass. The agreement seen in Fig. 8 is thus either coincidental or specific to the halo in question. However, even in such instances, the significance is questionable as there is no notable improvement over the mass-concentration relation.
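For reference, a minimal sketch of the spin measurement in Eq. (7). It is illustrative only: it assumes particle positions in Mpc/\(h\), velocities in km s\(^{-1}\), masses in M\({}_{\odot}\)/\(h\), a pre-computed halo centre, bulk velocity and \(R_{200c}\), and it ignores periodic wrapping:

```python
import numpy as np

G = 4.30092e-9  # gravitational constant in Mpc (km/s)^2 / Msun


def bullock_spin(pos, vel, mass, centre, vcentre, r200c):
    """Bullock et al. (2001) spin parameter, Eq. (7), from the particles
    enclosed within R_200c."""
    dx = pos - centre
    inside = np.linalg.norm(dx, axis=1) < r200c
    dv = vel[inside] - vcentre
    m = mass[inside]

    m200c = m.sum()
    # magnitude of the total angular momentum of the enclosed particles
    j200c = np.linalg.norm(np.sum(m[:, None] * np.cross(dx[inside], dv), axis=0))
    v200c = np.sqrt(G * m200c / r200c)  # circular velocity at R_200c
    return j200c / (np.sqrt(2) * m200c * v200c * r200c)
```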
## 5 Discussion ### Interpretation of the results Constrained simulations enable an object-by-object comparison of the local Universe with theory. If a direct, or at least probabilistic, correspondence between a simulated halo and an observed structure can be established, constrained simulations allow inference of the properties and assembly histories of nearby objects and hence pose rigorous tests for galaxy formation models and cosmology. In this work, we outline and test a method for assessing whether a halo is robustly reconstructed across a set of simulations, differing for example in ICs drawn from the posterior of a preceding inference. This allows us to compare the properties of "same" halos across the simulations. We illustrate our framework by applying it to the CSiBORG suite of 101 constrained simulations of the local \(\sim 155\) Mpc/\(h\) Universe with ICs on a grid of spacing \(2.65\) Mpc/\(h\) derived from the BORG inference of the 2M++ catalogue. We find that cluster-mass halos (\(M\gtrsim 10^{14}\)\(M_{\odot}/h\)) are consistently reconstructed across the suite and originate from the same Lagrangian region. Halos of this mass are distributed over \(\sim 70\) cells in the initial snapshot at \(z=69\). Assuming that the comoving cell density in the initial snapshot is \(\Omega_{\rm m}\rho_{\rm c}\), where \(\rho_{\rm c}\) is the current critical density of the Universe, we can approximate the spatial resolution, \(L\), required to distribute a mass \(M\) over \(N\) cells as: \[L\approx 2.6\ {\rm Mpc}/h\left(\frac{M}{10^{14}\ M_{\odot}/h}\frac{70}{N} \right)^{1/3}. \tag{8}\] Assuming the criterion of 70 resolution elements across the initial Lagrangian patch to be universal, this suggests that for haloes of mass \(10^{15}\), \(10^{14}\), \(10^{13}\)\(M_{\odot}/h\) to be robustly re Figure 8: Distributions of most likely matched haloes’ properties (\(\log M_{\rm tot}\), \(\log\lambda_{200c}\), \(c\)) with the most massive halo in one IC realisation (7444) of CSiBORG shown as the blue histogram. The green “control” histogram is the spin and concentration of haloes with a similar mass from the remaining IC realisations: from each we take 10 haloes closest in mass to the reference halo. The blue and green vertical lines are the mode of the “matched” and “control” histograms, respectively, and the corresponding shaded bands are \(1\sigma\) uncertainties. The red line is the reference halo property. A comparison to the control distribution reveals that neither the spin or concentration are constrained. The matched concentration of this halo has a sharp peak near the reference concentration, however a similar trend is _not_ observed for other massive haloes. Figure 9: Most likely mass of haloes matched to a single realisation of CSiBORG averaged over all crossing simulations and defined as the mode of a histogram such as Fig. 8. The _left_ panel shows the reference-to-expected halo mass relation, the _middle_ panel shows the uncertainty of the expected mass and the _right_ panel shows the ratio of the expected mass to the reference mass as a function of probability of being matched defined in Eq. (4). The agreement between the reference and matched mass is strongly correlated with the matching probability. constructed one must use IC constraints at the scale 5.5, 2.6 and 1.2 Mpc/\(h\), respectively. 
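To make Eq. (8) concrete, a small illustrative evaluation (assuming the CSiBORG \(\Omega_{\rm m}\) and a present-day critical density of \(2.775\times 10^{11}\,(M_{\odot}/h)/({\rm Mpc}/h)^{3}\)) reproduces the quoted scales:

```python
RHO_CRIT = 2.775e11   # critical density today in (Msun/h) / (Mpc/h)^3
OMEGA_M = 0.307       # CSiBORG matter density parameter


def constraint_scale(mass, ncells=70):
    """Eq. (8): grid spacing needed so that a Lagrangian patch of total `mass`
    (in Msun/h) spans `ncells` cells of mean matter density."""
    return (mass / (ncells * OMEGA_M * RHO_CRIT)) ** (1.0 / 3.0)


for m in (1e15, 1e14, 1e13):
    print(f"{m:.0e} Msun/h -> {constraint_scale(m):.1f} Mpc/h")  # 5.5, 2.6, 1.2
```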
While high-mass haloes are consistently reconstructed and have counterparts of similar mass and peculiar velocity across all IC realisations, the overlapping haloes originating from the same Lagrangian regions are similar in neither their spin nor their concentration. Despite the BORG reconstruction employed in this work not utilising a peculiar velocity catalogue as a constraint--relying solely on galaxy positions in the 2M++ catalogue--it is reassuring to find that the highest mass haloes indeed have aligned velocities, since to some extent galaxy positions are complementary to the peculiar velocity field information. In fact, BORG has been used in the past to reconstruct the local peculiar velocity field (Jasche and Lavaux, 2019).

Cadiou et al. (2021) demonstrated that the angular momentum of haloes can be accurately predicted directly from their Lagrangian patches, but that it exhibits chaotic behaviour under small changes to the patch boundary. Although the high-mass CSiBORG haloes are present in all IC realisations, their overlaps never reach unity and so their Lagrangian patches are not exactly aligned. Therefore, given the findings of Cadiou et al., the lack of constraint on spin is not surprising. The halo concentration depends on both the Lagrangian patch configuration and the subsequent accretion history (Rey et al., 2019). While we find that on average the halo concentration in CSiBORG is not constrained, we leave a detailed examination of specific observed clusters for future work, along with comparing the mass accretion histories of haloes that are initially strongly overlapping. This will help tease out the properties of haloes and their constituent particles that are responsible for setting their concentrations.

For certain haloes, we observe that those originating from consecutive CSiBORG simulations tend to have higher maximum overlaps. CSiBORG resimulates every \(24^{\rm th}\) step of the BORG chain, yet the auto-correlation length of the chain is \(\sim 100\). CSiBORG thus effectively over-samples the BORG posterior--even though it varies without correlation the unconstrained small-scale modes at each step--which could lead to an overestimation of confidence in the BORG constraints if relying only on a few consecutive CSiBORG samples. To mitigate this effect we have averaged across all remaining 100 IC realisations for each reference realisation, the majority of which are fully decorrelated from it. A fully satisfactory solution would be to resimulate only decorrelated IC realisations; however, it is challenging to derive a large number of these due to the high computational cost of the BORG inference.

### Comparison with the literature

Hutt et al. (2022) introduce an algorithm for assessing whether "twins" of a single halo can be identified in all IC realisations of a constrained simulation. This is done by selecting a reference simulation and cataloguing the positions of all haloes. A halo is then chosen from the reference simulation, and its size calculated as \(R_{200c}=[(3M_{200c})/(4\pi\cdot 200\cdot\rho_{\rm crit})]^{1/3}\). In the other simulations, haloes within a designated "search radius" of the reference halo are then identified and the one most similar in mass selected. If no haloes are found within the search radius, the reference halo is discarded as not present within the crossing simulation. This approach differs from ours in several important ways: 1.
we match haloes directly in the ICs, which are constrained by BORG, instead of relying on the forward-modelled realisations of the final snapshot, 2. we do not require the matched haloes to be within a fixed radius of one another, but instead calculate the overlap of their initial Lagrangian patches, 3. while the procedure of Hutt et al. is inherently binary, ours has a continuous interpretation in terms of probabilities. It is nevertheless instructive to compare our results. In Fig. 11 we show the fraction of matches identified via the approach of Hutt et al. that are also the maximum-overlap pairs for several choices of search radii in CSiBORG. Because of the stringent requirement of Hutt et al. that a halo must have a "twin" in _all_ IC realisations, there is a strong agreement between the two approaches. We calculate \(f_{\rm agreement}\), the fraction of matches identified by Hutt et al. that are also the maximum-overlap pairs as a function of a lower mass threshold. For the fiducial search radius used by Hutt et al. the agreement is \(\sim 90\) per cent regardless of the mass threshold. This does, however, come at the cost of only a small number of haloes being identified as matches by Hutt et al.: for example at \(10^{14}\ M_{\odot}/h\) this is only \(\sim 10\) per cent of the haloes. Our method has the advantage of providing useful information even in the regime where the matching across all realisations is partial or even weak. Another methodology with similarities to ours is that Figure 10: The mean alignment angle (_top_ panel) and ratio of magnitudes (_bottom_ panel) of reference haloes with the remaining IC realisations in CSiBORG, plotted as a function of the mean maximum overlap of the reference halo averaged over the IC realisations. The red line is the mean trend in a bin with \(1\sigma\) spread among points. Higher overlapping haloes tend to have their peculiar velocities more aligned in final snapshot. of Pfeifer et al. (2023), in which the goal is to identify the simulation most representative of the local Universe. They use a Wiener filter-based reconstruction that must be supplied with random small scale/unconstrained modes. This strategy can alternatively be conceptualised as an intensive fine-tuning process to pinpoint the optimal random seed on a very fine grid level, which can be then re-simulated (McAlpine et al., 2022). Pfeifer et al. find the realisation that best resembles the local Universe by minimizing the sum of \(p\)-values of simulated cluster-mass haloes being at the locations of observed clusters. However, while they find the most representative constrained simulations, their approach does not provide any information about the consistency with which the matched simulated haloes are reconstructed. We believe that this is crucial information to determine the confidence with which we can assert the properties of the dark matter distribution of the local Universe, given the quality and quantity of data used to set the constraints. A similar technique of matching observed galaxies to haloes from constrained cosmological simulations has been applied by Zhang et al. (2022); Xu et al. (2023). Zhang et al. develop a "neighbourhood" subhalo abundance matching (SHAM), a SHAM model specifically tailored for constrained cosmological simulations which ranks haloes based on both their peak mass and closeness in position and velocity to the observed galaxy (Yang et al., 2018). 
This statistical approach enables them to connect haloes in their single constrained simulation with observed galaxies, thereby facilitating the study of e.g. the galaxy-to-halo size relation. Our method would allow folding in the uncertainties associated with the reconstruction. ## 6 Conclusion We have investigated the extent to which halos are robustly reconstructed across multiple cosmological simulations. While this question is particularly pertinent to simulation suites constrained (with uncertainty) to match the local Universe, other applications include the matching of haloes between DM-only simulations and their hydrodynamical counterparts, or simulations of varying cosmology such as \(\Lambda\)CDM vs modified gravity. We argue that for a halo to be consistently reconstructed its Lagrangian patch in the initial conditions must strongly overlap with a halo in most other realisations of the suite, a condition implying similarity in both mass and location. This is a stricter and more causal measure than similarity in the final snapshot, providing a clearer condition for two halos to be "the same". In the future, an even stricter measure of similarity may be introduced by measuring the haloes' initial overlap in the 6D phase space directly, instead of only the 3D position space. We apply the method to CSiBORG, a suite of constrained simulations with initial conditions sampled from the posterior of the BORG algorithm applied to the 2M++ galaxy number density field. We establish the significance of the results by comparing to the unconstrained Quijote suite. Based on the criteria mentioned above, we find cluster-mass haloes (\(M\gtrsim 10^{14}M_{\odot}/h\)) to be consistently reconstructed in CSiBORG in position, mass and peculiar velocity, with higher mass haloes being typically more strongly constrained. For haloes below this mass threshold, the constraints diminish, and haloes do not consistently originate from the same Lagrangian patches. Regarding secondary halo properties like spin and concentration, even the consistently matched, high-mass haloes display variations across the IC realisations. The absence of constraints on concentration is surprising, given its strong dependence on both mass and assembly history. This might imply that the assembly history of even the most massive haloes in CSiBORG remains unconstrained, however we defer a comprehensive study of nearby clusters to future work. In sum, our framework provides a step towards the goal of identifying the ICs that led to the observed Universe and the objects it contains. ## 7 Data availability The code underlying this article is available at [https://github.com/Richard-Sti/cislborgtools](https://github.com/Richard-Sti/cislborgtools) and other data will be made available on reasonable request to the authors. ## Acknowledgements We thank Jens Jasche, Guilhem Lavaux and Tariq Yasin for useful inputs and discussions. We also thank Jonathan Patterson for smoothly running the Glamdring Cluster hosted by the University of Oxford, where the data processing was performed. This work was done within the Aquila Consortium.2 Footnote 2: [https://aquila-consortium.org](https://aquila-consortium.org) RS acknowledges financial support from STFC Grant No. Figure 11: Comparison of our matching procedure to that of Hutt et al. (2022), who identify “twin” halos in the final snapshots of CSiBORG. Solid lines correspond to \(f_{\mathrm{agreement}}\), which is the fractional agreement between the Hutt et al. 
matches and the maximum-overlap pairs above mass \(M_{\mathrm{tot,min}}\). Dotted lines are \(f_{\mathrm{match}}\), the fraction of haloes above \(M_{\mathrm{tot,min}}\) identified as matches by Hutt et al.. The high \(f_{\mathrm{agreement}}\) indicates excellent agreement between the approaches regardless of the search radius (shown in the legend). However, only a small fraction of halos are identified as matches by Hutt et al.. Our method establishes a match probability applicable even to far more uncertain matches. ST/X508664/1, HD is supported by a Royal Society University Research Fellowship (grant no. 211046). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 693024). This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. For the purpose of open access, we have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
2309.09383
Waring's problem with restricted digits
Let $k \geq 2$ and $b \geq 3$ be integers, and suppose that $d_1, d_2 \in \{0,1,\dots, b - 1\}$ are distinct and coprime. Let $\mathcal{S}$ be the set of non-negative integers, all of whose digits in base $b$ are either $d_1$ or $d_2$. Then every sufficiently large integer is a sum of at most $b^{160 k^2}$ numbers of the form $x^k$, $x \in \mathcal{S}$.
Ben Green
2023-09-17T21:42:53Z
http://arxiv.org/abs/2309.09383v2
# Waring's problem with restricted digits ###### Abstract. Let \(k\geqslant 2\) and \(b\geqslant 3\) be integers, and suppose that \(d_{1},d_{2}\in\{0,1,\ldots,b-1\}\) are distinct and coprime. Let \(\mathcal{S}\) be the set of non-negative integers, all of whose digits in base \(b\) are either \(d_{1}\) or \(d_{2}\). Then every sufficiently large integer is a sum of at most \(b^{160k^{2}}\) numbers of the form \(x^{k}\), \(x\in\mathcal{S}\). The author gratefully acknowledges the support of the Simons Foundation (Simons Investigator grant 376201). One may ask whether a similar result holds if one passes to a subset \(\{x^{k}:x\in\mathcal{S}\}\) of the full set of \(k\)th powers. This has been established in various cases, for instance when \(\mathcal{S}\) is the set of primes (the so-called Waring-Goldbach problem [15]), the set of smooth numbers with suitable parameters [9], the set of integers such that the sum of digits in base \(b\) lies in some fixed residue class modulo \(m\)[18], or a random set with \(\mathbf{P}(s\in\mathcal{S})=s^{c-1}\) for some \(c>1\)[20]. Our main result in this paper is that a statement of this type holds when \(\mathcal{S}\) is the set of integers whose base \(b\) expansion contains just two different (fixed) digits. **Theorem 1.1**.: _Let \(k\geqslant 2\) and \(b\geqslant 3\) be integers, and suppose that \(d_{1},d_{2}\in\{0,1,\ldots,b-1\}\) are distinct and coprime. Let \(\mathcal{S}\) be the set of non-negative integers, all of whose digits in base \(b\) are either \(d_{1}\) or \(d_{2}\). Then every sufficiently large integer is a sum of at most \(b^{160k^{2}}\) numbers of the form \(x^{k}\), \(x\in\mathcal{S}\)._ _Remarks._ While the basic form of the bound is the best the method gives, the constant \(160\) could certainly be reduced, especially for large values of \(b\); I have not tried to optimise it. The restriction to \(b\geqslant 3\) is helpful at certain points in the argument. Of course, the case \(b=2\) (in which case we must have \(\{d_{1},d_{2}\}=\{0,1\}\)) corresponds to the classical Waring problem, for which much better bounds are known. Although Theorem 1.1 seems to be new, one should certainly mention in this context the interesting work of Biggs [1, 2] and Biggs and Brandes [3], who showed that, for some \(s\), every sufficiently large integer is a sum of at most \(s\) numbers of the form \(x^{k}\), \(x\in\mathcal{S}\), and one further \(k\)th power. (In their work \(b\) is taken to be prime and larger than \(k\).) This paper is completely independent of the work of Biggs and Brandes, but it seems plausible that by combining their methods with ours one could significantly reduce the quantity \(b^{160k^{2}}\) in Theorem 1.1, at least for prime \(b\). Finally, we note that sets of integers whose digits in some base are restricted to some set are often called _ellipsephic_, a term coined by Mauduit, as explained in [1, 2]. _Notation._ If \(x\in\mathbf{R}\), we write \(\|x\|\) for the distance from \(x\) to the nearest integer. The only other time we use the double vertical line symbol is for certain box norms \(\|\cdot\|_{\square}\) which occur in Appendix A. There seems little danger of confusion so we do not resort to more cumbersome notations such as \(\|x\|_{\mathbf{R}/\mathbf{Z}}\). Write \(e(x)=e^{2\pi ix}\). If \(X\) is a finite set and \(f:X\to\mathbf{C}\) is a function then we write \(\mathbf{E}_{x\in X}f(x)=\frac{1}{|X|}\sum_{x\in X}f(x)\). All intervals will be discrete. 
Thus \([A,B]\) denotes the set of all _integers_\(x\) with \(A\leqslant x\leqslant B\) (and here \(A,B\) need not be integers). We will frequently encounter the discrete interval \([0,m)\), for positive integer \(m\), which is the same thing as the set \(\{0,1,\ldots,m-1\}\). Note carefully that at some points in Section 6, the notation \([m_{1},m_{2}]\) will also refer to the lowest common multiple of two integers \(m_{1},m_{2}\). Throughout the paper we will fix a base \(b\geqslant 3\), an exponent \(k\geqslant 2\) and distinct coprime digits \(d_{1},d_{2}\in[0,b)\). Denote by \(\mathcal{S}\) the set of all non-negative integers \(x\), all of whose digits in base \(b\) are \(d_{1}\) or \(d_{2}\). We include \(0\) in \(\mathcal{S}\). Write \(\mathcal{S}^{k}:=\{x^{k}:x\in\mathcal{S}\}\). Note that \(\mathcal{S}^{k}\) might more usually refer to the \(k\)-fold product set of \(\mathcal{S}\) with itself, but we have no use for that concept here. We will reserve the letter \(n\) for a variable natural number, which we often assume is sufficiently large, and which it is usually convenient to take to be divisible by \(k\). We always write \(N=b^{n}\), so \([0,N)\) is precisely the set of non-negative integers with at most \(n\) digits in base \(b\). If \(n\) is a natural number, we define the map \(L_{b}:\{0,1\}^{[0,n)}\to\mathbf{Z}\) by \[L_{b}(\mathbf{x}):=\sum_{i\in[0,n)}x_{i}b^{i}, \tag{1.1}\] where \(\mathbf{x}=(x_{i})_{i\in[0,n)}\). Although this map depends on \(n\), we will not indicate this explicitly, since the underlying \(n\) will be clear from context. Then \[\frac{d_{1}(b^{n}-1)}{b-1}+(d_{2}-d_{1})L_{b}(\mathbf{x}) \tag{1.2}\] is the number whose base \(b\) expansion has \(b^{i}\)-digit equal to \(d_{1}\) if \(x_{i}=0\), and \(d_{2}\) if \(x_{i}=1\). _Acknowledgements._ I thank Zach Hunter and Sarah Peluse for comments on the first version of the manuscript. ## 2. An outline of the argument Unsurprisingly, given its pre-eminence in work on Waring's problem, the basic mode of attack is the Hardy-Littlewood circle method. Let \(n\in\mathbf{N}\), set \(N=b^{n}\) and consider the subset of \(\mathcal{S}\) consisting of integers with precisely \(n\) digits. This is a set of size \(2^{n}\). Denote by \(\mu_{n}\) the normalised probability measure on the set of \(k\)th powers of the elements of this set. That is, \(\mu_{n}(m)=2^{-n}\) if \(m=(\sum_{i\in[0,n)}x_{i}b^{i})^{k}\) with all \(x_{i}\in\{d_{1},d_{2}\}\) for all \(i\), and \(\mu_{n}(m)=0\) otherwise. The Fourier transform \(\widehat{\mu_{n}}(\theta):=\sum_{m\in\mathbf{Z}}\mu_{n}(m)e(m\theta)\) is then a normalised version of what is usually called the exponential sum or Weyl-type sum, and as expected for an application of the circle method, it plays a central role in our paper. Our main technical result is the following, which might be called a log-free Weyl-type estimate for \(k\)th powers with restricted digits. **Proposition 2.1**.: _Suppose that \(k\geqslant 2\) and \(b\geqslant 3\). Set \(B:=b^{6k^{2}}\). Suppose that \(\delta\in(0,1)\) and that \(k\mid n\). Suppose that \(|\widehat{\mu_{n}}(\theta)|\geqslant\delta\), and that \(N\geqslant(2/\delta)^{B}\), where \(N:=b^{n}\). Then there is a positive integer \(q\leqslant(2/\delta)^{B}\) such that \(\|\theta q\|\leqslant(2/\delta)^{B}N^{-k}\)._ _Remarks._ If \(\mu_{n}\) is replaced by the normalised counting measure on \(k\)th powers less than \(N\) without any digital restriction, a similar estimate is true and is very closely related to Weyl's inequality. 
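As a purely illustrative aside (not part of the argument), for very small parameters the normalised exponential sum \(\widehat{\mu_{n}}\) can be evaluated by brute force; the following sketch simply enumerates the \(2^{n}\) digit strings and works to floating-point accuracy only:

```python
import cmath
import itertools


def mu_hat(theta, n, k, b, d1, d2):
    """Brute-force evaluation of the normalised exponential sum over k-th
    powers of n-digit base-b integers whose digits all lie in {d1, d2}."""
    total = 0j
    for digits in itertools.product((d1, d2), repeat=n):
        x = sum(d * b**i for i, d in enumerate(digits))
        total += cmath.exp(2j * cmath.pi * theta * x**k)
    return total / 2**n


# Example with b = 3, digits {1, 2}, k = 2, n = 10 (only 2^10 terms).
print(abs(mu_hat(0.1234, n=10, k=2, b=3, d1=1, d2=2)))
```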
The most standard proof of Weyl's inequality such as [19, Lemma 2.4], however, results in some extra factors of \(N^{o(1)}\) (from the divisor bound). "Log-free" versions may be obtained by combining the standard result with major arc estimates as discussed, for example, in [22], or by modifying the standard proof of Weyl's inequality to focus on this goal rather than on the quality of the exponents, as done in [12, Section 4]. Our treatment here is most closely related to this latter approach. Although we will only give a detailed proof of Proposition 2.1 in the case that \(\mu_{n}\) is the measure on \(k\)th powers of integers with just two fixed digits, similar arguments ought to give a more general result in which the digits are restricted to an arbitrary subset of \(\{0,1,\ldots,b-1\}\) of size at least \(2\). This would be of interest if one wanted to obtain an asymptotic formula in Theorem 1.1, with more general digital restrictions of this type. Experts will consider it a standard observation that Proposition 2.1 implies that \(\mathcal{S}^{k}\) is an asymptotic basis of some finite order \(s\). Roughly, this is because one can use it to obtain a moment estimate \(\sum_{x}\mu_{n}^{(t)}(x)^{2}=\int_{0}^{1}|\widehat{\mu_{n}}(\theta)|^{2t}d \theta\ll N^{-k}\) for a suitably large \(t\). Cauchy-Schwarz then implies that the \(t\)-fold sumset \(t\mathcal{S}^{k}\) has positive density in an interval of length \(\gg N^{k}\), whereupon methods of additive combinatorics can be used to conclude. However, by itself this kind of argument leads to \(s\) having a double-exponential dependence on \(k\). The reason is that Proposition 2.1 is not very effective in the regime \(\delta\approx 1\). It is possible that the proof could be adapted so as to be more efficient in this range, but this seems nontrivial. Instead we provide, in Section 4, a separate argument which is at first sight crude, but turns out to be more efficient for this task. This gives the following result. **Proposition 2.2**.: _Let \(n\in\mathbf{N}\) and let \(N=b^{n}\). Suppose that \(n\geqslant k\). Then the measure of all \(\theta\in\mathbf{R}/\mathbf{Z}\) such that \(|\widehat{\mu_{n}}(\theta)|\geqslant 1-\frac{1}{4}b^{-3k^{2}}\) is bounded above by \(2b^{k^{2}}N^{-k}\)._ In fact, we obtain a characterisation of these values of \(\theta\), much as in Proposition 2.1: see Section 4 for the detailed statement and proof. Details of how to estimate the moment \(\int_{0}^{1}|\widehat{\mu_{n}}(\theta)|^{2t}d\theta\) using Propositions 2.1 and 2.2, and of the subsequent additive combinatorics arguments leading to the proof of Theorem 1.1, may be found in Section 3. This leaves the task of proving Proposition 2.1, which forms the bulk of the paper, and is where the less standard ideas are required. For the purposes of this overview, we mostly consider the case \(k=2\), and for definiteness set \(\{d_{1},d_{2}\}=\{0,1\}\). _Decoupling._ The first step is a kind of decoupling. Recall the definitions of the maps \(L_{b}\) (see (1.1)). The idea is to split the variables \(\mathbf{x}=(x_{i})_{i\in[0,n)}\) into the even variables \(\mathbf{y}=(x_{2i})_{i\in[0,n/2)}\) and the odd variables \(\mathbf{z}=(x_{2i+1})_{i\in[0,n/2)}\), assuming that \(n\) is even for this discussion. We have \(L_{b}(\mathbf{x})=L_{b^{2}}(\mathbf{y})+bL_{b^{2}}(\mathbf{z})\). Here, there is a slight abuse of notation in that \(L_{b}\) is defined on vectors of length \(n\), whilst \(L_{b^{2}}\) is defined on vectors of length \(n/2\). 
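A short numerical sanity check of this splitting identity (purely illustrative):

```python
import random

b, n = 3, 12                                   # n even, so the split is exact
x = [random.randint(0, 1) for _ in range(n)]
y, z = x[0::2], x[1::2]                        # even- and odd-indexed coordinates

L = lambda base, v: sum(vi * base**i for i, vi in enumerate(v))
assert L(b, x) == L(b * b, y) + b * L(b * b, z)   # L_b(x) = L_{b^2}(y) + b L_{b^2}(z)
```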
We then have \[\widehat{\mu_{n}}(\theta) =\mathbf{E}_{\mathbf{x}\in\{0,1\}^{[0,n)}}e(\theta L_{b}(\mathbf{x})^{2})\] \[=\mathbf{E}_{\mathbf{y},\mathbf{z}\in\{0,1\}^{[0,n/2)}}e\big(\theta(L_{b^{2}}(\mathbf{y})+bL_{b^{2}}(\mathbf{z}))^{2}\big)\] \[=\mathbf{E}_{\mathbf{y},\mathbf{z}\in\{0,1\}^{[0,n/2)}}\Psi(\mathbf{y})\Psi^{\prime}(\mathbf{z})e\big(2b\theta L_{b^{2}}(\mathbf{y})L_{b^{2}}(\mathbf{z})\big),\] where \(\Psi(\mathbf{y})=e(\theta L_{b^{2}}(\mathbf{y})^{2})\) and \(\Psi^{\prime}(\mathbf{z})=e(b^{2}\theta L_{b^{2}}(\mathbf{z})^{2})\), but the precise form of these functions is not important in what follows. By two applications of the Cauchy-Schwarz inequality (see Appendix A for a general statement), we may eliminate the \(\Psi\) and \(\Psi^{\prime}\) terms, each of which depends on just one of \(\mathbf{y},\mathbf{z}\). Assuming, as in the statement of Proposition 2.1, that \(|\widehat{\mu_{n}}(\theta)|\geqslant\delta\), we obtain \[\delta^{4}\leqslant\mathbf{E}_{\mathbf{y},\mathbf{z},\mathbf{y}^{\prime},\mathbf{z}^{\prime}\in\{0,1\}^{[0,n/2)}}e\big(2b\theta\big(L_{b^{2}}(\mathbf{y})L_{b^{2}}(\mathbf{z})-L_{b^{2}}(\mathbf{y}^{\prime})L_{b^{2}}(\mathbf{z})-L_{b^{2}}(\mathbf{y})L_{b^{2}}(\mathbf{z}^{\prime})+L_{b^{2}}(\mathbf{y}^{\prime})L_{b^{2}}(\mathbf{z}^{\prime})\big)\big).\] We remove the expectation over the dashed variables, that is to say there is some choice of \(\mathbf{y}^{\prime},\mathbf{z}^{\prime}\) for which the remaining average over \(\mathbf{y},\mathbf{z}\) is at least \(\delta^{4}\). For simplicity of discussion, suppose that \(\mathbf{y}^{\prime}=\mathbf{z}^{\prime}=0\) is such a choice; then \[\delta^{4}\leqslant\mathbf{E}_{\mathbf{y},\mathbf{z}\in\{0,1\}^{[0,n/2)}}e\big(2b\theta L_{b^{2}}(\mathbf{y})L_{b^{2}}(\mathbf{z})\big). \tag{2.1}\] At the expense of replacing \(\delta\) by \(\delta^{4}\), we have replaced the quadratic form \(L_{b}(\mathbf{x})^{2}\) by a product of two linear forms in disjoint variables, which is a far more flexible object to work with. I remark that I obtained this idea from the proof of [8, Theorem 4.3], which uses a very similar method. Now, for fixed \(\mathbf{z}\) the average over \(\mathbf{y}\) in (2.1) can be estimated fairly explicitly. The conclusion is that for \(\gg\delta^{4}2^{n/2}\) values of \(\mathbf{z}\), \(2b\theta L_{b^{2}}(\mathbf{z})\) has \(\ll\log(1/\delta)\) non-zero base \(b\) digits, among the first \(n\) digits after the radix point. Here, we use the _centred_ base \(b\) expansion in which digits lie in \((-\frac{b}{2},\frac{b}{2}]\), discussed in more detail in Section 5. _Additive expansion._ The output of the decoupling step is an assertion to the effect that, for \(m\) in a somewhat large set \(\mathscr{M}\subset\{1,\ldots,N\}\), \(\theta m\) has very few non-zero digits in base \(b\) among the first \(n\) after the radix point. The set \(\mathscr{M}\) is the set of \(2bL_{b^{2}}(\mathbf{z})\) for \(\gg\delta^{4}2^{n/2}\) values of \(\mathbf{z}\in\{0,1\}^{[0,n/2)}\), and so has size \(\sim N^{(\log 2)/(2\log b)}\) which, though 'somewhat large', is unfortunately appreciably smaller than \(N\). The next step of the argument is to show that the sum of a few copies of \(\mathscr{M}\) is a considerably larger set, of size close to \(N\). In fact, in the case \(k=2\) under discussion, \(b^{2}-1\) copies will do. This follows straightforwardly from the following result from the literature. **Theorem 2.3**.: _Let \(r,n\in\mathbf{N}\)._
Suppose that \(A_{1},\ldots,A_{r}\subseteq\{0,1\}^{n}\) are sets with densities \(\alpha_{1},\ldots,\alpha_{r}\). Then \(A_{1}+\cdots+A_{r}\) has density at least \((\alpha_{1}\cdots\alpha_{r})^{\gamma}\) in \(\{0,1,\ldots,r\}^{n}\), where \(\gamma:=r^{-1}\log_{2}(r+1)\)._ This theorem, which came from the study of Cantor-type sets in the 1970s and 1980s, seems not to be well-known in modern-day additive combinatorics. The result has a somewhat complicated history, with contributions by no fewer than 10 authors, and I am unsure exactly how to attribute it. For comments and references pertinent to this, see Appendix B. We remark that for \(k>2\) a considerably more elaborate argument is required at this point, and this occupies the bulk of Section 6. The conclusion is that \(\theta m\) has \(\ll\log(1/\delta)\) nonzero base \(b\) digits among the first \(n\) after the radi point, for all \(m\) in a set \(\mathscr{M}^{\prime}\subset\{1,\ldots,N\}\) of size \(\gg\delta^{C}N\). _From digits to diophantine._ In the final step of the argument we extract the required diophantine conclusion (that is, the conclusion of Proposition 2.1) from the digital condition just obtained. The main ingredient is a result on the additive structure of sets with few nonzero digits, which may potentially have other uses. Recall that if \(A\) is a set of integers then \(E(A)\), the additive energy of \(A\), is the number of quadruples \((a_{1},a_{2},a_{3},a_{4})\in A\times A\times A\times A\) with \(a_{1}+a_{2}=a_{3}+a_{4}\). **Proposition 2.4**.: _Let \(r\in\mathbf{Z}_{\geqslant 0}\). Suppose that \(A\subset\mathbf{Z}\) is a finite set, all of whose elements have at most \(r\) nonzero digits in their centred base \(b\) expansion. Then \(E(A)\leqslant(2b)^{4r}|A|^{2}\)._ The proof of this involves passing to a quadripartite formulation (that is, with four potentially different sets \(A_{1},A_{2},A_{3},A_{4}\), and also allowing for the possibility of a 'carry' in the additive quadruples) and an inductive argument. The final deduction of Proposition 2.1 uses this and some fibring arguments. This, and the proof of Proposition 2.4, may be found in Section 7. ## 3. Reduction to a log-free Weyl-type estimate In this section we show that our main result, Theorem 1.1 follows from the log-free Weyl-type estimate, Proposition 2.1. We begin by stating two results about growth under set addition. The first is a theorem of Nathanson and Sarkozy. **Theorem 3.1**.: _Let \(X\in\mathbf{N}\) and \(r\in\mathbf{N}\). Suppose that \(A\subset\{1,\ldots,X\}\) is a set of size \(\geqslant 1+X/r\). Then there is an arithmetic progression of common difference \(d\), \(1\leqslant d\leqslant r-1\) and length at least \(\lfloor X/2r^{2}\rfloor\) contained in \(4rA\)._ Proof.: In [17, Theorem 1], take \(h=2r\), \(z=\lfloor X/2r^{2}\rfloor\); the result is then easily verified. The second result we will need is a simple but slightly fiddly lemma on repeated addition of discrete intervals. **Lemma 3.2**.: _Let \(X\geqslant 1\) be real and suppose that \(I\subset[0,X)\) is a discrete interval of length \(L\geqslant 2\). Set \(\eta:=L/X\). Let \(K\geqslant 4\) be a parameter. Then \(\bigcup_{j\leqslant\lceil 2K/\eta^{2}\rceil}jI\) contains the discrete interval \([\frac{4}{\eta}X,\frac{K}{\eta}X]\)._ Proof.: Write \(I=[x_{0},x_{0}+L-1]\), where \(x_{0}\in\mathbf{Z}_{\geqslant 0}\). Then \(jI=[jx_{0},jx_{0}+j(L-1)]\). 
Note that if \(j\geqslant x_{0}/(L-1)\), we have \(jx_{0}+j(L-1)\geqslant(j+1)x_{0}\), and so the interval \((j+1)I\) overlaps the interval \(jI\). Therefore if we set \(j_{0}:=\lceil x_{0}/(L-1)\rceil\), for any \(j_{1}\geqslant j_{0}\) the union \(I^{*}:=\bigcup_{j_{0}\leqslant j\leqslant j_{1}}jI\) is a discrete interval. Set \(j_{1}:=\lceil 2K/\eta^{2}\rceil\). We have \[\min I^{*}=j_{0}x_{0}\leqslant\big{\lceil}\frac{X}{L-1}\big{\rceil}X \leqslant\big{\lceil}\frac{2X}{L}\big{\rceil}X\leqslant\frac{4X^{2}}{L}= \frac{4}{\eta}X,\] and \[\max I^{*}\geqslant j_{1}(L-1)\geqslant\frac{2K}{\eta^{2}}\frac{L}{2}=\frac{ K}{\eta}X.\] This concludes the proof. Proof of Theorem 1.1, assuming Proposition 2.1.: Let \(n\) be some large multiple of \(k\) and consider the measure \(\mu_{n}\) as described in Section 2. Thus \(\mu_{n}\) is supported on \(\mathcal{S}^{k}\cap[0,N^{k})\), where \(N=b^{n}\). Set \[t:=8b^{9k^{2}}, \tag{3.1}\] and write \(\mu_{n}^{(t)}\) for the \(t\)-fold convolution power of \(\mu_{n}\), that is to say \(\mu_{n}^{(t)}(x)=\sum_{x_{1}+\cdots+x_{t}=x}\mu_{n}(x_{1})\cdots\mu_{n}(x_{t})\). Then \(\widehat{\mu_{n}^{(t)}}=(\widehat{\mu_{n}})^{t}\) and so by Parseval's identity and the layer-cake representation \[\sum_{x} \mu_{n}^{(t)}(x)^{2}=\int_{0}^{1}|\widehat{\mu_{n}}(\theta)|^{2t}d\theta\] \[=2t\int_{0}^{1}\delta^{2t-1}\operatorname{meas}\{\theta:| \widehat{\mu_{n}}(\theta)|\geqslant\delta\}d\delta=2t(I_{1}+I_{2}+I_{3}), \tag{3.2}\] where \(I_{1},I_{2},I_{3}\) are the integrals over ranges \([0,2N^{-1/B}]\), \([2N^{-1/B},1-c]\) and \([1-c,1]\) respectively, with \(c:=\frac{1}{4}b^{-3k^{2}}\), \(B=b^{6k^{2}}\) (as in Proposition 2.1) and meas is the Lebesgue measure on the circle \(\mathbf{R}/\mathbf{Z}\). We have, for \(N\) large, \[I_{1}\leqslant(2N^{-1/B})^{2t-1}<N^{-k}.\] To bound \(I_{2}\) we use Proposition 2.1, which tells us that the set \(\{\theta\in\mathbf{R}/\mathbf{Z}:|\widehat{\mu}_{n}(\theta)|\geqslant\delta\}\) is contained in the set \(\{\theta\in\mathbf{R}/\mathbf{Z}:\|\theta q\|\leqslant(2/\delta)^{B}N^{-k}\) for some positive \(q\leqslant(2/\delta)^{B}\}\), and so \(\operatorname{meas}\{\theta:|\widehat{\mu_{n}}(\theta)|\geqslant\delta\} \leqslant 2(2/\delta)^{2B}N^{-k}\). Since \(2t-1-2B\geqslant t\), we therefore have \[I_{2}\leqslant 2N^{-k}\int_{0}^{1-c}\delta^{2t-1}(2/\delta)^{2B}d\delta \leqslant 2N^{-k}(1-c)^{t}2^{2B}<N^{-k}.\] For the last inequality we used the fact that \(t=2B/c\) and so \((1-c)^{t}\leqslant e^{-2B}\). Finally, to bound \(I_{3}\) we use Proposition 2.2, which immediately implies that \[I_{3}\leqslant 2b^{k^{2}}N^{-k}.\] Substituting these bounds for \(I_{1},I_{2}\) and \(I_{3}\) into (3.2), we obtain that for \(N\) sufficiently large \(\sum_{x}\mu_{n}^{(t)}(x)^{2}\leqslant 4tb^{k^{2}}N^{-k}=32b^{10k^{2}}N^{-k}\). On the other hand, it follows by Cauchy-Schwarz and the fact that \(\sum_{x}\mu_{n}^{(t)}(x)=1\) that \(1\leqslant|\operatorname{Supp}(\mu_{n}^{(t)})|\sum_{x}\mu_{n}^{(t)}(x)^{2}\), and so \(|\operatorname{Supp}(\mu_{n}^{(t)})|\geqslant 2^{-5}b^{-10k^{2}}N^{k}\). Thus, since \(\mu_{n}^{(t)}\) is supported on the \(t\)-fold sumset of \(\mathcal{S}^{k}\cap[0,N^{k})\), we see that \(|t\mathcal{S}^{k}\cap[0,tN^{k})|\geqslant 2^{-5}b^{-10k^{2}}N^{k}\). Applying Theorem 3.1 with \(X=tN^{k}\) and \(r=2^{8}b^{19k^{2}}\), we see that \(4rt\mathcal{S}^{k}\cap[0,4rtN^{k})\) contains an arithmetic progression \(P\) of common difference \(<r\) and length \(|P|\geqslant L:=2^{-15}b^{-29k^{2}}N^{k}\). 
Since \(d_{1}^{k}\) and \(d_{2}^{k}\) are coprime, every number greater than or equal to \((d_{1}^{k}-1)(d_{2}^{k}-1)<b^{2k}<r\) is a non-negative integer combination of these numbers. Therefore it is certainly the case that \(2r\mathcal{S}^{k}\) contains \([r,2r)\). Since the common difference of \(P\) is less than \(r\), \(P+[r,2r)\) contains a discrete interval \(I\) of length \(\geqslant L\). This interval is therefore contained in \((4rt+2r)\mathcal{S}^{k}\subset 8rt\mathcal{S}^{k}\). Note that by construction \(I\subset[0,8rtN^{k})\). Apply Lemma 3.2, taking \(X=X(n)=8rtN^{k}\), \(\eta=\frac{L}{X}=2^{-29}b^{-57k^{2}}\), and \(K=4b^{k^{2}}\). Since \(\mathcal{S}\) contains \(0\), we see that \(|2K/\eta^{2}|8rt\mathcal{S}^{k}=2^{75}b^{142k^{2}}\mathcal{S}^{k}\) contains the interval \(I_{n}:=[\frac{4}{\eta}X(n),\frac{K}{\eta}X(n)]\). Remember that here \(n\) is any sufficiently large multiple of \(k\). By the choice of \(K\), \(\frac{K}{\eta}X(n)=\frac{4}{\eta}X(n+k)\), and so these intervals overlap. Thus \(\bigcup_{n}I_{n}\) consists of all sufficiently large integers, and hence so does \(2^{75}b^{142k^{2}}\mathcal{S}^{k}\). Finally, one may note that \(2^{75}<b^{12k^{2}}\) for \(b\geqslant 3\) and \(k\geqslant 2\). ## 4. Very large values of the Fourier transform In this section we establish Proposition 2.2. We will in fact establish the following more precise result. **Proposition 4.1**.: _Let \(n\in\mathbf{N}\) and let \(N=b^{n}\). Suppose that \(n\geqslant k\). Let \(\theta\in\mathbf{R}/\mathbf{Z}\). Suppose that \(|\widehat{\mu_{n}}(\theta)|\geqslant 1-\frac{1}{4}b^{-3k^{2}}\). Then there is a positive integer \(q\leqslant(2k!)b^{\frac{1}{2}k(k-1)+1}\) such that \(\|\theta q\|\leqslant(2k!)^{-1}b^{\frac{1}{2}k(k+1)-1}N^{-k}\)._ Proposition 2.2 is a consequence of this and the observation that the measure of \(\theta\in\mathbf{R}/\mathbf{Z}\) such that \(\|\theta q\|\leqslant\varepsilon\) for some positive integer \(q\leqslant q_{0}\) is bounded above by \(2\varepsilon q_{0}\). Proof of Proposition 4.1.: Set \(Q\,:=\,2k!b^{k(k-1)/2+1}\). Note that, since \(2k!\leqslant 2^{k^{2}/2}\leqslant b^{k^{2}/2}\) for all \(b,k\geqslant 2\), we have \(Q\leqslant b^{k^{2}}\). By Dirichlet's theorem, there is some positive integer \(q\leqslant Q\) and an \(a\), coprime to \(q\), such that \(|\theta-a/q|\leqslant 1/qQ\). Set \(\eta:=\theta-a/q\), thus \(|\eta|\leqslant 1/qQ\). There is a unique integer \(j\) such that \[\frac{1}{2bq}<|(d_{2}-d_{1})k!b^{j}\eta|\leqslant\frac{1}{2q}. \tag{4.1}\] Now if we had \(j<k(k-1)/2\) then \[|(d_{2}-d_{1})k!b^{j}\eta|\leqslant(b-1)k!b^{\frac{1}{2}k(k-1)-1}|\eta|<k!b^{ \frac{1}{2}k(k-1)}/qQ=1/2bq,\] contrary to (4.1). If \(j>kn-k(k+1)/2\) then \[\|\theta q\|=|\eta q|\leqslant(2k!)^{-1}b^{\frac{1}{2}k(k+1)-1}N^{-k},\] in which case the conclusion of the proposition is satisfied. Suppose, then, that \(k(k-1)/2\leqslant j\leqslant kn-k(k+1)/2\). Then there is a set \(I\subset[0,n)\), \(|I|=k\), such that \(j=\sum_{i\in I}i\). As usual, write \(\mathbf{x}=(x_{i})_{i\in[0,n)}\). It is convenient to write \(\mathbf{x}_{I}\) for the variables \(x_{i}\), \(i\in I\), and \(\mathbf{x}_{[0,n)\setminus I}\) for the other variables. 
For any fixed choice of \(\mathbf{x}_{[0,n)\setminus I}\) we can write, setting \(u:=\frac{d_{1}(b^{n}-1)}{b-1}\),
\[\big{(}u+(d_{2}-d_{1})L_{b}(\mathbf{x})\big{)}^{k}=\big{(}u+(d_{2}-d_{1})\sum_{i\in[0,n)}x_{i}b^{i}\big{)}^{k}=(d_{2}-d_{1})k!b^{j}\prod_{i\in I}x_{i}+\sum_{i\in I}\psi_{i}(\mathbf{x}_{[0,n)\setminus I};\mathbf{x}_{I}),\]
for some functions \(\psi_{i}\), where \(\psi_{i}\) does not depend on \(x_{i}\). It follows that
\[|\widehat{\mu}_{n}(\theta)|=\big{|}\mathbf{E}_{\mathbf{x}\in\{0,1\}^{[0,n)}}e\big{(}\theta\big{(}u+(d_{2}-d_{1})L_{b}(\mathbf{x})\big{)}^{k}\big{)}\big{|}\leqslant\]
\[\mathbf{E}_{\mathbf{x}_{[0,n)\setminus I}\in\{0,1\}^{[0,n)\setminus I}}\big{|}\mathbf{E}_{\mathbf{x}_{I}\in\{0,1\}^{I}}\prod_{i\in I}\Psi_{i}(\mathbf{x}_{[0,n)\setminus I};\mathbf{x}_{I})e\big{(}(d_{2}-d_{1})k!b^{j}\theta\prod_{i\in I}x_{i}\big{)}\big{|},\]
where \(\Psi_{i}:=e(\psi_{i})\) is a \(1\)-bounded function, not depending on \(x_{i}\). By Proposition A.2 (and the accompanying definition of the box norm, Definition A.1) it follows that
\[|\widehat{\mu_{n}}(\theta)|^{2^{k}}\leqslant\mathbf{E}_{\mathbf{x}_{I},\mathbf{x}^{\prime}_{I}\in\{0,1\}^{I}}e\big{(}(d_{2}-d_{1})k!b^{j}\theta\prod_{i\in I}(x_{i}-x^{\prime}_{i})\big{)}.\]
(The right-hand side here is automatically a non-negative real number). On the right, we now bound all the terms trivially (by \(1\)) except for two: the term with \(x_{i}=x^{\prime}_{i}=0\) for all \(i\in I\), and the term with \(x_{i}=1\) and \(x_{i}^{\prime}=0\) for all \(i\in I\). This gives, using the inequality \(2-|1+e(t)|=4\sin^{2}\frac{\pi\|t\|}{2}\geqslant 4\|t\|^{2}\),
\[|\widehat{\mu_{n}}(\theta)|^{2^{k}}\leqslant 1-\frac{2}{4^{k}}+\frac{1}{4^{k}}|1+e((d_{2}-d_{1})k!b^{j}\theta)|\]
\[\leqslant 1-2^{2-2k}\|(d_{2}-d_{1})k!b^{j}\theta\|^{2}. \tag{4.2}\]
There are now two slightly different cases, according to whether or not \(q\mid(d_{2}-d_{1})k!b^{j}a\). If this is the case, then by (4.1)
\[\|(d_{2}-d_{1})k!b^{j}\theta\|=|(d_{2}-d_{1})k!b^{j}\eta|\geqslant 1/2bq.\]
If, on the other hand, \(q\nmid(d_{2}-d_{1})k!b^{j}a\) then by (4.1) we have
\[\|(d_{2}-d_{1})k!b^{j}\theta\|\geqslant\frac{1}{q}-|(d_{2}-d_{1})k!b^{j}\eta|\geqslant\frac{1}{2q}.\]
In both cases, \(\|(d_{2}-d_{1})k!b^{j}\theta\|\geqslant 1/2bQ=(4k!)^{-1}b^{-2-\frac{1}{2}k(k-1)}\). It follows from (4.2) that
\[|\widehat{\mu_{n}}(\theta)|\leqslant\big{(}1-2^{2-2k}(4k!)^{-2}b^{-4-k(k-1)}\big{)}^{1/2^{k}}<1-\frac{1}{4}b^{-3k^{2}},\]
that is to say the hypothesis of the proposition is not satisfied. Here, the second inequality follows from the Bernoulli inequality \((1-x)^{1/2^{k}}\leqslant 1-x/2^{k}\) and the crude bounds \(k!\leqslant b^{k^{2}/4}\), \(2^{3k}\leqslant b^{2k}\), both valid for \(b\geqslant 3\) and \(k\geqslant 2\).

## 5. Decoupling

We turn now to the somewhat lengthy task of proving Proposition 2.1. In this section we give the details of what we called the decoupling argument in the outline of Section 2. The main result of the section is Proposition 5.2 below. We begin with a definition.

**Definition 5.1**.: Let \(\alpha\in\mathbf{R}/\mathbf{Z}\). Then we define
\[\tilde{\mathrm{w}}_{n}(\alpha):=\sum_{i\in[0,n)}\|\alpha b^{i}\|^{2}. \tag{5.1}\]
The reason for the notation is that \(\tilde{\mathrm{w}}_{n}(\alpha)\) is closely related to the more natural quantity \(\mathrm{w}_{n}(\alpha)\), which is the number of non-zero digits among the first \(n\) digits after the radix point in the (centred) base \(b\) expansion of \(\alpha\). For a careful definition of this, see Section 7.
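(As a small illustration of Definition 5.1 — not needed for any of the proofs, and with ad hoc parameters of my own — one can compare \(\tilde{\mathrm{w}}_{n}\) for an \(\alpha\) having very few nonzero centred base \(b\) digits with a randomly chosen \(\alpha\): the contributions to the sum in (5.1) decay geometrically away from the nonzero digit positions, whereas a generic \(\alpha\) contributes roughly \(1/12\) per term.)

```python
from fractions import Fraction
import random

def w_tilde(alpha, n, b):
    # \tilde{w}_n(alpha) = sum_{i in [0,n)} ||alpha * b**i||^2, cf. (5.1)
    s = Fraction(0)
    for i in range(n):
        t = (Fraction(alpha) * b**i) % 1
        s += min(t, 1 - t) ** 2
    return float(s)

b, n = 5, 40
sparse = Fraction(1, b**3) + Fraction(2, b**17) - Fraction(1, b**29)  # three nonzero centred digits
print(w_tilde(sparse, n, b))                     # small (well below 1 here)
print(w_tilde(Fraction(random.random()), n, b))  # typically of order n/12
```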
However, \(\tilde{\mathrm{w}}_{n}\) has more convenient analytic properties.

Now we come to the main result of the section. As we said before, it is a little technical to state. However, it is rather less technical in the case \(k=2\), in which case the reader may wish to compare it with the outline in Section 2.

**Proposition 5.2**.: _Let \(n\in\mathbf{N}\) be divisible by \(k\) and set \(N:=b^{n}\). Suppose that \(\delta\in(0,1]\) and that \(|\widehat{\mu_{n}}(\theta)|\geqslant\delta\). Then there are \(t_{1},\ldots,t_{k-1}\in\mathbf{Z}\) with \(|t_{j}|\leqslant N\) for all \(j\) and a positive integer \(q_{0}\leqslant b^{k^{2}}\) such that for at least \(\frac{1}{2}\delta^{2^{k}}2^{(k-1)n/k}\) choices of \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(k-1)}\in\{0,1\}^{[0,n/k)}\) we have_
\[\tilde{\mathrm{w}}_{n}\bigl{(}\theta q_{0}\prod_{i=1}^{k-1}(L_{b^{k}}(\mathbf{x}^{(i)})+t_{i})\bigr{)}\leqslant 2^{k}b^{2k}\log(2/\delta).\]

Proof.: By (1.2) and the definition of the measure \(\mu_{n}\), we have
\[\widehat{\mu_{n}}(\theta)=\mathbf{E}_{\mathbf{x}\in\{0,1\}^{n}}e\bigl{(}\theta\bigl{(}u+(d_{2}-d_{1})L_{b}(\mathbf{x})\bigr{)}^{k}\bigr{)}, \tag{5.2}\]
where \(u:=d_{1}(b^{n}-1)/(b-1)\). The first stage of the decoupling procedure is to split the variables \(\mathbf{x}\) into \(k\) disjoint subsets of size \(n/k\). If \(\mathbf{x}=(x_{i})_{i\in[0,n)}\in\{0,1\}^{[0,n)}\), for each \(j\in[0,k)\) we write \(\mathbf{x}^{(j)}=(x_{ik+j})_{i\in[0,n/k)}\in\{0,1\}^{[0,n/k)}\). Then
\[L_{b}(\mathbf{x})=\sum_{j\in[0,k)}b^{j}L_{b^{k}}(\mathbf{x}^{(j)}). \tag{5.3}\]
(Note here that \(L_{b}\) is defined on \(\{0,1\}^{[0,n)}\), whereas \(L_{b^{k}}\) is defined on \(\{0,1\}^{[0,n/k)}\).) By (5.2) we have
\[\widehat{\mu_{n}}(\theta)=\mathbf{E}_{\mathbf{x}^{(0)},\ldots,\mathbf{x}^{(k-1)}\in\{0,1\}^{[0,n/k)}}e\biggl{(}\theta\bigl{(}u+(d_{2}-d_{1})\sum_{i\in[0,k)}b^{i}L_{b^{k}}(\mathbf{x}^{(i)})\bigr{)}^{k}\biggr{)}.\]
Expanding out the \(k\)th power and collecting terms, this can be written as
\[\mathbf{E}_{\mathbf{x}^{(0)},\ldots,\mathbf{x}^{(k-1)}\in\{0,1\}^{[0,n/k)}}\bigl{(}\prod_{j\in[0,k)}\Psi_{j}(\mathbf{x})\bigr{)}e\bigl{(}\theta q_{0}\prod_{i\in[0,k)}L_{b^{k}}(\mathbf{x}^{(i)})\bigr{)},\]
where
\[q_{0}:=k!(d_{2}-d_{1})^{k}b^{k(k-1)/2}\]
and \(\Psi_{j}\) is some \(1\)-bounded function of the variables \(\mathbf{x}^{(i)}\), \(i\in[0,k)\setminus\{j\}\), the precise nature of which does not concern us. The inequality \(q_{0}\leqslant b^{k^{2}}\) follows using \(|d_{2}-d_{1}|\leqslant b\) and the estimate \(k!\leqslant 3^{k(k-1)/2}\), since \(b\geqslant 3\). One may now apply the Cauchy-Schwarz inequality \(k\) times to eliminate the functions \(\Psi_{j}\) in turn. This procedure is well-known from the theory of hypergraph regularity [10] or from the proofs of so-called generalised von Neumann theorems in additive combinatorics [12]. For a detailed statement, see Proposition A.2. From this it follows that
\[\delta^{2^{k}}\leqslant\mathbf{E}e\big{(}\theta q_{0}\sum_{\omega\in\{0,1\}^{[0,k)}}(-1)^{|\omega|}\prod_{i\in[0,k)}L_{b^{k}}(\mathbf{x}_{\omega_{i}}^{(i)})\big{)},\]
where the average is over \(\mathbf{x}_{0}^{(0)},\ldots,\mathbf{x}_{0}^{(k-1)},\mathbf{x}_{1}^{(0)},\ldots,\mathbf{x}_{1}^{(k-1)}\in\{0,1\}^{[0,n/k)}\), and we write \(\omega=(\omega_{i})_{i\in[0,k)}\) and \(|\omega|=\sum_{i\in[0,k)}\omega_{i}\).
By pigeonhole there is some choice of \(\mathbf{x}_{1}^{(0)},\ldots,\mathbf{x}_{1}^{(k-1)}\) such that the remaining average over \(\mathbf{x}_{0}^{(0)},\ldots\mathbf{x}_{0}^{(k-1)}\) is at least \(\delta^{2^{k}}\). This may be written as \[\delta^{2^{k}}\leqslant\big{|}\mathbf{E}_{\mathbf{x}^{(0)},\ldots,\mathbf{x} ^{(k-1)}\in\{0,1\}^{[0,n/k)}}e\big{(}\theta q_{0}\prod_{i\in[0,k)}(L_{b^{k}}( \mathbf{x}^{(i)})+t_{i})\big{)}\big{|}\] where \(t_{i}:=-L_{b^{k}}(\mathbf{x}_{1}^{(i)})\). It follows that for at least \(\frac{1}{2}\delta^{2^{k}}2^{(k-1)n/k}\) choices of \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(k-1)}\in\{0,1\}^{[0,n/k)}\) we have \[\big{|}\mathbf{E}_{\mathbf{x}^{(0)}\in\{0,1\}^{[0,n/k)}}e\big{(}\theta q_{0}L _{b^{k}}(\mathbf{x}^{(0)})\prod_{i=1}^{k-1}(L_{b^{k}}(\mathbf{x}^{(i)})+t_{i} )\big{)}\big{|}\geqslant\delta^{2^{k}}/2. \tag{5.4}\] Let \(\alpha\in\mathbf{R}/\mathbf{Z}\) be arbitrary. Note that \[\tilde{\mathrm{w}}_{n}(\alpha)=\sum_{i\in[0,n-1)}\|\alpha b^{i} \|^{2}=\sum_{j\in[0,k)}\sum_{i\in[0,n/k)}\|\alpha b^{j+ik}\|^{2}\] \[\leqslant\big{(}\sum_{j\in[0,k)}b^{2j}\big{)}\sum_{i\in[0,n/k)}\| \alpha b^{ik}\|^{2}\leqslant b^{2k}\sum_{i\in[0,n/k)}\|\alpha b^{ik}\|^{2}.\] Therefore, using the inequality \(|1+e(t)|=2|\cos(\pi t)|\leqslant 2\exp(-\|t\|^{2})\), we have \[\big{|}\mathbf{E}_{\mathbf{y}\in\{0,1\}^{[0,n/k)}}e(\alpha L_{b^{ k}}(\mathbf{y}))\big{|}=\prod_{i\in[0,n/k)}\left|\frac{1+e(\alpha b^{ik})}{2}\right|\] \[\leqslant\exp\big{(}-\sum_{i\in[0,n/k)}\|\alpha b^{ik}\|^{2}\big{)} \leqslant\exp\big{(}-b^{-2k}\tilde{\mathrm{w}}_{n}(\alpha)\big{)}.\] Combining this with (5.4), Proposition 5.2 follows. ## 6. Sums of products of linear forms We turn now to the next step of the outline in Section 2, which we called additive expansion. The main result of the previous section, Proposition 5.2, is roughly of the form "for quite a few \(m\sim N^{k-1}\), \(\tilde{\mathrm{w}}_{n}(\theta m)\lesssim\log(2/\delta)\)". (The reader should not attach any precise meaning to the symbols \(\sim,\lesssim\) here.) The shortcoming of the statement as it stands is that the set of \(m\) is of size \(\sim 2^{(k-1)n/k}\), which is substantially smaller than \(N^{k-1}\) (recall that \(N=b^{n}\)). The aim of this section is to upgrade the conclusion of Proposition 5.2 to get a much larger set of \(m\). Here is the statement we will prove. **Proposition 6.1**.: _Set \(C:=b^{7k^{2}/2}\). Suppose that \(\delta\in(0,1]\) and that \(k\mid n\). Suppose that \(|\widehat{\mu_{n}}(\theta)|\geqslant\delta\), and that \(N\geqslant(2/\delta)^{C}\), where \(N:=b^{n}\). Then for at least \((\delta/2)^{C}N^{k-1}\) values of \(m\), \(|m|\leqslant CN^{k-1}\), we have \(\tilde{\mathrm{w}}_{n}(\theta m)\leqslant C\log(2/\delta)\)._ The basic idea of the proof is to take sums of a few copies of the set of \(m\) produced in Proposition 5.2 which (it turns out) expands this set of \(m\) dramatically, whilst retaining the property of \(\tilde{\mathrm{w}}_{n}(\theta m)\) being small. We assemble some ingredients. The key input is Theorem 2.3 (see, in addition to Section 2, Appendix B). We will also require some other lemmas of a miscellaneous type, and we turn to these first. **Lemma 6.2**.: _Let \(\varepsilon,U,V\) be real parameters with \(0<\varepsilon\leqslant 2^{-44}\) and \(U,V\geqslant 64/\varepsilon\). Suppose that \(\Omega\subset[-U,U]\times[-V,V]\) has size at least \(\varepsilon UV\). 
Then at least \(\varepsilon^{7}UV\) integers \(n\in[-2UV,2UV]\) may be written as \(u_{1}v_{1}+u_{2}v_{2}\) with \((u_{1},v_{1}),(u_{2},v_{2})\in\Omega\)._

Proof.: The conclusion is invariant under applying any of the four involutions \((u,v)\mapsto(\pm u,\pm v)\) to \(\Omega\), so without loss of generality we may suppose that \(\Omega\cap([0,U]\times[0,V])\) has size at least \(\varepsilon UV/4\). It then follows that \(\Omega\cap([\varepsilon U/32,U]\times[\varepsilon V/32,V])\) has size at least \(\varepsilon UV/8\). Covering this box by disjoint dyadic boxes \([2^{i},2^{i+1})\times[2^{j},2^{j+1})\) contained in \([\varepsilon U/64,2U]\times[\varepsilon V/64,2V]\), we see that there is some dyadic box \([U^{\prime},2U^{\prime})\times[V^{\prime},2V^{\prime})\), \(\varepsilon U/64\leqslant U^{\prime}\leqslant U\), \(\varepsilon V/64\leqslant V^{\prime}\leqslant V\), on which the density of \(\Omega\) is at least \(\varepsilon/32\). Without loss of generality, suppose that \(U^{\prime}\leqslant V^{\prime}\), and set \(X:=U^{\prime}V^{\prime}\geqslant 1\). Set \(\Omega^{\prime}:=\Omega\cap\big{(}[U^{\prime},2U^{\prime})\times[V^{\prime},2V^{\prime})\big{)}\). For \(n\in\mathbf{Z}\), denote by \(r(n)\) the number of representations of \(n\) as \(u_{1}v_{1}+u_{2}v_{2}\) with \((u_{1},v_{1}),(u_{2},v_{2})\in\Omega^{\prime}\), and by \(\tilde{r}(n)\) the number of representations as \(u_{1}v_{1}+u_{2}v_{2}\) with \((u_{1},v_{1}),(u_{2},v_{2})\in[U^{\prime},2U^{\prime})\times[V^{\prime},2V^{\prime})\). Thus \(r(n)\leqslant\tilde{r}(n)\). By Cauchy-Schwarz,
\[(\varepsilon X/32)^{4}\leqslant|\Omega^{\prime}|^{4}=\big{(}\sum_{n}r(n)\big{)}^{2}\leqslant|\operatorname{Supp}(r)|\sum_{n}\tilde{r}(n)^{2}. \tag{6.1}\]
Now, denoting by \(\nu(n)\) the number of divisors of \(n\) in the range \([U^{\prime},2U^{\prime})\),
\[\tilde{r}(n)\leqslant\sum_{m\leqslant 4X}\nu(m)\nu(n-m)=\sum_{\begin{subarray}{c}d,e\in[U^{\prime},2U^{\prime})\\ (d,e)|n\end{subarray}}\sum_{\begin{subarray}{c}m\leqslant 4X\\ d|m,e|n-m\end{subarray}}1\leqslant 8X\sum_{\begin{subarray}{c}d,e\in[U^{\prime},2U^{\prime})\\ (d,e)|n\end{subarray}}\frac{1}{[d,e]}.\]
Here, in the last step we used the fact that the set of \(m\) satisfying \(d\mid m\) and \(e\mid n-m\) is a single residue class modulo \([d,e]\) (the lowest common multiple of \(d\) and \(e\)), whose intersection with the interval \([1,4X)\) has size \(\leqslant 1+4X/[d,e]\leqslant 8X/[d,e]\) since \([d,e]\leqslant(2U^{\prime})^{2}\leqslant 4X\). Setting \(\delta:=(d,e)\) and \(d=\delta d^{\prime}\), \(e=\delta e^{\prime}\), so that \([d,e]=\delta d^{\prime}e^{\prime}\), it then follows that
\[\tilde{r}(n)\leqslant 8X\sum_{\delta|n}\frac{1}{\delta}\sum_{d^{\prime},e^{\prime}\in[U^{\prime}/\delta,2U^{\prime}/\delta)}\frac{1}{d^{\prime}e^{\prime}}\leqslant 8X\sum_{\delta|n}\frac{1}{\delta}.\]
Since \(\tilde{r}(n)\) is supported where \(n\leqslant 8X\), we have
\[\sum_{n}\tilde{r}(n)^{2}\leqslant(8X)^{2}\sum_{n\leqslant 8X}\big{(}\sum_{\delta|n}\frac{1}{\delta}\big{)}^{2}=(8X)^{2}\sum_{\delta_{1},\delta_{2}\leqslant 8X}\frac{1}{\delta_{1}\delta_{2}}\sum_{n\leqslant 8X}1_{[\delta_{1},\delta_{2}]|n}\]
\[\leqslant(8X)^{2}\sum_{\delta_{1},\delta_{2}\leqslant 8X}\frac{1}{\delta_{1}\delta_{2}}\big{(}\frac{8X}{[\delta_{1},\delta_{2}]}+1\big{)}.\]
The contribution from the \(+1\) term is \(\leqslant(8X)^{2}(1+\log 8X)^{2}<2^{10}X^{3}\), since \(X\geqslant 1\).
Since \([\delta_{1},\delta_{2}]\geqslant\sqrt{\delta_{1}\delta_{2}}\), the contribution from the main term is \(\leqslant 2^{8}\zeta(\frac{3}{2})^{2}X^{3}<2^{11}X^{3}\). It follows that \(\sum_{n}\tilde{r}(n)^{2}\leqslant 2^{12}X^{3}\). Comparing with (6.1), we obtain \(|\operatorname{Supp}(r)|\geqslant 2^{-32}\varepsilon^{4}X\geqslant 2^{-44} \varepsilon^{6}UV\). Since we are assuming that \(\varepsilon\leqslant 2^{-44}\), this is at least \(\varepsilon^{7}UV\), and the proof is complete. **Lemma 6.3**.: _Let \(X\geqslant 1\) be real, and suppose that \(S_{1},\ldots,S_{t}\subseteq[-X,X]\) are sets of integers with \(|S_{i}|\geqslant\eta X\). Then \(\big{|}\bigcap_{i=1}^{t}(S_{i}-S_{i})\big{|}\geqslant(\eta/5)^{t}X\)._ Proof.: We have \[\sum_{h_{2},\ldots,h_{t}}\big{(}\sum_{x}1_{S_{1}}(x)1_{S_{2}}(x+h_{2})\cdots 1 _{S_{t}}(x+h_{t})\big{)}=\prod_{i=1}^{t}|S_{i}|\geqslant\eta^{t}X^{t}.\] Since the \(h_{i}\) may be restricted to range over \([-2X,2X]\), which contains at most \(5X\) integers, there is some choice of \(h_{2},\ldots,h_{t}\) so that \(\sum_{x}1_{S_{1}}(x)1_{S_{2}}(x+h_{2})\cdots 1_{S_{t}}(x+h_{t})\geqslant(\eta/5)^{t}X\). That is, there is a set \(S\), \(|S|\geqslant(\eta/5)^{t}X\), such that \(S\subseteq S_{1}\cap(S_{2}-h_{2})\cap\cdots\cap(S_{t}-h_{t})\). But then \(S-S\subseteq\bigcap_{i=1}^{t}(S_{i}-S_{i})\), and the result is proved. We turn now to the heart of proof of Proposition 6.1. The key technical ingredient is the following. **Proposition 6.4**.: _Let \(d,r\) be positive integers with \(d\geqslant 2\). Let \(\alpha\in(0,1]\). Let \(m\) be an integer, set \(N:=d^{m}\), and suppose that \(N\geqslant(2/\alpha)^{(32d)^{r}}\). Suppose that \(t_{1},\ldots,t_{r}\) are integers with \(|t_{j}|\leqslant N\). Define \(L_{d}:\{0,1\}^{[0,m)}\to[0,N)\) as in (1.1). Suppose that \(A\subset\big{(}\{0,1\}^{[0,m)}\big{)}^{r}\) is a set of size at least \(\alpha 2^{mr}\). Then at least \((\alpha/2)^{(32d)^{r}}N^{r}\) integers _with \(|x|\leqslant(8dN)^{r}\) may be written as a \(\pm\) sum of at most \((4d)^{r}\) numbers \(\prod_{j=1}^{r}(L_{d}({\bf y}_{j})+t_{j})\) with \(({\bf y}_{1},\ldots,{\bf y}_{r})\in A\)._ Proof.: It is convenient to write \(\phi_{j}({\bf y}):=L_{d}({\bf y})+t_{j}\), \(j=1,\ldots,r\). Note for further use the containment \[\phi_{j}(\{0,1\}^{[0,m)})\subset[-2N,2N], \tag{6.2}\] which follows from the fact that \(|t_{j}|\leqslant N\). Turning to the proof, we proceed by induction on \(r\). In the case \(r=1\), we can apply Theorem 2.3. Noting that \(L_{d}(\{0,1,\ldots,d-1\}^{m})=\{0,1,\ldots,N-1\}\), we see that at least \(\alpha^{\log_{2}d}N\) elements of \(\{0,1,\ldots,N-1\}\) are the sum of \(d-1\) elements \(L_{d}({\bf y}_{1})\), \({\bf y}_{1}\in A\). Since, for any \({\bf y}_{1}^{(1)},\ldots,{\bf y}_{1}^{(d-1)}\in A\), we have \[\sum_{i=1}^{d-1}\phi_{1}({\bf y}_{1}^{(i)})=\sum_{i=1}^{d-1}L_{d}({\bf y}_{1} ^{(i)})+(d-1)t_{1},\] we see that at least \(\alpha^{\log_{2}d}N\) elements of \([-dN,dN]\) are the sum of \(d-1\) elements \(\phi_{1}({\bf y}_{1})\), \({\bf y}_{1}\in A\), which gives the required result in this case. Now suppose that \(r\geqslant 2\), and that we have proven the result for smaller values of \(r\). For each \({\bf y}_{r}\in\{0,1\}^{[0,m)}\), denote by \(A({\bf y}_{r})\subseteq(\{0,1\}^{[0,m)})^{r-1}\) the maximal set such that \(A({\bf y}_{r})\times\{{\bf y}_{r}\}\subseteq A\). 
By a simple averaging argument there is a set \(Y\) of at least \((\alpha/2)2^{m}\) values of \({\bf y}_{r}\) such that \(|A({\bf y}_{r})|\geqslant(\alpha/2)2^{m(r-1)}\). By the inductive hypothesis, for each \({\bf y}_{r}\in Y\) there is a set
\[B({\bf y}_{r})\subseteq[-(8dN)^{r-1},(8dN)^{r-1}], \tag{6.3}\]
with
\[|B({\bf y}_{r})|\geqslant(\alpha/4)^{(32d)^{r-1}}N^{r-1}, \tag{6.4}\]
such that everything in \(B({\bf y}_{r})\) is a \(\pm\) combination of at most \((4d)^{r-1}\) elements \(\phi_{1}({\bf y}_{1})\cdots\phi_{r-1}({\bf y}_{r-1})\) with \(({\bf y}_{1},\ldots,{\bf y}_{r-1})\in A({\bf y}_{r})\). Observe that everything in \((B({\bf y}_{r})-B({\bf y}_{r}))\phi_{r}({\bf y}_{r})\) is then a \(\pm\) combination of at most \(2(4d)^{r-1}\) elements \(\phi_{1}({\bf y}_{1})\cdots\phi_{r}({\bf y}_{r})\) with \(({\bf y}_{1},\ldots,{\bf y}_{r})\in A\).

Suppose now that \(z\in(d-1)\phi_{r}(Y)=\phi_{r}(Y)+\cdots+\phi_{r}(Y)\). Note that, by (6.2),
\[|z|<2dN. \tag{6.5}\]
For each such \(z\), pick a representation \(z=\phi_{r}({\bf y}_{r}^{(1)})+\cdots+\phi_{r}({\bf y}_{r}^{(d-1)})\) with \({\bf y}_{r}^{(i)}\in Y\) for \(i=1,\ldots,d-1\), and define \(S(z):=\bigcap_{i=1}^{d-1}(B({\bf y}_{r}^{(i)})-B({\bf y}_{r}^{(i)}))\). By (6.3), (6.4) and Lemma 6.3 (taking \(X:=(8dN)^{r-1}\), \(\eta:=(8d)^{-(r-1)}(\alpha/4)^{(32d)^{r-1}}\) and \(t:=d-1\) in that lemma) we have
\[|S(z)|\geqslant 5^{-(d-1)}(8d)^{-(r-1)(d-2)}(\alpha/4)^{(32d)^{r-1}(d-1)}N^{r-1}\geqslant(\alpha/2)^{4d(32d)^{r-1}}N^{r-1}. \tag{6.6}\]
Here, the second bound is crude and uses the inequality
\[(2d+2)(32d)^{r-1}\geqslant(d-1)\log_{2}5+(r-1)(d-2)\log_{2}(8d),\]
which holds by a large margin in the range \(d\geqslant 2\), \(r\geqslant 2\) relevant here. Note that everything in \(S(z)z\) is a \(\pm\) combination of at most \(2(d-1)(4d)^{r-1}\) elements \(\phi_{1}(\mathbf{y}_{1})\cdots\phi_{r}(\mathbf{y}_{r})\) with \((\mathbf{y}_{1},\ldots,\mathbf{y}_{r})\in A\).

Set \(\Omega:=\bigcup_{z\in(d-1)\phi_{r}(Y)}(S(z)\times\{z\})\). Then \(\Omega\subset[-U,U]\times[-V,V]\) where by (6.3) and (6.5) we can take \(U:=2(8dN)^{r-1}\) and \(V:=2dN\). Now by Theorem 2.3, and recalling that \(|Y|\geqslant(\alpha/2)2^{m}\), we have \(|(d-1)\phi_{r}(Y)|=|(d-1)L_{d}(Y)|\geqslant(\alpha/2)^{\log_{2}d}N\). From this and (6.6), we have \(|\Omega|\geqslant(\alpha/2)^{4d(32d)^{r-1}+\log_{2}d}N^{r}\). Thus, noting that \(UV=2^{3r-1+r\log_{2}d}N^{r}\), it follows that \(|\Omega|\geqslant\varepsilon UV\) with
\[\varepsilon:=(\alpha/2)^{4d(32d)^{r-1}+3r+(r+1)\log_{2}d}. \tag{6.7}\]
Now we aim to apply Lemma 6.2. For such an application to be valid, we require \(\varepsilon<2^{-44}\), which is comfortably a consequence of (6.7). We also need that \(U,V\geqslant 64/\varepsilon\), which follows from (6.7) and the lower bound on \(N\) in the hypotheses of the proposition. Note that if \((u_{1},v_{1})\in S(z)\times\{z\}\) and \((u_{2},v_{2})\in S(z^{\prime})\times\{z^{\prime}\}\) are elements of \(\Omega\), then \(u_{1}v_{1}+u_{2}v_{2}\in S(z)z+S(z^{\prime})z^{\prime}\), and so is a \(\pm\) combination of at most \((4d)^{r}\) elements \(\phi_{1}(\mathbf{y}_{1})\cdots\phi_{r}(\mathbf{y}_{r})\) with \((\mathbf{y}_{1},\ldots,\mathbf{y}_{r})\in A\); and by Lemma 6.2 there are \(\geqslant\varepsilon^{7}UV>\varepsilon^{7}N^{r}\) such elements. To conclude the argument, we need only check that \(\varepsilon^{7}\geqslant(\alpha/2)^{(32d)^{r}}\) which, using (6.7), comes down to checking that \(4d(32d)^{r-1}\geqslant 7(3r+(r+1)\log_{2}d)\), which is comfortably true for all \(d,r\geqslant 2\).
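(Theorem 2.3 did the heavy lifting in both the base case and the inductive step above. Purely as an illustrative aside — with small, ad hoc parameters and random sets, and in no way a substitute for Appendix B — its conclusion can be tested numerically as follows.)

```python
import itertools, math, random

def sumset_density(sets, n, r):
    # density of A_1 + ... + A_r (coordinatewise sums of 0/1 vectors) in {0,...,r}^n
    sums = {tuple(map(sum, zip(*combo))) for combo in itertools.product(*sets)}
    return len(sums) / (r + 1) ** n

n, r = 5, 3
gamma = math.log2(r + 1) / r
cube = list(itertools.product((0, 1), repeat=n))
for _ in range(20):
    sets = [random.sample(cube, random.randint(1, len(cube))) for _ in range(r)]
    alphas = [len(A) / 2 ** n for A in sets]
    assert sumset_density(sets, n, r) >= math.prod(alphas) ** gamma - 1e-12
print("the bound of Theorem 2.3 held for every sampled family of sets")
```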
Finally, we are ready for the proof of the main result of the section, Proposition 6.1, which results from combining Propositions 5.2 and 6.4. Proof of Proposition 6.1.: In the following proof we suppress a number of short calculations, showing that various constants are bounded by \(C=b^{7k^{2}/2}\). These calculations are all simple finger exercises using the assumption that \(b\geqslant 3\) and \(k\geqslant 2\). First apply Proposition 5.2. As in the statement of that Proposition, we obtain \(t_{1},\ldots,t_{k-1}\in\mathbf{Z}\), \(|t_{j}|\leqslant N\) such that, for at least \(\frac{1}{2}\delta^{2^{k}}2^{(k-1)n/k}\) choices of \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(k-1)}\in\{0,1\}^{[0,n/k)}\) we have \[\tilde{\mathrm{w}}_{n}\big{(}\theta q_{0}\prod_{i=1}^{k-1}(L_{b^{k}}(\mathbf{x}^ {(i)})+t_{i})\big{)}\leqslant 2^{k}b^{2k}\log(2/\delta), \tag{6.8}\] for some positive integer \(q_{0}\leqslant b^{k^{2}}\). (For the definition of \(\tilde{\mathrm{w}}_{n}\), see Definition 5.1.) To this conclusion, we apply Proposition 6.4, taking \(m:=n/k\), \(r:=k-1\) and \(d:=b^{k}\) in that proposition, and taking \(A\) to be the set of all \((\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(k-1)})\) as just described; thus we may take \(\alpha:=\delta^{2^{k}}/2\). Note that \(N=d^{m}=b^{n}\) is the same quantity. The reader may check that the lower bound on \(N\) required for this application of Proposition 6.4 is a consequence of the assumption on \(N\) in Proposition 6.1. We conclude that at least \((\delta^{2^{k}}/4)^{(32b^{k})^{k-1}}N^{k-1}>(\delta/2)^{C}N^{k-1}\) integers \(x\) with \(|x|\leqslant(8b^{k}N)^{k-1}\) may be written as a \(\pm\) sum of at most \((4b^{k})^{k-1}\) numbers \(\prod_{i=1}^{k-1}(L_{b^{k}}(\mathbf{x}^{(i)})+t_{i})\), with \((\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(k-1)})\in A\). By (6.8), the fact that \(\tilde{\mathrm{w}}_{n}(-\alpha)=\tilde{\mathrm{w}}_{n}(\alpha)\), as well as the (easily-verified) subadditivity property \[\tilde{\mathrm{w}}_{n}(\alpha_{1}+\cdots+\alpha_{s})\leqslant s(\tilde{ \mathrm{w}}_{n}(\alpha_{1})+\cdots+\tilde{\mathrm{w}}_{n}(\alpha_{s})),\] we see that for all such \(x\) we have \[\tilde{\mathrm{w}}_{n}(\theta q_{0}x)\leqslant(4b^{k})^{2(k-1)}2^{k}b^{2k} \log(2/\delta)<C\log(2/\delta).\] Finally, note that for all these \(x\) we have \(|q_{0}x|\leqslant b^{k^{2}}(8b^{k})^{k-1}N^{k-1}\), which is less than \(CN^{k-1}\). This concludes the proof. ## 7. From digital to diophantine In this section we turn to the final step in the outline of Section 2, the aim of which is to convert the 'digital' conclusion of Proposition 6.1 to the 'diophantine' conclusion of Proposition 2.1. Before turning to detailed statements, we comment on the notion of a centred base \(b\) expansion. _Centred base \(b\) expansions._ Consider \(\alpha\in\mathbf{R}/\mathbf{Z}\). Then there are essentially unique choices of integers \(\alpha_{j}\in(-\frac{b}{2},\frac{b}{2}]\) such that \[\alpha=\alpha_{0}+\alpha_{1}b^{-1}+\alpha_{2}b^{-2}+\ldots(\mathrm{mod}\ 1). \tag{7.1}\] We call this the _centred_ base \(b\) expansion of \(\alpha(\mathrm{mod}\ 1)\). Let us pause to explain the existence of such expansions. When \(b\) is odd, so that \((-\frac{b}{2},\frac{b}{2}]=\{-\frac{1}{2}(b-1),\ldots,\frac{1}{2}(b-1)\}\), the centred expansion may be obtained from the more usual base \(b\) expansion of \(\alpha+\frac{b}{2}\), noting that \(\frac{b}{2}=\frac{1}{2}(b-1)(1+b^{-1}+b^{-2}+\cdots)\). 
As usual, there is some ambiguity when all the digits from some point on are \(\frac{1}{2}(b-1)\); any such number can also be written with all digits from some point on being \(-\frac{1}{2}(b-1)\). For consistency with the usual base \(b\) expansions, we always prefer the latter representation. When \(b\) is even, so that \((-\frac{b}{2},\frac{b}{2}]=\{-\frac{1}{2}(b-2),\ldots,\frac{1}{2}b\}\), one instead considers the usual base \(b\) expansion of \(\alpha+\frac{b(b-2)}{2(b-1)}\), noting now that \(\frac{b(b-2)}{2(b-1)}=\frac{1}{2}(b-2)(1+b^{-1}+b^{-2}+\cdots)\). **Definition 7.1**.: Given \(\alpha\in\mathbf{R}/\mathbf{Z}\), denote by \(\mathrm{w}_{n}(\alpha)\) the number of nonzero digits among the first \(n\) digits \(\alpha_{0},\alpha_{1},\ldots,\alpha_{n-1}\) in the centred expansion (7.1). We record the connection between \(\mathrm{w}_{n}\) and the 'analytic' proxy \(\tilde{\mathrm{w}}_{n}\), introduced in Definition 5.1. **Lemma 7.2**.: _Suppose that \(b\geqslant 3\). Then \(\tilde{\mathrm{w}}_{n}(\alpha)\leqslant\mathrm{w}_{n}(\alpha)\leqslant 16b^{2} \tilde{\mathrm{w}}_{n}(\alpha)\)._ Proof.: Let the centred expansion of \(\alpha(\mathrm{mod}\;1)\) be (7.1), and suppose that \(\alpha_{i}\) is a non-zero digit. We have \(\alpha b^{i-1}\equiv\sum_{j\geqslant 0}\alpha_{i+j}b^{-j-1}(\mathrm{mod}\;1)\). However, \[|\sum_{j\geqslant 0}\alpha_{i+j}b^{-j-1}|\leqslant\frac{b}{2}\sum_{j\geqslant 0 }b^{-j-1}=\frac{b}{2(b-1)}\leqslant\frac{3}{4},\] and, since \(\alpha_{i}\neq 0\), \[|\sum_{j\geqslant 0}\alpha_{i+j}b^{-j-1}|\geqslant\frac{1}{b}-\frac{b}{2}\sum_ {j\geqslant 1}b^{-j-1}=\frac{b-2}{2b(b-1)}\geqslant\frac{1}{4b}.\] Thus \(\|\alpha b^{i-1}\|\geqslant 1/4b\), and the upper bound follows. The lower bound is not needed elsewhere in the paper, but we sketch the proof for completeness. Let \(I:=\{i:\alpha_{i}\neq 0\}\). Given \(j\), denote by \(i(j)\) the distance from \(j\) to the smallest element of \(I\) which is greater than \(j\). Then \[\|\alpha b^{j}\|=\|\sum_{i\in I,i>j}\alpha_{i}b^{-i+j}\|\leqslant\frac{b}{2} \sum_{m\geqslant i(j)}b^{-m}=\frac{b^{2}}{2(b-1)}b^{-i(j)}.\] Now square this and sum over \(j\), and use the fact that \(\#\{j:i(j)=i\}\leqslant|I|=\mathrm{w}_{n}(\alpha)\) for all \(i\). _Remarks._ This upper bound breaks down when \(b=2\), as may be seen by considering \(\alpha\) of the form \(1-2^{-m}\). This is the main reason for the restriction to \(b\geqslant 3\) in the paper. Here is the main result of the section. **Proposition 7.3**.: _Let \(b\geqslant 3\) be an integer. Let \(r,M,n\) be a positive integers, and set \(N:=b^{n}\). Let \(\eta\in(0,1]\) be real. Suppose that \(M,N\geqslant b^{20r}\eta^{-2}\). Suppose that \(\theta\in\mathbf{R}\), and that \(\mathrm{w}_{n}(\theta m)\leqslant r\) for at least \(\eta M\) values of \(m\in[-M,M]\). Then there is some positive integer \(q\leqslant b^{20r}\eta^{-2}\) such that \(\|\theta q\|\leqslant b^{20r}\eta^{-2}M^{-1}N^{-1}\)._ Before giving the proof, we assemble some lemmas. In the first of these, we will again be concerned with centred expansions in base \(b\), but this time of integers. Every integer \(x\) has a unique finite-length centred base \(b\) expansion \[x=x_{0}+x_{1}b+x_{2}b^{2}+\ldots. \tag{7.2}\] with \(x_{i}\in(-\frac{b}{2},\frac{b}{2}]\). To see uniqueness, note that \(x_{0}\) is uniquely determined by \(x(\mathrm{mod}\ b)\), then \(x_{1}\) is uniquely determined by \(\frac{x-x_{0}}{b}(\mathrm{mod}\ b)\), and so on. 
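(The recursion just described can be spelled out in a few lines of code — an ad hoc illustration only, with names of my own. It also computes the digit count \(\mathrm{d}_{b}\) used below, and lets one spot-check Proposition 2.4 by brute force on a small random sample, where of course the bound is very far from tight.)

```python
import itertools, random

def centred_digits(x, b):
    # centred base-b digits of an integer (least significant first), digits in (-b/2, b/2]
    digits = []
    while x != 0:
        d = x % b
        if 2 * d > b:       # move the residue from {0,...,b-1} into (-b/2, b/2]
            d -= b
        digits.append(d)
        x = (x - d) // b
    return digits

def d_b(x, b):
    # number of nonzero digits in the centred base-b expansion of x
    return sum(1 for d in centred_digits(x, b) if d != 0)

def energy(A):
    # additive energy E(A): quadruples with a1 + a2 = a3 + a4
    return sum(1 for a1, a2, a3, a4 in itertools.product(A, repeat=4) if a1 + a2 == a3 + a4)

print(list(reversed(centred_digits(6277, 10))))   # [1, -4, 3, -2, -3]: the base-10 example below

# brute-force spot-check of Proposition 2.4 on a small random sample
b, r = 5, 2
pool = [x for x in range(1, 20000) if d_b(x, b) <= r]
A = random.sample(pool, 30)
assert energy(A) <= (2 * b) ** (4 * r) * len(A) ** 2
```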
Strictly speaking, we do not need the existence in this paper but one way to see it is to take the usual base \(b\) expansion and modify from the right. For instance, in base \(10\) we have, denoting the 'digit' \(-d\) by \(\overline{d}\), \(6277=628\overline{3}=63\overline{23}=1\overline{43}\overline{23}\). Denote by \(\mathrm{d}_{b}(x)\) the number of nonzero digits in this expansion of \(x\). The set of \(x\) for which \(\mathrm{d}_{b}(x)\leqslant r\) is a kind of "digital Hamming ball". As for true Hamming balls [4, 14] subsets of this set have little additive structure. Such a result was stated as Proposition 2.4. We recall the statement now. Recall that, if \(A\subset\mathbf{Z}\) is a finite set, the additive energy \(E(A)\) is the number of quadruples \((a_{1},a_{2},a_{3},a_{4})\in A\times A\times A\times A\) with \(a_{1}+a_{2}=a_{3}+a_{4}\). **Proposition 2.4**.: _Let \(r\in\mathbf{Z}_{\geqslant 0}\). Suppose that \(A\subset\mathbf{Z}\) is a finite set, all of whose elements have at most \(r\) nonzero digits in their centred base \(b\) expansion. Then \(E(A)\leqslant(2b)^{4r}|A|^{2}\)._ The proof of Proposition 2.4 will proceed by induction. However, to make this work, we need to prove a more general statement, involving four potentially different sets \(A_{1},A_{2},A_{3},A_{4}\) instead of just one, as well as the provision for a 'carry' in base \(b\) arithmetic. Here is the more general statement, from which Proposition 2.4 follows immediately. **Lemma 7.4**.: _Let \(r_{1},r_{2},r_{3},r_{4}\in\mathbf{Z}_{\geqslant 0}\). For each \(i\in\{1,2,3,4\}\), suppose that \(A_{i}\subset\mathbf{Z}\) is a finite set, all of whose elements have at most \(r_{i}\) nonzero digits in their centred base \(b\) expansion. Let \(e\in\mathbf{Z}\), \(|e|<b\). Then the number of quadruples \((a_{1},a_{2},a_{3},a_{4})\in A_{1}\times A_{2}\times A_{3}\times A_{4}\) with \(a_{1}+a_{2}=a_{3}+a_{4}+e\) is at most \((2b)^{r_{1}+r_{2}+r_{3}+r_{4}}|A_{1}|^{1/2}|A_{2}|^{1/2}|A_{3}|^{1/2}|A_{4}|^{ 1/2}\)._ Proof.: We proceed by induction on \(\sum_{j=1}^{4}|A_{j}|+\sum_{j=1}^{4}r_{j}\), the result being obvious when this quantity is zero. Suppose now that \(\sum_{j=1}^{4}r_{j}=n>0\) and that the result has been proven for all smaller values of \(n\). If any of the \(A_{j}\) are empty, or if \(A_{1}=A_{2}=A_{3}=A_{4}=\{0\}\), the result is obvious. Suppose this is not the case, but that \(b\) divides every element of \(\bigcup_{j=1}^{4}A_{j}\). Let \(b^{m}\) be the largest power of \(b\) which divides every element of \(\bigcup_{j=1}^{4}A_{j}\), this being well-defined since this set contains at least one nonzero element. Then, if the number of quadruples in \(A_{1}\times A_{2}\times A_{3}\times A_{4}\) with \(a_{1}+a_{2}=a_{3}+a_{4}+e\) is nonzero, we must have \(e=0\), and the number of such quadruples is the same as the number in \(\frac{1}{b^{m}}A_{1}\times\frac{1}{b^{m}}A_{2}\times\frac{1}{b^{m}}A_{3}\times \frac{1}{b^{m}}A_{4}\). Thus, replacing \(A_{j}\) by \(\frac{1}{b^{m}}A_{j}\), we may assume that not all the elements of \(\bigcup_{j=1}^{4}A_{j}\) are divisible by \(b\). For each \(j\in\{1,2,3,4\}\) and for each \(i\in(-\frac{b}{2},\frac{b}{2}]\), write \(A_{j}^{(i)}\) for the set of \(x\in A_{j}\) whose first digit \(x_{0}\) (in the centred base \(b\) expansion (7.2)) is \(i\). Write \(\alpha_{j}(i)\) for the relative density of \(A_{j}^{(i)}\) in \(A_{j}\), that is to say \(|A_{j}^{(i)}|=\alpha_{j}(i)|A_{j}|\). 
Any quadruple \((a_{1},a_{2},a_{3},a_{4})\) with \(a_{1}+a_{2}=a_{3}+a_{4}+e\) must have \(a_{j}\in A_{j}^{(i_{j})}\), where \(i_{1}+i_{2}\equiv i_{3}+i_{4}+e(\text{mod }b)\). Let us estimate the number of such quadruples \((a_{1},a_{2},a_{3},a_{4})\), for each quadruple \((i_{1},i_{2},i_{3},i_{4})\in(-\frac{b}{2},\frac{b}{2}]^{4}\) satisfying this condition. First note that \(i_{1}+i_{2}=i_{3}+i_{4}+e+e^{\prime}b\) for some integer \(e^{\prime}\), where \[|e^{\prime}|\leqslant\frac{1}{b}\big{(}|i_{1}+i_{2}-i_{3}-i_{4}|+|e|\big{)} \leqslant\frac{3(b-1)}{b}<b,\] where here we noted that \(|i_{1}-i_{3}|,|i_{2}-i_{4}|,|e|\leqslant b-1\). We then have \(\frac{1}{b}(a_{1}-i_{1})+\frac{1}{b}(a_{2}-i_{2})-\frac{1}{b}(a_{3}-i_{3})- \frac{1}{b}(a_{4}-i_{4})=-e^{\prime}\). Now the set \(A_{j}^{\prime}:=\frac{1}{b}(A_{j}^{(i_{j})}-i_{j})\) is a finite set of integers, all of whose elements \(x\) have \(\text{d}_{b}(x)\leqslant r_{j}^{\prime}:=r_{j}-1_{i_{j}\neq 0}\). Note that \(\sum_{j=1}^{4}|A_{j}^{\prime}|+\sum_{j=1}^{4}r_{j}^{\prime}<\sum_{j=1}^{4}|A_ {j}|+\sum_{j=1}^{4}r_{j}\); if any \(i_{j}\) is not zero, this follows from the fact that \(r_{j}^{\prime}=r_{j}-1\), whereas if \(i_{1}=i_{2}=i_{3}=i_{4}=0\) we have \(\sum_{j=1}^{4}|A_{j}^{\prime}|=\sum_{j=1}^{4}|A_{j}^{(0)}|<\sum_{j=1}^{4}|A_ {j}|\), since not every element of \(\bigcup_{j=1}^{4}A_{j}\) is a multiple of \(b\). It follows from the inductive hypothesis that the numbers of quadruples \((a_{1},a_{2},a_{3},a_{4})\) with \(a_{1}+a_{2}=a_{3}+a_{4}+e\), and with \(a_{j}\in A_{j}^{(i_{j})}\), \(j=1,\ldots,4\), is bounded above by \((2b)^{r_{1}+r_{2}+r_{3}+r_{4}-\#\{j:i_{j}\neq 0\}}\prod_{j=1}^{4}|A_{j}^{(i_{j} )}|^{1/2}\). To complete the inductive step, it is therefore enough to show that \[\sum_{i_{1}+i_{2}\equiv i_{3}+i_{4}+e(\text{mod }b)}(2b)^{-\#\{j:i_{j}\neq 0\}} \prod_{j=1}^{4}\alpha_{j}(i_{j})^{1/2}\leqslant 1. \tag{7.3}\] If \(e\not\equiv 0(\text{mod }b)\) then we have \(\#\{j:i_{j}\neq 0\}\geqslant 1\) for all \((i_{1},i_{2},i_{3},i_{4})\) in this sum, and moreover (where all congruences are \((\text{mod }b)\)) \[\sum_{i_{1}+i_{2}\equiv i_{3}+i_{4}+e}\prod_{j=1}^{4}\alpha_{j}(i _{j})^{1/2}\] \[=\sum_{x\in\mathbf{Z}/b\mathbf{Z}}\big{(}\sum_{i_{1}+i_{2}\equiv x+ e}\alpha_{1}(i_{1})^{1/2}\alpha_{2}(i_{2})^{1/2}\big{)}\big{(}\sum_{i_{3}+i_{4} \equiv x}\alpha_{3}(i_{3})^{1/2}\alpha_{4}(i_{4})^{1/2}\big{)}\] \[\leqslant\sum_{x\in\mathbf{Z}/b\mathbf{Z}}\big{(}\sum_{i_{1}+i_{2 }\equiv x+e}\frac{\alpha_{1}(i_{1})+\alpha_{2}(i_{2})}{2}\big{)}\big{(}\sum_{i _{3}+i_{4}\equiv x}\frac{\alpha_{3}(i_{3})+\alpha_{4}(i_{4})}{2}\big{)}=b,\] since \(\sum_{i}\alpha_{j}(i)=1\) for each \(j\). Therefore (7.3) holds in this case. Suppose, then, that \(e\equiv 0(\text{mod }b)\), which means that \(e=0\). Then, if \(i_{1}+i_{2}\equiv i_{3}+i_{4}(\text{mod }b)\) we either have \((i_{1},i_{2},i_{3},i_{4})=(0,0,0,0)\), or else \(\#\{j:i_{j}\neq 0\}\geqslant 2\), and so to establish (7.3) it suffices to show \[\prod_{j=1}^{4}\alpha_{j}(0)^{1/2}+(2b)^{-2}\sum_{\begin{subarray}{c}i_{1}+i_ {2}\equiv i_{3}+i_{4}(\text{mod }b)\\ (i_{1},i_{2},i_{3},i_{4})\neq(0,0,0,0)\end{subarray}}\prod_{j=1}^{4}\alpha_{j} (i_{j})^{1/2}\leqslant 1. \tag{7.4}\] Write \(\varepsilon_{j}:=1-\alpha_{j}(0)\). We first estimate the contribution to the sum where none of \(i_{1},i_{2},i_{3},i_{4}\) is zero. 
We have, similarly to the above (and again with congruences being \((\text{mod }b)\)) \[\sum_{\begin{subarray}{c}i_{1}+i_{2}\equiv i_{3}+i_{4}\\ i_{1}i_{2}i_{3}i_{4}\neq 0\end{subarray}}\prod_{j=1}^{4}\alpha_{j}(i_{j})^{1/2}\] \[=\sum_{x\in\mathbf{Z}/b\mathbf{Z}}\big{(}\sum_{\begin{subarray}{c }i_{1}+i_{2}\equiv x\\ i_{1}i_{2}\neq 0\end{subarray}}\alpha_{1}(i_{1})^{1/2}\alpha_{2}(i_{2})^{1/2} \big{)}\big{(}\sum_{\begin{subarray}{c}i_{3}+i_{4}\equiv x\\ i_{3}i_{4}\neq 0\end{subarray}}\alpha_{3}(i_{3})^{1/2}\alpha_{4}(i_{4})^{1/2} \big{)}\] \[\leqslant\sum_{x\in\mathbf{Z}/b\mathbf{Z}}\sum_{\begin{subarray}{c }i_{1}+i_{2}\equiv x\\ i_{1}i_{2}\neq 0\end{subarray}}\big{(}\frac{\alpha_{1}(i_{1})+\alpha_{2}(i_{2})}{2} \big{)}\big{(}\sum_{\begin{subarray}{c}i_{3}+i_{4}\equiv x\\ i_{3}i_{4}\neq 0\end{subarray}}\frac{\alpha_{3}(i_{3})+\alpha_{4}(i_{4})}{2}\big{)}\] \[\leqslant b\big{(}\frac{\varepsilon_{1}+\varepsilon_{2}}{2}\big{)} \big{(}\frac{\varepsilon_{3}+\varepsilon_{4}}{2}\big{)}<b\sum_{j=1}^{4} \varepsilon_{j}.\] Next we estimate the contribution to the sum in (7.4) from the terms where at least one, but not all, of \(i_{1},i_{2},i_{3},i_{4}\) are zero. In each such term, at least two \(i_{j},i_{j^{\prime}}\) are not zero, say with \(j<j^{\prime}\). Fix a choice of \(j,j^{\prime}\). Then for each \(i_{j},i_{j^{\prime}}\) there are at most two choices of the other \(i_{t}\), \(t\in\{1,2,3,4\}\setminus\{j,j^{\prime}\}\), one of which must be zero, and the other then being determined by the relation \(i_{1}+i_{2}\equiv i_{3}+i_{4}(\operatorname{mod}\,b)\). It follows that the contribution to the sum in (7.4) from this choice of \(j,j^{\prime}\) is \[\leqslant 2\sum_{i_{j},i_{j^{\prime}}\neq 0}\alpha_{j}(i_{j})^{1/2} \alpha_{j^{\prime}}(i_{j^{\prime}})^{1/2} =2\big{(}\sum_{i\neq 0}\alpha_{j}(i)^{1/2}\big{)}\big{(}\sum_{i \neq 0}\alpha_{j^{\prime}}(i)^{1/2}\big{)}\] \[\leqslant 2b\varepsilon_{j}^{1/2}\varepsilon_{j^{\prime}}^{1/2} \leqslant b(\varepsilon_{j}+\varepsilon_{j^{\prime}}),\] where in the middle step we used Cauchy-Schwarz and the fact that \(\sum_{i\neq 0}\alpha_{j}(i)=\varepsilon_{j}\). Summing over the six choices of \(j,j^{\prime}\) gives an upper bound of \(3b\sum_{j=1}^{4}\varepsilon_{j}\). Putting all this together, we see that the LHS of (7.4) is bounded above by \(\prod_{j=1}^{4}(1-\varepsilon_{j})^{1/2}+\frac{1}{b}\sum_{j=1}^{4} \varepsilon_{j}\). Using \(\prod_{j=1}^{4}(1-\varepsilon_{j})^{1/2}\leqslant 1-\frac{1}{2}\sum_{j=1}^{4} \varepsilon_{j}\), it follows that this is at most \(1\). This completes the proof of (7.4), and hence of Lemma 7.4. Now we turn to the proof of Proposition 7.3. Proof of Proposition 7.3.: Consider the map \(\psi:\mathbf{R}\to\mathbf{Z}\) defined as follows. If \(\alpha(\operatorname{mod}\,1)\) has centred base \(b\) expansion as in (7.1), set \(\psi(\alpha):=\alpha_{0}b^{n-1}+\cdots+\alpha_{n-2}b+\alpha_{n-1}\). Observe that \[\operatorname{d}_{b}(\psi(\alpha))=\operatorname{w}_{n}(\alpha). \tag{7.5}\] Note that \[\|\alpha-b^{1-n}\psi(\alpha)\|\leqslant\sum_{i\geqslant n}\frac{b}{2}b^{-i} \leqslant\frac{3}{4}b^{1-n}. 
\tag{7.6}\] Thus if \(\alpha_{1}+\alpha_{2}=\alpha_{3}+\alpha_{4}\) then \[\|b^{1-n}(\psi(\alpha_{1})+\psi(\alpha_{2})-\psi(\alpha_{3})-\psi(\alpha_{4}) )\|\leqslant 3b^{1-n}.\] Note also that, since \(\psi\) takes values in \(\mathbf{Z}\cap[-\frac{3}{4}b^{n},\frac{3}{4}b^{n}]\), we have \[|\psi(\alpha_{1})+\psi(\alpha_{2})-\psi(\alpha_{3})-\psi(\alpha_{4})|\leqslant 3 b^{n}.\] Now if \(x\in\mathbf{Z}\) is an integer with \(\|b^{1-n}x\|\leqslant 3b^{1-n}\) and \(|x|\leqslant 3b^{n}\) then \(x\) takes (at most) one of the \(7(6b+1)\) values \(\lambda b^{n-1}+\lambda^{\prime}\), \(\lambda\in\{-3b,\ldots,3b\}\), \(\lambda^{\prime}\in\{0,\pm 1,\pm 2,\pm 3\}\). Denoting by \(\Sigma\) the set consisting of these \(7(6b+1)\) values, we see that \(\psi\) has the following almost-homomorphism property: if \(\alpha_{1}+\alpha_{2}=\alpha_{3}+\alpha_{4}\) then \[\psi(\alpha_{1})+\psi(\alpha_{2})-\psi(\alpha_{3})-\psi(\alpha_{4})\in\Sigma.\] With parameters as in the statement of Proposition 7.3, consider the map \(\pi:[-M,M]\to\mathbf{Z}\) given by \[\pi(m):=\psi(\theta m). \tag{7.7}\] Since the map \(m\mapsto\theta m\) is a homomorphism from \({\bf Z}\) to \({\bf R}\), we see that \(\pi\) also has an almost-homomorphism property, namely that if \(m_{1}+m_{2}=m_{3}+m_{4}\) then \[\pi(m_{1})+\pi(m_{2})-\pi(m_{3})-\pi(m_{4})\in\Sigma. \tag{7.8}\] Denote by \(\mathscr{M}\) the set of all \(m\in[-M,M]\) such that \({\rm w}_{n}(\theta m)\leqslant r\). Thus, by the assumptions of Proposition 7.3, \(|\mathscr{M}|\geqslant\eta M\). Denote \(A:=\pi(\mathscr{M})\). By the definition (7.7) of \(\pi\), (7.5) and the definition of \(\mathscr{M}\), we see that \({\rm d}_{b}(a)\leqslant r\) for all \(a\in A\). For \(a\in A\), denote by \(X_{a}:=\pi^{-1}(a)\cap\mathscr{M}\) the \(\pi\)-fibre above \(a\). Decompose \(A\) according to the dyadic size of these fibres, thus for \(j\in{\bf Z}_{\geqslant 0}\) set \[A_{j}:=\{a\in A:2^{-j-1}M<|X_{a}|\leqslant 2^{-j}M\}. \tag{7.9}\] Denote by \(\mathscr{M}_{j}\subset\mathscr{M}\) the points of \(\mathscr{M}\) lying above \(A_{j}\), that is to say \(\mathscr{M}_{j}:=\bigcup_{a\in A_{j}}X_{a}\). Define \(\eta_{j}\) by \(|\mathscr{M}_{j}|=\eta_{j}M\). Since \(\mathscr{M}\) is the disjoint union of the \(\mathscr{M}_{j}\), we have \[\sum_{j}\eta_{j}\geqslant\eta. \tag{7.10}\] By (7.9) we have \(2^{-j-1}M|A_{j}|\leqslant|\mathscr{M}_{j}|\leqslant 2^{-j}M|A_{j}|\), and so \[2^{j}\eta_{j}\leqslant|A_{j}|\leqslant 2^{j+1}\eta_{j}. \tag{7.11}\] Now by a simple application of the Cauchy-Schwarz inequality any subset of \([-M,M]\) of size at least \(\varepsilon M\) has at least \(\varepsilon^{4}M^{3}/4\) additive quadruples. In particular, for any \(j\in{\bf Z}_{\geqslant 0}\) there are \(\geqslant\eta_{j}^{4}M^{3}/4\) additive quadruples in \(\mathscr{M}_{j}\). By (7.8), there is some \(\sigma_{j}\in\Sigma\) such that, for \(\geqslant 2^{-10}b^{-1}\eta_{j}^{4}M^{3}\) additive quadruples in \(\mathscr{M}_{j}\), we have \[\pi(m_{1})+\pi(m_{2})=\pi(m_{3})+\pi(m_{4})+\sigma_{j}. \tag{7.12}\] For each \(j\), fix such a choice of \(\sigma_{j}\). Now the number of such quadruples with \(\pi(m_{i})=a_{i}\) for \(i=1,2,3,4\) is, for a fixed choice of \(a_{1},\ldots,a_{4}\) satisfying \[a_{1}+a_{2}=a_{3}+a_{4}+\sigma_{j}, \tag{7.13}\] the number of additive quadruples in \(X_{a_{1}}\times X_{a_{2}}\times X_{a_{3}}\times X_{a_{4}}\), which is bounded above by \(|X_{a_{1}}||X_{a_{2}}||X_{a_{3}}|\leqslant 2^{-3j}M^{3}\) since three elements of an additive quadruple determine the fourth. 
It follows that the number of \((a_{1},a_{2},a_{3},a_{4})\in A_{j}^{4}\) satisfying (7.13) is \(\geqslant 2^{-10}b^{-1}2^{3j}\eta_{j}^{4}\). By (7.11), this is \(\geqslant 2^{-13}b^{-1}\eta_{j}|A_{j}|^{3}\). Now if \(S_{1},S_{2},S_{3},S_{4}\) are additive sets then \(E(S_{1},S_{2},S_{3},S_{4})\), the number of solutions to \(s_{1}+s_{2}=s_{3}+s_{4}\) with \(s_{i}\in S_{i}\), is bounded by \(\prod_{i=1}^{4}E(S_{i})^{1/4}\), where \(E(S_{i})\) is the number of additive quadruples in \(S_{i}\). This is essentially the Gowers-Cauchy-Schwarz inequality for the \(U^{2}\)-norm; it may be proven by two applications of Cauchy-Schwarz or alternatively from Holder's inequality on the Fourier side. Applying this with \(S_{1}=S_{2}=S_{3}=A_{j}\) and \(S_{4}=A_{j}+\sigma_{j}\), and noting that \(E(A_{j}+\sigma_{j})=E(A_{j})\), we see that \(E(A_{j})\geqslant 2^{-13}b^{-1}\eta_{j}|A_{j}|^{3}\). By Proposition 2.4, we have \(|A_{j}|\leqslant 2^{4r+13}b^{4r+1}\eta_{j}^{-1}\). Comparing with (7.11) gives \(\eta_{j}\leqslant 2^{2r+7-j/2}b^{2r+1/2}\). Take \(J\) to be the least integer such that \(2^{J/2}\geqslant 2^{2r+9}b^{2r+1/2}\eta^{-1}\); then \(\sum_{j\geqslant J}\eta_{j}<\eta\), and so by (7.10), some \(\mathscr{M}_{j}\), \(j\leqslant J-1\), is nonempty. In particular, by (7.9) there is some value of \(a\) such that \(|X_{a}|\geqslant 2^{-J}M\geqslant 2^{-4r-20}b^{-4r-1}\eta^{2}M\). Fix this value of \(a\) and set \(\mathscr{M}^{\prime}:=X_{a}\). Thus, to summarise, \[|\mathscr{M}^{\prime}|\geqslant 2^{-4r-20}b^{-4r-1}\eta^{2}M \tag{7.14}\] and if \(m\in\mathscr{M}^{\prime}\) then \(\pi(m)=a\). Note that the condition on \(M\) in the statement of Proposition 7.3 implies (comfortably) that \(|\mathscr{M}^{\prime}|\geqslant 2\). Note that, by (7.6) and the definition (7.7) of \(\pi\), we have that if \(m\in\mathscr{M}^{\prime}\) then \[\|\theta m-b^{1-n}a\|\leqslant\frac{3}{4}b^{1-n}. \tag{7.15}\] Pick some \(m_{0}\in\mathscr{M}^{\prime}\), and set \(\mathscr{M}^{\prime\prime}:=\mathscr{M}^{\prime}-m_{0}\subset[-2M,2M]\). By the triangle inequality and (7.15), we have \[\|\theta m\|\leqslant\frac{3}{2}b^{1-n}<2bN^{-1} \tag{7.16}\] for all \(m\in\mathscr{M}^{\prime\prime}\). (Recall that, by definition, \(N=b^{n}\).) Replacing \(\mathscr{M}^{\prime\prime}\) by \(-\mathscr{M}^{\prime\prime}\) if necessary (and since \(|\mathscr{M}^{\prime\prime}|\geqslant 2\)) it follows that there are at least \(2^{-4r-22}b^{-4r-1}\eta^{2}M\) integers \(m\in\{1,\dots,2M\}\) satisfying (7.16). Now we apply Lemma C.1, taking \(L=2M\), \(\delta_{1}=2bN^{-1}\) and \(\delta_{2}=2^{-4r-22}b^{-4r-1}\eta^{2}\) in that result. The conditions of the lemma hold under the assumptions that \(M,N\geqslant b^{20r}\eta^{-2}\) (using here the fact that \(b\geqslant 3\)). The conclusion implies that there is some positive integer \(q\leqslant b^{20r}\eta^{-2}\) such that \(\|\theta q\|\leqslant b^{20r}\eta^{-2}N^{-1}M^{-1}\), which is what we wanted to prove. Finally, we are in a position to prove Proposition 2.1, whose statement we recall now. **Proposition 2.1**.: _Suppose that \(k\geqslant 2\) and \(b\geqslant 3\). Set \(B:=b^{6k^{2}}\). Suppose that \(\delta\in(0,1)\) and that \(k\mid n\). Suppose that \(|\widehat{\mu_{n}}(\theta)|\geqslant\delta\), and that \(N\geqslant(2/\delta)^{B}\), where \(N:=b^{n}\). Then there is a positive integer \(q\leqslant(2/\delta)^{B}\) such that \(\|\theta q\|\leqslant(2/\delta)^{B}N^{-k}\)._ Proof.: First apply Proposition 6.1. 
The conclusion is that for at least \((\delta/2)^{C}N^{k-1}\) values of \(m\), \(|m|\leqslant CN^{k-1}\), we have \(\tilde{\mathrm{w}}_{n}(\theta m)\leqslant C\log(2/\delta)\) where \(C:=b^{7k^{2}/2}\). By Lemma 7.2, for these values of \(m\) we have \(\mathrm{w}_{n}(\theta m)\leqslant 16b^{2}C\log(2/\delta)\). (For the definitions of \(\tilde{\mathrm{w}}_{n}\) and \(\mathrm{w}_{n}\), see Definitions 5.1 and 7.1 respectively.)

Now apply Proposition 7.3 with \(\eta:=(\delta/2)^{C}C^{-1}\), \(r=\lceil 16b^{2}C\log(2/\delta)\rceil\), \(N=b^{n}\) (as usual) and \(M:=CN^{k-1}\). To process the resulting conclusion, note that \(b^{20r}\eta^{-2}\leqslant(2/\delta)^{C^{\prime}}\), with \(C^{\prime}:=2C+320b^{2}C\log b+\log_{2}(C^{2}b^{20})<321b^{2}C\log b<b^{8}C<B\). Proposition 2.1 then follows.

## Appendix A Box norm inequalities

In this appendix we prove an inequality, Proposition A.2, which is in a sense well-known: indeed, it underpins the theory of hypergraph regularity [10] and is also very closely related to generalised von Neumann theorems and the notion of Cauchy-Schwarz complexity in additive combinatorics. We begin by recalling the basic definition of Gowers box norms as given in [12, Appendix B].

**Definition A.1**.: Let \((X_{i})_{i\in I}\) be a finite collection of finite non-empty sets, and denote by \(X_{I}:=\prod_{i\in I}X_{i}\) the Cartesian product of these sets. Let \(f:X_{I}\to\mathbf{C}\) be a function. Then we define the (Gowers-) box norm \(\|f\|_{\Box(X_{I})}\) to be the unique nonnegative real number such that
\[\|f\|_{\Box(X_{I})}^{2^{|I|}}=\mathbf{E}_{x_{I}^{(0)},x_{I}^{(1)}\in X_{I}}\prod_{\omega_{I}\in\{0,1\}^{I}}\mathcal{C}^{|\omega_{I}|}f(x_{I}^{(\omega_{I})}).\]
Here, \(\mathcal{C}\) denotes the complex conjugation operator, and for any \(x_{I}^{(0)}=(x_{i}^{(0)})_{i\in I}\) and \(x_{I}^{(1)}=(x_{i}^{(1)})_{i\in I}\) in \(X_{I}\) and \(\omega_{I}=(\omega_{i})_{i\in I}\in\{0,1\}^{I}\) we write \(x_{I}^{(\omega_{I})}=(x_{i}^{(\omega_{i})})_{i\in I}\) and \(|\omega_{I}|:=\sum_{i\in I}|\omega_{i}|\).

It is not obvious that \(\|f\|_{\Box(X_{I})}\) is well-defined, but this is so: see [12, Appendix B] for a proof. Another non-obvious fact, whose proof may also be found in [12, Appendix B], is that \(\|f\|_{\Box(X_{I})}\) is a norm for \(|I|\geqslant 2\). When \(|I|=1\), say \(I=\{1\}\), we have \(\|f\|_{\Box(X_{I})}=|\sum_{x_{1}\in X_{1}}f(x_{1})|\), which is only a seminorm. To clarify notation, in the case \(I=\{1,2\}\) we have
\[\|f\|_{\Box(X_{\{1,2\}})}^{4}=\mathbf{E}_{x_{1}^{(0)},x_{1}^{(1)}\in X_{1}}\mathbf{E}_{x_{2}^{(0)},x_{2}^{(1)}\in X_{2}}f(x_{1}^{(0)},x_{2}^{(0)})\overline{f(x_{1}^{(0)},x_{2}^{(1)})f(x_{1}^{(1)},x_{2}^{(0)})}f(x_{1}^{(1)},x_{2}^{(1)}).\]

Here is the inequality we will need. The proof is simply several applications of Cauchy-Schwarz, the main difficulty being one of notation.

**Proposition A.2**.: _Suppose that notation is as in Definition A.1. Suppose additionally that, for each \(i\in I\), we have a \(1\)-bounded function \(\Psi_{i}:X_{I}\to\mathbf{C}\) which does not depend on the value of \(x_{i}\), that is to say \(\Psi_{i}(x_{I})=\Psi_{i}(x^{\prime}_{I})\) if \(x_{j}=x^{\prime}_{j}\) for all \(j\neq i\). Let \(f:X_{I}\to\mathbf{C}\) be a function. Then we have_
\[\big{|}\mathbf{E}_{x_{I}\in X_{I}}\big{(}\prod_{i\in I}\Psi_{i}(x_{I})\big{)}f(x_{I})\big{|}\leqslant\|f\|_{\square(X_{I})}.\]

Proof.: We proceed by induction on \(|I|\), the result being a tautology when \(|I|=1\).
Suppose now that \(|I|\geqslant 2\), and that we have already established the result for smaller values of \(|I|\). Let \(\alpha\) be some element of \(I\), and write \(I^{\prime}:=I\setminus\{\alpha\}\). By Cauchy-Schwarz, the \(1\)-boundedness of \(\Psi_{\alpha}\), and the fact that \(\Psi_{\alpha}\) does not depend on \(x_{\alpha}\), we have \[\big{|}\mathbf{E}_{x_{I}\in X_{I}}\big{(}\prod_{i\in I}\Psi_{i}(x_{I})\big{)}f(x_{I})\big{|}^{2}\] \[=\big{|}\mathbf{E}_{x_{I^{\prime}}\in X_{I^{\prime}}}\Psi_{\alpha}(x_{I})\mathbf{E}_{x_{\alpha}\in X_{\alpha}}\big{(}\prod_{i\in I^{\prime}}\Psi_{i}(x_{I})\big{)}f(x_{I})\big{|}^{2}\] \[\leqslant\mathbf{E}_{x_{I^{\prime}}\in X_{I^{\prime}}}\big{|}\mathbf{E}_{x_{\alpha}\in X_{\alpha}}(\prod_{i\in I^{\prime}}\Psi_{i}(x_{I}))f(x_{I})\big{|}^{2}\] \[=\mathbf{E}_{x_{\alpha}^{(0)},x_{\alpha}^{(1)}\in X_{\alpha}}\mathbf{E}_{x_{I^{\prime}}\in X_{I^{\prime}}}\big{(}\prod_{i\in I^{\prime}}\Psi_{i}(x_{I^{\prime}},x_{\alpha}^{(0)})\overline{\Psi_{i}(x_{I^{\prime}},x_{\alpha}^{(1)})}\big{)}\times\] \[\times f(x_{I^{\prime}},x_{\alpha}^{(0)})\overline{f(x_{I^{\prime}},x_{\alpha}^{(1)})}.\] For fixed \(x_{\alpha}^{(0)},x_{\alpha}^{(1)}\) we may apply the induction hypothesis (with indexing set \(I^{\prime}\)) with \(1\)-bounded functions \[\tilde{\Psi}_{i}(x_{I^{\prime}}):=\Psi_{i}(x_{I^{\prime}},x_{\alpha}^{(0)})\overline{\Psi_{i}(x_{I^{\prime}},x_{\alpha}^{(1)})}\] and with \[\tilde{f}(x_{I^{\prime}})=f(x_{I^{\prime}},x_{\alpha}^{(0)})\overline{f(x_{I^{\prime}},x_{\alpha}^{(1)})},\] noting that \(\tilde{\Psi}_{i}\) does not depend on \(x_{i}\). This gives \[\big{|}\mathbf{E}_{x_{I}\in X_{I}}\big{(}\prod_{i\in I}\Psi_{i}(x_{I})\big{)}f(x_{I})\big{|}^{2}\leqslant\mathbf{E}_{x_{\alpha}^{(0)},x_{\alpha}^{(1)}\in X_{\alpha}}\|f(\cdot,x_{\alpha}^{(0)})\overline{f(\cdot,x_{\alpha}^{(1)})}\|_{\square(X_{I^{\prime}})}.\] By Holder's inequality, it follows that \[\big{|}\mathbf{E}_{x_{I}\in X_{I}}\big{(}\prod_{i\in I}\Psi_{i}(x_{I})\big{)}f(x_{I})\big{|}^{2^{|I|}}\leqslant\mathbf{E}_{x_{\alpha}^{(0)},x_{\alpha}^{(1)}\in X_{\alpha}}\|f(\cdot,x_{\alpha}^{(0)})\overline{f(\cdot,x_{\alpha}^{(1)})}\|_{\square(X_{I^{\prime}})}^{2^{|I|-1}}.\] However, the right-hand side is precisely \(\|f\|_{\square(X_{I})}^{2^{|I|}}\), and the inductive step is complete. ## Appendix B Sumsets of subsets of \(\{0,1\}^{n}\) In this appendix we provide some comments on Theorem 2.3, which seems to have a very complicated history. In the case \(r=2\) it is due to Woodall [21], and independently to Hajela and Seymour [13]. In the general case, Theorem 2.3 is a consequence of the following real-variable inequality, which was conjectured in [13]. **Proposition B.1**.: _Let \(r\geqslant 2\) be an integer. Suppose that \(1\geqslant x_{1}\geqslant x_{2}\geqslant\cdots\geqslant x_{r}\geqslant 0\). Then_ \[(x_{1}\cdots x_{r})^{\gamma}+(x_{1}\cdots x_{r-1}(1-x_{r}))^{\gamma}+\cdots+((1-x_{1})\cdots(1-x_{r}))^{\gamma}\geqslant 1,\] _where \(\gamma:=r^{-1}\log_{2}(r+1)\)._ The deduction of Theorem 2.3 from Proposition B.1 is a straightforward 'tensorisation' argument, but no details are given in either [6] or [13]. For the convenience of the reader we give the deduction below, claiming no originality whatsoever. Proposition B.1 (and hence Theorem 2.3) was established by Landau, Logan and Shepp [16], and 3 years later but seemingly independently (and in a more elementary fashion) by Brown, Keane, Moran and Pearce [6].
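Proposition B.1 is also easy to test numerically. The following short Python snippet is our own spot-check (not part of the original argument, and no substitute for the proofs in [16] or [6]): it evaluates the left-hand side on random non-increasing tuples for small \(r\) and reports the smallest value observed, which should never drop below \(1\).

```python
import math
import random

def staircase_sum(x):
    """Left-hand side of Proposition B.1 for a non-increasing tuple x in [0, 1]^r."""
    r = len(x)
    gamma = math.log2(r + 1) / r
    total = 0.0
    for j in range(r + 1):                     # j factors x_i, then factors (1 - x_i)
        prod = 1.0
        for i in range(r):
            prod *= x[i] if i < j else 1.0 - x[i]
        total += prod ** gamma
    return total

random.seed(0)
for r in range(2, 6):
    worst = min(
        staircase_sum(sorted((random.random() for _ in range(r)), reverse=True))
        for _ in range(20000)
    )
    print(r, round(worst, 4))                  # every reported minimum should be >= 1
```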
A discussion of the history of these and related problems is given by Brown [5] but this appears to overlook [16]. Finally, we note that a result which is weaker in the exponent than Theorem 2.3, but quite sufficient for the purpose of proving the qualitative form of Theorem 1.1, follows by an iterated application of a result of Gowers and Karam [11, Proposition 3.1]. This avoids the need for the delicate analytic inequality in Proposition B.1. Let us also note that the context in which Gowers and Karam use this result is in some ways analogous to ours, albeit in a very different setting. Proof of Theorem 2.3, assuming Proposition b.1.: As stated in [6], one may proceed in a manner 'parallel' to arguments in [7], specifically the proof of Lemma 2.6 there. We proceed by induction on \(n\). First we check the base case \(n=1\). Here, one may assume without loss of generality that \(A_{1}=\cdots=A_{s}=\{0,1\}\) and \(A_{s+1}=\cdots=A_{r}=\{1\}\) for some \(s\), \(0\leqslant s\leqslant r\). The density of \(A_{1}+\cdots+A_{r}\) in \(\{0,1,\ldots,r\}\) is then \((s+1)/(r+1)\), whilst \(\alpha_{1}=\cdots=\alpha_{s}=1\) and \(\alpha_{s+1}=\cdots=\alpha_{r}=1/2\). The inequality to be checked is thus \((s+1)/(r+1)\geqslant 2^{-(r-s)\gamma}\). However, taking \(x_{1}=\cdots=x_{s}=1/2\) and \(x_{s+1}=\cdots=x_{r}=0\) in Proposition B.1 yields \((s+1)2^{-s\gamma}\geqslant 1\). Since \(2^{r\gamma}=r+1\), the desired inequality follows. Now assume the result is true for \(n-1\). Let \(A_{i}^{0}\) be the elements of \(A_{i}\) with first coordinate zero, and \(A_{i}^{1}\) the elements of \(A_{i}\) with first coordinate \(1\). Suppose that \(|A_{i}^{0}|=x_{i}|A_{i}|\), and without loss of generality suppose that \(x_{1}\geqslant x_{2}\geqslant\cdots\geqslant x_{r}\). Then the sets \(A_{1}^{0}+\cdots+A_{j}^{0}+A_{j+1}^{1}+\cdots+A_{r}^{1}\), \(j=0,\ldots,r\) are disjoint, since the first coordinate of every element of this set is \(j\). It follows that \[|A_{1}+\cdots+A_{r}|\geqslant\sum_{j=0}^{r}|A_{1}^{0}+\cdots+A_{j}^{0}+A_{j+1 }^{1}+\cdots+A_{r}^{1}|.\] Note that \(A_{i}^{0}\) is a subset of a copy \(\{0,1\}^{n-1}\) of density \(2\alpha_{i}x_{i}\), and that \(A_{i}^{1}\) is a subset of (a translate of) \(\{0,1\}^{n-1}\) of density \(2\alpha_{i}(1-x_{i})\). By the inductive hypothesis, \[|A_{1}^{0}+\ldots +A_{j}^{0}+A_{j+1}^{1}+\cdots+A_{r}^{1}|\] \[\geqslant(2^{r}\alpha_{1}\cdots\alpha_{r}x_{1}\cdots x_{j}(1-x_{j +1})\cdots(1-x_{r}))^{\gamma}(r+1)^{n-1}\] \[=(r+1)^{n}(\alpha_{1}\cdots\alpha_{r})^{\gamma}\big{(}x_{1} \cdots x_{j}(1-x_{j+1})\cdots(1-x_{r})\big{)}^{\gamma}.\] Performing the sum over \(j\) and applying Proposition B.1, the result follows. ## Appendix C A diophantine lemma The following is a fairly standard type of lemma arising in applications of the circle method and is normally attributed to Vinogradov. We make no attempt to optimise the constants, contenting ourselves with a version sufficient for our purposes in the main paper. **Lemma C.1**.: _Suppose that \(\alpha\in\mathbf{R}\) and that \(L\geqslant 1\) is an integer. Suppose that \(\delta_{1},\delta_{2}\) are positive real numbers satisfying \(\delta_{2}\geqslant 32\delta_{1}\), and suppose that there are at least \(\delta_{2}L\) elements \(n\in\{1,\ldots,L\}\) for which \(\|\alpha n\|\leqslant\delta_{1}\). Suppose that \(L\geqslant 16/\delta_{2}\). 
Then there is some positive integer \(q\leqslant 16/\delta_{2}\) such that \(\|\alpha q\|\leqslant\delta_{1}\delta_{2}^{-1}L^{-1}\)._ Proof.: Write \(S\subseteq\{1,\ldots,L\}\) for the set of all \(n\) such that \(\|\alpha n\|\leqslant\delta_{1}\); thus \(|S|\geqslant\delta_{2}L\). By Dirichlet's lemma, there is a positive integer \(q\leqslant 4L\) and an \(a\) coprime to \(q\) such that \(|\alpha-a/q|\leqslant 1/4Lq\). Write \(\theta:=\alpha-a/q\); thus \[|\theta|\leqslant\frac{1}{4Lq}.\] (C.1) The remainder of the proof consists of "bootstrapping" this simple conclusion. First, we tighten the bound for \(q\), and then the bound for \(|\theta|\). Suppose that \(n\in S\). Then, by (C.1), we see that \[\big{\|}\frac{an}{q}\big{\|}\leqslant\delta_{1}+\frac{1}{4q}.\] (C.2) Now we bound the number of \(n\in\{1,\ldots,L\}\) satisfying (C.2) in a different way. Divide \(\{1,\ldots,L\}\) into \(\leqslant 1+\frac{L}{q}\) intervals of length \(q\). In each interval, \(\frac{an}{q}(\text{mod }1)\) ranges over each rational \((\text{mod }1)\) with denominator \(q\) precisely once. At most \(2q(\delta_{1}+\frac{1}{4q})+1<2(\delta_{1}q+2)\) of these rationals \(x\) satisfy \(\|x\|\leqslant\delta_{1}+\frac{1}{4q}\). Thus the total number of \(n\in\{1,\ldots,L\}\) satisfying (C.2) is bounded above by \(2\big{(}\frac{L}{q}+1\big{)}(\delta_{1}q+2)=2\delta_{1}L+2\delta_{1}q+\frac{4 L}{q}+4\). It follows that \[2\delta_{1}L+2\delta_{1}q+\frac{4L}{q}+4\geqslant\delta_{2}L.\] (C.3) Using \(\delta_{2}\geqslant 32\delta_{1}\), \(q\leqslant 4L\) and \(L\geqslant 16/\delta_{2}\), one may check that the first, second and fourth terms on the left are each at most \(\delta_{2}L/4\). Therefore (C.3) forces us to conclude that \(4L/q>\delta_{2}L/4\), and therefore \(q\leqslant 16/\delta_{2}\), which is a bound on \(q\) of the required strength. Now we obtain the claimed bound on \(\|\alpha q\|\). Note that, by the assumptions and the inequality on \(q\) just established, we have \(\delta_{1}\leqslant\delta_{2}/32\leqslant 1/2q\), and so if \(n\in S\) then, by (C.2), we have \(\|an/q\|<1/q\), which implies that \(q|n\). That is, all elements of \(S\) are divisible by \(q\). It follows from this and the definition of \(\theta\) that if \(n\in S\) then \(\|\theta n\|=\|\alpha n\|\leqslant\delta_{1}\). However, since (by (C.1)) we have \(|\theta|\leqslant 1/4Lq\), for \(n\in\{1,\ldots,L\}\) we have \(\|\theta n\|=|\theta n|\). Therefore \[|\theta n|\leqslant\delta_{1}\] (C.4) for all \(n\in S\). Finally, recall that \(S\) consists of multiples of \(q\) and that \(|S|\geqslant\delta_{2}L\); therefore there is some \(n\in S\) with \(|n|\geqslant\delta_{2}qL\). Using this \(n\), (C.4) implies that \(|\theta|\leqslant\delta_{1}/q\delta_{2}L\), and so finally \(\|\alpha q\|\leqslant|\theta q|\leqslant\delta_{1}/\delta_{2}L\). This concludes the proof.
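Lemma C.1 can be exercised numerically as well. The toy instance below is our own sanity check with convenience parameters (none of the numbers are taken from the paper): \(\alpha\) is a tiny perturbation of \(3/7\), so the multiples of \(7\) up to \(L\) supply the hypothesis, and a brute-force search confirms that a small \(q\) with \(\|\alpha q\|\leqslant\delta_{1}\delta_{2}^{-1}L^{-1}\) indeed exists.

```python
def dist_to_int(x):
    """Distance to the nearest integer, i.e. ||x||."""
    return abs(x - round(x))

# alpha is a tiny perturbation of 3/7, so multiples of 7 make ||alpha*n|| very small.
alpha, L, delta1 = 3 / 7 + 1e-10, 10**5, 1e-4
hits = sum(1 for n in range(1, L + 1) if dist_to_int(alpha * n) <= delta1)
delta2 = hits / L                              # density of good n (about 1/7 here)
assert delta2 >= 32 * delta1 and L >= 16 / delta2          # hypotheses of Lemma C.1

# Conclusion: some q <= 16/delta2 satisfies ||alpha*q|| <= delta1/(delta2*L).
bound = delta1 / (delta2 * L)
q = next(q for q in range(1, int(16 / delta2) + 1) if dist_to_int(alpha * q) <= bound)
print(q, dist_to_int(alpha * q), bound)        # finds q = 7 in this instance
```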
2303.17858
Linear Programming based Lower Bounds on Average Dwell-Time via Multiple Lyapunov Functions
With the objective of developing computational methods for stability analysis of switched systems, we consider the problem of finding the minimal lower bounds on average dwell-time that guarantee global asymptotic stability of the origin. Analytical results in the literature quantifying such lower bounds assume existence of multiple Lyapunov functions that satisfy some inequalities. For our purposes, we formulate an optimization problem that searches for the optimal value of the parameters in those inequalities and includes the computation of the associated Lyapunov functions. In its generality, the problem is nonconvex and difficult to solve numerically, so we fix some parameters which results in a linear program (LP). For linear vector fields described by Hurwitz matrices, we prove that such programs are feasible and the resulting solution provides a lower bound on the average dwell-time for exponential stability. Through some experiments, we compare our results with the bounds obtained from other methods in the literature and we report some improvements in the results obtained using our method.
Sigurdur Hafstein, Aneel Tanwani
2023-03-31T07:38:19Z
http://arxiv.org/abs/2303.17858v2
# Linear Programming based Lower Bounds on Average Dwell-Time via Multiple Lyapunov Functions ###### Abstract With the objective of developing computational methods for stability analysis of switched systems, we consider the problem of finding the minimal lower bounds on average dwell-time that guarantee global asymptotic stability of the origin. Analytical results in the literature quantifying such lower bounds assume existence of multiple Lyapunov functions that satisfy some inequalities. For our purposes, we formulate an optimization problem that searches for the optimal value of the parameters in those inequalities and includes the computation of the associated Lyapunov functions. In its generality, the problem is nonconvex and difficult to solve numerically, so we fix some parameters which results in a linear program (LP). For linear vector fields described by Hurwitz matrices, we prove that such programs are feasible and the resulting solution provides a lower bound on the average dwell-time for exponential stability. Through some experiments, we compare our results with the bounds obtained from other methods in the literature and we report some improvements in the results obtained using our method. Switched systems; continuous piecewise-affine Lyapunov functions; average dwell-time; linear programs. ## I Introduction Switched systems comprise a family of dynamical subsystems orchestrated by a switching signal that activates one of these subsystems at a given time. This abstract framework has been useful in modeling a class of hybrid systems with continuous and discrete dynamics. Another common source of switched systems is uncertainty quantification in continuous-time systems and the associated differential inclusions. Stability analysis of switched systems, therefore, has gathered a lot of attention in the literature. The references [12, 20] provide a comprehensive overview of the different approaches on this topic. When analyzing stability under arbitrary switching, existence of a common Lyapunov function is a necessary and sufficient condition for the asymptotic stability of an equilibrium of the switched system [5]. Thus, over the years, a lot of attention in the literature has been given to computing a common Lyapunov function for the switched system under different hypotheses. For some results in this direction, the reader may refer to [1] for discrete-time systems, and [18] for continuous-time systems. Particularly relevant to this paper is the technique based on the construction of continuous and piecewise affine (CPA) Lyapunov functions, which is reviewed in [9]. The papers [3, 10] present the adaptation of computing CPA Lyapunov functions in case of arbitrarily switching systems. However, such methods have not yet been used in the context of constrained, or dwell-time based, switched systems. For certain applications, existence of common Lyapunov function is a stringent requirement, and may not hold for the given system data. For that reason, when the individual subsystems are asymptotically stable and one can not compute a common Lyapunov function, it is natural to ask how we can guarantee stability for a certain class of switching signals (which is smaller than the set of switching signals with arbitrary switching). The works [17] and [11] studied the stability of switched systems by putting a bound on how fast the switches can occur. 
Depending on the system data, lower bounds were derived on the (average) dwell-time which ensures global asymptotic stability if the length of interval between two consecutive switches (on average) is greater than the derived lower bound. A tutorial like exposition of these concepts also appears in [12, Chapter 3]. Several works have followed up to extend this idea in several directions. Some generalizations have been addressed in the recent papers [14, 19] with nonlinearities in the system data. Computational methods with multiple Lyapunov functions for getting best possible lower bounds on the dwell-time have not received much attention in the literature. The references [4, 6, 13, 16] provide some algorithms for calculating lower bounds on the dwell-time in the linear case. Among these, the papers [13, 16] build on dwell-time bounds obtained from multiple Lyapunov functions, which is also the case for this article. The authors of [16] developed optimization-based methods for the automatic verification of dwell-time properties. On the other hand, [13] proposes some relaxations in the form of sequential convex programs to compute lower bounds on the average dwell-time. With similar motivation, this article studies computational methods for computing best possible lower bounds on the (average) dwell-time using linear programming (LP) methods. In fact, our approach uses techniques based on the construction of CPA Lyapunov functions, under the constraints that are normally imposed for dwell-time based stability conditions. For a given family of dynamical subsystems with asymptotically stable origin, the question of interest is to find the _smallest_ lower bound on the dwell-time, which ensures asymptotic stability of the switched system under the so-called compatibility constraints. Such questions can be formulated as an optimization problem and in its full generality, it is a nonconvex problem, even when dealing with linear subsystems and quadratic Lyapunov functions for individual subsystems. In this paper, we provide a new technique for solving the optimization problem that corresponds to the computation of a minimum average dwell-time that ensures stability. The intermediate step in getting this bound is to first compute the Lyapunov functions for individual subsystems satisfying certain inequalities. In our work, we search for these Lyapunov functions from the family of continuous piecewise affine functions, in contrast to the quadratic ones. This is done by discretizing the state space into simplices and solving for the values of the Lyapunov functions at the vertices of the simplices, using some inequality constraints. The resulting optimization problem is actually a linear program. The solution to this linear program provides us with a Lyapunov function for each subsystem and also a dwell-time bound. ## II Problem Setup We consider time-dependent switched dynamical systems described as \[\dot{\mathbf{x}}=\mathbf{f}_{\sigma}(\mathbf{x}) \tag{1}\] where, for some given index set \(\mathcal{P}\subset\mathbb{N}\), the function \(\sigma:[0,\infty)\to\mathcal{P}\) is piecewise constant and right-continuous, called the _switching signal_. The discontinuities of \(\sigma\), called the _switching times_, are assumed to be locally finite. The vector fields \(\mathbf{f}_{i}:\mathbb{R}^{n}\to\mathbb{R}^{n}\), for each \(i\in\mathcal{P}\), are assumed to be locally Lipschitz and with \(\mathbf{f}_{i}(\mathbf{0})=\mathbf{0}\). 
We say that a switching signal \(\sigma\) has an average dwell-time \(\tau_{a}>0\), if there exists \(N_{0}>0\), such that \[N_{\sigma}(t,s)\leq N_{0}+\frac{t-s}{\tau_{a}},\] where \(N_{\sigma}(t,s)\) denotes the number of switches over the interval \(]s,t[\). The set of all switching signals with average dwell-time \(\tau_{a}\) is denoted by \(\Sigma_{\tau_{a}}\). For the stability of the origin for such systems, let us recall the following result, which follows from [11, 12, Chapter 3], and [14, Theorem 1]: **Theorem 1**: _Suppose that there exist \(\mathcal{C}^{1}\) Lyapunov functions \(V_{i}:\mathbb{R}^{n}\to\mathbb{R}_{\geq 0}\), \(i\in\mathcal{P}\), satisfying the following:_ * _There exist_ \(\underline{\alpha},\overline{\alpha}\in\mathcal{K}_{\infty}\) _such that_ \[\underline{\alpha}(\|\mathbf{x}\|)\leq V_{i}(\mathbf{x})\leq\overline{\alpha}(\|\mathbf{x}\|),\quad\forall\mathbf{x}\in\mathbb{R}^{n},i\in\mathcal{P}.\] (2) * _There exists a Lipschitz function_ \(\rho\in\mathcal{K}\)_, such that, for every_ \(i\in\mathcal{P}\)_,_ \[\nabla V_{i}(\mathbf{x})\,\mathbf{f}_{i}(\mathbf{x})\leq-\rho(V_{i}(\mathbf{x})),\quad\forall\mathbf{x}\in\mathbb{R}^{n}.\] _Problem statement:_ With \(d=1\) and for fixed values of \(\underline{a}>0\), \(\overline{a}>0\), and \(\mu\geq 1\), find piecewise linear functions \(V_{i}\), for each \(i\in\mathcal{P}\), that satisfy (8) while maximizing \(\alpha\). The reason for fixing the constants \(\underline{a}>0\), \(\overline{a}>0\), and \(\mu\geq 1\) is that the foregoing problem then transforms into a linear program. We will provide the formulation of this linear program and discuss its feasibility in the next section. For the sake of clarity in this conference paper, we present our ideas for the linear vector fields but similar concepts can be extended to nonlinear systems. ### _Quadratic functions and matrix inequalities_ Before discussing the LP problem and CPA Lyapunov functions, let us first look at the inequalities (8) for the case \(d=2\) more carefully. In this case, we let \(V_{i}(\mathbf{x})=\mathbf{x}^{\top}P_{i}\mathbf{x}\), with symmetric and positive definite \(P_{i}\in\mathbb{R}^{n\times n}\). In particular, (8) takes the following form, where \(I\) is the identity matrix: \[\begin{cases}\underline{a}I\preceq P_{i}\preceq\overline{a}I,&i\in\mathcal{P},\\ A_{i}^{\top}P_{i}+P_{i}A_{i}\preceq-\alpha I,&i\in\mathcal{P},\\ P_{i}\preceq\mu P_{j},&i,j\in\mathcal{P}.\end{cases} \tag{10}\] In (10), if we fix \(\underline{a},\overline{a}>0\) and \(\mu\geq 1\), then the inequalities result in LMIs with unknowns \(P_{i}\), \(i\in\mathcal{P}\), which can be solved to maximize \(\alpha\). Practically one can select \(\underline{a}\) small, e.g. \(\underline{a}=10^{-5}\) as we do in our examples, and then a large enough \(\overline{a}>0\) will ensure that \(\tau_{a}=\overline{a}\ln(\mu)/\alpha\) can be made minimal for the given \(\mu\geq 1\) by maximizing \(\alpha\).
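For prototyping purposes, the LMIs in (10) with \(\underline{a}\), \(\overline{a}\) and \(\mu\) fixed can be assembled with any off-the-shelf semidefinite programming tool. The sketch below is our own illustration in Python with CVXPY (assuming an SDP solver such as SCS is available; the experiments reported in Section IV use YALMIP/sdpt3 instead), with \(\underline{a}=10^{-5}\), \(\overline{a}=10\) as in our examples and the matrices of Example 1 as data.

```python
import numpy as np
import cvxpy as cp

def max_alpha_lmi(A_list, a_lo=1e-5, a_hi=10.0, mu=2.0):
    """Maximize alpha subject to the LMIs (10) for fixed a_lo, a_hi and mu.

    Returns (alpha, [P_i]) on success and (None, None) if the LMIs are infeasible.
    """
    n = A_list[0].shape[0]
    I = np.eye(n)
    P = [cp.Variable((n, n), symmetric=True) for _ in A_list]
    alpha = cp.Variable()
    cons = []
    for Pi, Ai in zip(P, A_list):
        cons += [Pi >> a_lo * I,                      # a_lo*I <= P_i
                 Pi << a_hi * I,                      # P_i <= a_hi*I
                 Ai.T @ Pi + Pi @ Ai << -alpha * I]   # A_i'P_i + P_i A_i <= -alpha*I
    cons += [Pi << mu * Pj for Pi in P for Pj in P]   # P_i <= mu*P_j
    prob = cp.Problem(cp.Maximize(alpha), cons)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate") or alpha.value is None:
        return None, None
    return float(alpha.value), [Pi.value for Pi in P]

# Data of Example 1 (Section IV); the dwell-time bound is tau_a = a_hi*ln(mu)/alpha.
A1 = np.array([[-0.1, -1.0], [2.0, -0.1]])
A2 = np.array([[-0.1, -2.0], [1.0, -0.1]])
alpha, _ = max_alpha_lmi([A1, A2], mu=2.0)
if alpha is not None and alpha > 0:
    print("LMI-based bound on the average dwell-time:", 10.0 * np.log(2.0) / alpha)
```

The returned \(\alpha\) gives the bound \(\tau_{a}=\overline{a}\ln(\mu)/\alpha\); scanning over a grid of \(\mu\geq 1\) then yields the best LMI-based lower bound.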
Indeed, assume the conditions (10) are fulfilled for some positive constants \(\underline{a}=\underline{a}^{*},\overline{a}=\overline{a}^{*},\alpha=\alpha^{*}\) and \(V_{i}(\mathbf{x})=\mathbf{x}^{\top}P_{i}^{*}\mathbf{x}\), \(P_{i}^{*}\in\mathbb{R}^{n\times n}\) symmetric, \(i\in\mathcal{P}\). Then we have the lower bound \(\tau_{a}^{*}=\overline{a}^{*}\ln(\mu)/\alpha^{*}\) on the average dwell-time. Now fix new constants \(\underline{a},\overline{a}>0\) such that \(\overline{a}/\underline{a}\geq\overline{a}^{*}/\underline{a}^{*}\) and set \(\alpha=(\overline{a}/\overline{a}^{*})\alpha^{*}\) and \(P_{i}=(\overline{a}/\overline{a}^{*})P_{i}^{*}\). Then, for \(V_{i}(\mathbf{x}):=\mathbf{x}^{\top}P_{i}\mathbf{x}=(\overline{a}/\overline{a}^{*})\mathbf{x}^{\top}P_{i}^{*}\mathbf{x}\), we have \(V_{i}(\mathbf{x})\leq\mu V_{j}(\mathbf{x})\), and \[\underline{a}\|\mathbf{x}\|^{2}\leq\underline{a}^{*}\,\frac{\overline{a}}{\overline{a}^{*}}\|\mathbf{x}\|^{2}\leq V_{i}(\mathbf{x})=(\overline{a}/\overline{a}^{*})\mathbf{x}^{\top}P_{i}^{*}\mathbf{x}\leq\overline{a}\|\mathbf{x}\|^{2}\] for \(i,j\in\mathcal{P}\). That is, the constraints (10) are fulfilled with these values of \(\underline{a},\overline{a},\alpha,P_{i}\) and further, for the lower bound on the average dwell-time we have \[\tau_{a}=\frac{\overline{a}\ln(\mu)}{\alpha}=\frac{\overline{a}\ln(\mu)}{(\overline{a}/\overline{a}^{*})\alpha^{*}}=\frac{\overline{a}^{*}\ln(\mu)}{\alpha^{*}}=\tau_{a}^{*}.\] In other words, for a fixed \(\mu\geq 1\), if there is a solution to (10) for some choice of \(\underline{a}^{*}\), \(\overline{a}^{*}\), \(\alpha^{*}\) which yields the bound \(\tau_{a}^{*}\) for the average dwell-time, then by choosing \(\overline{a}/\underline{a}\) large enough, one can always find another solution to (10) which gives at least as good a bound on the average dwell-time as \(\tau_{a}^{*}\). Thus, given \(\mu\geq 1\), maximizing \(\alpha>0\) under the constraints (10) for a fixed \(\underline{a},\overline{a}>0\) delivers as good lower bounds on the average dwell-time \(\tau_{a}\) as minimizing \(\tau_{a}=\overline{a}\ln(\mu)/\alpha\), where both \(\overline{a}\) and \(\alpha\) are variables, given that \(\overline{a}/\underline{a}\) is large enough. Note that in the setting of the LMI problem (10) we are searching for quadratic Lyapunov functions for the individual subsystems \(\dot{\mathbf{x}}=A_{i}\mathbf{x}\), \(i\in\mathcal{P}\), which can be conservative. In the next section we consider a similar approach for modeling the conditions (8) using piecewise linear Lyapunov functions and an LP formulation to compute them. Due to the foregoing observation, when solving (8) using an LP formulation, we will fix \(\underline{a}\) and \(\overline{a}\) with \(\overline{a}/\underline{a}\) large enough and maximize \(\alpha\). ## III Continuous Piecewise Affine Lyapunov Functions and Linear Programming Formulation Our LP approach to compute piecewise linear Lyapunov functions fulfilling the conditions (8) is based on the so-called CPA method to compute Lyapunov functions, see e.g. [15, 3, 8, 10]. Its description is somewhat more involved than the LMI approach, because it is based on partitioning a neighborhood of the origin into simplices and the underlying idea behind constructing this collection of simplices, called _triangulation_, is described in the next subsection. ### _The Triangulation \(\mathcal{T}_{K}^{\mathbf{F}}\)_ Roughly speaking, a triangulation is the subdivision of a subset of \(\mathbb{R}^{n}\) into simplices.
A suitable concrete triangulation for our aim of parameterizing Lyapunov functions for the individual subsystems is the triangular-fan of the triangulation in [7], where its efficient implementation is also discussed. In its definition, we use the functions \(\mathbf{R}^{\mathcal{J}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), defined for every \(\mathcal{J}\subset\{1,2,\ldots,n\}\) by \[\mathbf{R}^{\mathcal{J}}(\mathbf{x}):=\sum_{i=1}^{n}(-1)^{\mathbf{z}_{ \mathcal{J}}(i)}x_{i}\mathbf{e}_{i},\] where \(\mathbf{e}_{i}\) is the standard \(i\)th unit vector in \(\mathbb{R}^{n}\), and \(\mathbf{1}_{\mathcal{J}}(i)=1\) if \(i\in\mathcal{J}\) and \(0\) otherwise. Thus, \(\mathbf{R}^{\mathcal{J}}(\mathbf{x})\) is the vector \(\mathbf{x}\), except for a minus has been put in front of the coordinate \(x_{i}\) whenever \(i\in\mathcal{J}\). We first define the triangulation \(\mathcal{T}^{\text{std}}\) and use it to construct the intermediate triangulation \(\mathcal{T}_{K}\), which in turn is used to define our desired triangulation \(\mathcal{T}_{K}^{\mathbf{F}}\). The standard triangulation \(\mathcal{T}^{\text{std}}\) consists of the simplices \[\mathfrak{C}_{\underline{a}\mathcal{J}\rho}:=\operatorname{co}\left\{\mathbf{x}_ {0}^{\mathbf{z}\mathcal{J}\rho},\mathbf{x}_{1}^{\mathbf{z}\mathcal{J}\rho}, \ldots,\mathbf{x}_{n}^{\mathbf{z}\mathcal{J}\rho}\right\},\] where \(\operatorname{co}\) denotes the convex hull, and \[\mathbf{x}_{j}^{\mathbf{z}\mathcal{J}\rho}:=\mathbf{R}^{\mathcal{J}}\left( \mathbf{z}+\sum_{i=1}^{j}\mathbf{e}_{\rho(i)}\right), \tag{11}\] for all \(\mathbf{z}\in\mathbb{N}_{0}^{n}=\{0,1,\ldots\}^{n}\), all \(\mathcal{J}\subset\{1,2,\ldots,n\}\), all \(\rho\in S_{n}\), and \(j=0,1,\ldots,n\). Here, \(S_{n}\) denotes the set of all permutations of \(\{1,2,\ldots,n\}\). Now fix a \(K\in\mathbb{N}_{+}=\{1,2,\ldots\}\) and define the hypercube \(\mathcal{H}_{K}:=[-K,K]^{n}\). Consider the simplices \(\mathfrak{S}_{\mathbf{z}\mathcal{J}\rho}\subset\mathcal{H}_{K}\) in \(\mathcal{T}^{\text{std}}\), that intersect the boundary of \(\mathcal{H}_{K}\). We are only interested in those intersections that are \((n-1)\)-simplices, i.e. we take every simplex with vertices \(\mathbf{x}_{j}:=\mathbf{R}^{\mathcal{J}}\left(\mathbf{z}+\sum_{i=1}^{j}\mathbf{ e}_{\rho(i)}\right)\), \(j\in\{0,1,\ldots,n\}\), where exactly one vertex \(\mathbf{x}_{j^{*}}\) satisfies \(\|\mathbf{x}_{j^{*}}\|_{\infty}<K\) and the other \(n\) of the \(n+1\) vertices satisfy \(\|\mathbf{x}_{j}\|_{\infty}=K\), i.e. for \(j\in\{0,1,\ldots,n\}\setminus\{j^{*}\}\). Then we replace the vertex \(\mathbf{x}_{j^{*}}\) by \(\mathbf{0}\); it is not difficult to see that \(j^{*}\) is necessarily equal to \(0\). The collection of such vertices triangulates \(\mathcal{H}_{K}\) and this new triangulation of \(\mathcal{H}_{K}\) is our desired triangulation \(\mathcal{T}_{K}\). It has been shown [2] that it is often advantageous in the CPA method to map the vertices of the triangulation by the mapping \(\mathbf{F}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\), \(\mathbf{F}(\mathbf{0})=\mathbf{0}\) and \[\mathbf{F}(\mathbf{x}):=\frac{\|\mathbf{x}\|}{\|\mathbf{x}\|_{\infty}}\mathbf{ x},\quad\text{if }\mathbf{x}\neq\mathbf{0}. \tag{12}\] Note that \(\mathbf{F}\) maps the hypercubes \(\{\mathbf{x}\in\mathbb{R}^{n}\colon\|\mathbf{x}\|_{\infty}=r\}\) to the spheres \(\{\mathbf{x}\in\mathbb{R}^{n}\colon\|\mathbf{x}\|=r\}\). 
Finally, we define the triangulation \(\mathcal{T}_{K}^{\mathbf{F}}\) that will be used in the LP problem to parameterize CPA Lyapunov functions. Let \(\mathcal{T}_{K}^{\mathbf{F}}\) be the triangulation consisting of the simplices \[\mathfrak{S}_{\nu}:=\mathrm{co}\{\mathbf{0},\mathbf{F}(\mathbf{x}_{1}^{\nu}),\mathbf{F}(\mathbf{x}_{2}^{\nu}),\ldots,\mathbf{F}(\mathbf{x}_{n}^{\nu})\},\] where \[\mathrm{co}\left\{\mathbf{0},\mathbf{x}_{1}^{\mathbf{z}\mathcal{J}_{P}}, \mathbf{x}_{2}^{\mathbf{z}\mathcal{J}_{P}},\ldots,\mathbf{x}_{n}^{\mathbf{z} \mathcal{J}_{P}}\right\}\in\mathcal{T}_{K}.\] The subset of \(\mathbb{R}^{n}\) subdivided into simplices by the triangulation \(\mathcal{T}_{K}^{\mathbf{F}}\) is denoted by \[\mathcal{D}_{\mathcal{T}_{K}^{\mathbf{F}}}:=\bigcup_{\mathfrak{S}_{\nu}\in \mathcal{T}_{K}^{\mathbf{F}}}\mathfrak{S}_{\nu}.\] Figure 1 depicts two exemplary triangulations of the type \(\mathcal{T}_{K}^{\mathbf{F}}\) for two and three dimension with \(K=5\). ### _LP Problem_ We are now ready to state our LP problem to parameterize piecewise linear Lyapunov functions for the switched system fulfilling the conditions in (8). For formulating this LP, and showing that its feasibility provides us the lower bound on average dwell-time, we focus our attention on the switched linear systems: \[\dot{\mathbf{x}}=A_{\sigma}\mathbf{x} \tag{13}\] with \(\sigma:[0,\infty)\to\mathcal{P}\) being the switching signal, and \(A_{i}\in\mathbb{R}^{n\times n}\), for each \(i\in\mathcal{P}\). We use three constants \(\underline{a},\overline{a}>0\) and \(\mu\geq 1\) in the LP problem. We want the ratio \(\overline{a}/\underline{a}\) to be large, as discussed in the last section, and then we want to try out different \(\mu\geq 1\) to obtain as good a lower bound on the average dwell-time as possible. The variables of the LP problem are \(\alpha\in\mathbb{R}\) and \(V_{\mathbf{x},i}\in\mathbb{R}\) for every vertex \(\mathbf{x}\) of a simplex in \(\mathcal{T}_{K}^{\mathbf{F}}\) and every \(i\in\mathcal{P}\). The objective of the LP problem is to maximize \(\alpha\). The constraints of the LP problem are: **(C1)**: The first set of constraints is that, for every \(i\in\mathcal{P}\), we set \(V_{\mathbf{0},i}=0\), and for every vertex \(\mathbf{x}\) of a simplex in \(\mathcal{T}_{K}^{\mathbf{F}}\) and for every \(i\in\mathcal{P}\): \[\underline{a}\|\mathbf{x}\|\leq V_{\mathbf{x},i}\leq\overline{a}\|\mathbf{x}\| \tag{14}\] **(C2)**: The second set of constraints is more involved. For every simplex \(\mathfrak{S}_{\nu}:=\mathrm{co}\{\mathbf{0},\mathbf{x}_{1}^{\nu},\mathbf{x}_{2 }^{\nu}\ldots,\mathbf{x}_{n}^{\nu}\}\in\mathcal{T}_{K}^{\mathbf{F}}\), we define the matrix \(X_{\nu}=\left(\mathbf{x}_{1}^{\nu}\ \mathbf{x}_{2}^{\nu}\ \cdots\mathbf{x}_{n}^{\nu}\right)\), i.e. \(\mathbf{x}_{k}^{\nu}\) is the \(k\)th column of \(X_{\nu}\). Further, we define for every \(i\in\mathcal{P}\), the vector of variables \(\mathbf{v}_{\nu,i}=\left(V_{\mathbf{x}_{1}^{\nu},i}\ V_{\mathbf{x}_{2}^{\nu}, i}\ \cdots\ V_{\mathbf{x}_{n}^{\nu},i}\right)^{\top}\). The constraints are: for every simplex \(\mathfrak{S}_{\nu}\in\mathcal{T}_{K}^{\mathbf{F}}\), for all \(j=1,\ldots,n\) and all \(i\in\mathcal{P}\): \[\mathbf{v}_{\nu,i}^{\top}X_{\nu}^{-1}A_{i}\mathbf{x}_{j}^{\nu}\leq-\alpha\| \mathbf{x}_{j}^{\nu}\|. \tag{15}\] Note that these constraints are automatically fulfilled for \(j=0\), i.e. \(\mathbf{x}_{j}^{\nu}=\mathbf{0}\). 
**(C3)**: The third set of constraints is: for every vertex \(\mathbf{x}\) of a simplex in \(\mathcal{T}_{K}^{\mathbf{F}}\) and for every \(i,j\in\mathcal{P}\), \[V_{\mathbf{x},j}\leq\mu V_{\mathbf{x},i}. \tag{16}\] Fig. 1: Our proposed triangulation in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\). ### _Solution to LP delivers lower bounds on dwell-time_ In the previous subsection, we formulated an LP which basically specified the constraints in (8) at the vertices of the simplices contained in the triangulation. Here we prove that the feasibility of such a program provides us with piecewise linear Lyapunov functions for the individual subsystems over the entire state space that additionally fulfill (8c), thereby providing a lower bound on the average dwell-time. Toward this end, assume that the LP problem in Section III-B has a solution with \(\alpha>0\). We then define the piecewise linear function \(V_{i}\colon\mathcal{D}_{\mathcal{T}_{K}^{\mathbf{F}}}\to\mathbb{R}\), for every \(i\in\mathcal{P}\), in the following way: * For every \(\mathbf{x}\in\mathcal{D}_{\mathcal{T}_{K}^{\mathbf{F}}}\) there exists a simplex \(\mathfrak{S}_{\nu}=\mathrm{co}\{\mathbf{0},\mathbf{x}_{1}^{\nu},\mathbf{x}_{2}^{\nu},\ldots,\mathbf{x}_{n}^{\nu}\}\in\mathcal{T}_{K}^{\mathbf{F}}\) such that \(\mathbf{x}\in\mathfrak{S}_{\nu}\) and there exists a unique \(\lambda\in[0,1]^{n}\), \(\sum_{j=1}^{n}\lambda_{j}\leq 1\), such that \(\mathbf{x}=\sum_{j=1}^{n}\lambda_{j}\mathbf{x}_{j}^{\nu}\). We define \[V_{i}(\mathbf{x})=\sum_{j=1}^{n}\lambda_{j}V_{\mathbf{x}_{j}^{\nu},i}.\] It is not difficult to see that the functions \(V_{i}\), \(i\in\mathcal{P}\), are continuous functions that are linear on each simplex \(\mathfrak{S}_{\nu}\in\mathcal{T}_{K}^{\mathbf{F}}\), in particular each \(V_{i}\) has the constant gradient \(\nabla V_{\nu,i}:=\mathbf{v}_{\nu,i}^{\top}X_{\nu}^{-1}\) (row vector) on the interior of \(\mathfrak{S}_{\nu}\), see e.g. [8, Rem. 9]. Hence, for any \(\mathbf{x}\in\mathfrak{S}_{\nu}\in\mathcal{T}_{K}^{\mathbf{F}}\), \(\mathbf{x}=\sum_{j=1}^{n}\lambda_{j}\mathbf{x}_{j}^{\nu}\), we have for any \(i\in\mathcal{P}\) by **(C1)** and **(C2)** that \[\nabla V_{\nu,i}\!\cdot\!A_{i}\mathbf{x} =\mathbf{v}_{\nu,i}^{\top}X_{\nu}^{-1}A_{i}\sum_{j=1}^{n}\lambda_{j}\mathbf{x}_{j}^{\nu}=\sum_{j=1}^{n}\lambda_{j}\mathbf{v}_{\nu,i}^{\top}X_{\nu}^{-1}A_{i}\mathbf{x}_{j}^{\nu}\] \[\leq-\alpha\sum_{j=1}^{n}\lambda_{j}\|\mathbf{x}_{j}^{\nu}\|\leq-\frac{\alpha}{\overline{a}}\sum_{j=1}^{n}\lambda_{j}V_{i}(\mathbf{x}_{j}^{\nu})\] \[=-\frac{\alpha}{\overline{a}}V_{i}(\sum_{j=1}^{n}\lambda_{j}\mathbf{x}_{j}^{\nu})=-\frac{\alpha}{\overline{a}}V_{i}(\mathbf{x}). \tag{17}\] Now, for any \(\mathbf{x}\) in the interior of \(\mathcal{D}_{\mathcal{T}_{K}^{\mathbf{F}}}\), we have that, for any \(i\in\mathcal{P}\), there exists a simplex \(\mathfrak{S}_{\nu}\in\mathcal{T}_{K}^{\mathbf{F}}\) and an \(h>0\), such that \[\mathbf{x}+[0,h]A_{i}\mathbf{x}\subset\mathfrak{S}_{\nu},\] where \(\nu\) can depend on both \(\mathbf{x}\) and \(i\). Because \(V_{i}\) is linear on \(\mathfrak{S}_{\nu}\) we have \[\limsup_{h\to 0+}\frac{V_{i}(\mathbf{x}+hA_{i}\mathbf{x})-V_{i}(\mathbf{x})}{h}=\nabla V_{\nu,i}\!\cdot\!A_{i}\mathbf{x}\leq-\frac{\alpha}{\overline{a}}V_{i}(\mathbf{x})\] and since this holds true for all \(i\in\mathcal{P}\), we have \(D^{+}V_{i}(\mathbf{x},A_{i}\mathbf{x})\leq-\frac{\alpha}{\overline{a}}V_{i}(\mathbf{x})\).
Since \[V_{i}(\mathbf{x})=\sum_{j=1}^{n}\lambda_{j}V_{\mathbf{x}_{j}^{\nu},i}\geq \underline{a}\sum_{j=1}^{n}\lambda_{j}\|\mathbf{x}_{j}^{\nu}\|\geq\underline{a }\|\mathbf{x}\|\] by the constraints **(C1)**, and for all \(i,k\in\mathcal{P}\), \[V_{i}(\mathbf{x})=\sum_{j=0}^{n}\lambda_{j}V_{\mathbf{x}_{j}^{\nu},i}\leq \sum_{j=0}^{n}\lambda_{j}\mu V_{\mathbf{x}_{j}^{\nu},k}=\mu V_{k}(\mathbf{x}),\] it is clear that the \(V_{i}\) fulfill the constraints (8) in the interior of \(\mathcal{D}_{\mathcal{T}_{K}^{\mathbf{F}}}\). Just define \[\overline{a}^{\prime}:=\max_{\|\mathbf{x}\|=1}\max_{i=1,2,\ldots,N}V_{i}( \mathbf{x})\] and we have \[\underline{a}\|\mathbf{x}\|\leq V_{i}(\mathbf{x})\leq\overline{a} ^{\prime}\|\mathbf{x}\|,\] \[D^{+}V_{i}(\mathbf{x},A_{i}\mathbf{x})\leq-\frac{\alpha}{ \overline{a}}V_{i}(\mathbf{x})\] for all \(\mathbf{x}\) in the interior of \(\mathcal{D}_{\mathcal{T}_{K}^{\mathbf{F}}}\). Note that we proved (17) directly from the constraints and did not go through constraints (8) with \(\overline{a}=\overline{a}^{\prime}\), which would lead to a worse estimate on \(\tau_{a}\). By extending \(V_{i}\) to \(\mathbb{R}^{n}\) in the obvious way, i.e. for every \(\mathbf{x}\in\mathbb{R}^{n}\) there exists a \(\mathfrak{S}_{\nu}\) and unique numbers \(\lambda_{j}\geq 0\) such that \(\mathbf{x}=\sum_{j=1}^{n}\lambda_{j}\mathbf{x}_{j}^{\nu}\) (a cone defined by the vertices of \(\mathfrak{S}_{\nu}\)) and we set \(V_{i}(\mathbf{x})=\sum_{j=1}^{n}\lambda_{j}V_{\mathbf{x}_{j}^{\nu},i}\), we see that the \(V_{i}\) fulfill the constraints (8) on \(\mathbb{R}^{n}\), for each \(i\in\mathcal{P}\). ## IV Simulations We will now test our LP algorithm for several examples from the literature. The class of systems for these simulations is (13). The set \(\mathcal{P}\) and the matrices \(\{A_{i}\}_{i\in\mathcal{P}}\) will be specified differently for the examples considered here. In the examples, we always fix \(\underline{a}=10^{-5}\) and \(\overline{a}=10\). We used YALMIP / sdpt3 and Gurobi to solve the LMI and LP problems, respectively. ### _Example 1: Dwell-time stable but not under arbitrary switching_ Consider the switched system (13) with \[A_{1}=\begin{pmatrix}-0.1&-1\\ 2&-0.1\end{pmatrix},\quad A_{2}=\begin{pmatrix}-0.1&-2\\ 1&-0.1\end{pmatrix}\] This example is taken from [12, p. 26]. It is stable for a certain minimum value for the average dwell-time, but it is not stable under arbitrary switching. Solving (10) using the LMI approach, the best \(\tau_{a}\) obtained is \(5.1929\) with \(\mu=2\). Using the LP approach, the best \(\tau_{a}\) obtained is \(4.5283\) with \(\mu=1.4\) and using \(K=500\) in the triangulation. Using triangulations with fewer triangles delivers a higher lower bound \(\tau_{a}\) for the average dwell time; \(K=50\) gives \(\tau_{a}=5.16493\) with \(\mu=1.45\), \(K=100\) gives \(\tau_{a}=4.79315\) with \(\mu=1.4\), and \(K=200\) gives \(\tau_{a}=4.62407\) with \(\mu=1.4\). In all cases they are better than the bounds from the LMI approach. ### _Example 2: Stable under arbitrary switching but no common quadratic Lyapunov function_ Take \(\mathcal{P}=\{1,2\}\), with the matrices \[A_{1}=\begin{pmatrix}-1&-1\\ 1&-1\end{pmatrix},\quad A_{2}=\begin{pmatrix}-1&-10\\ 0.1&-1\end{pmatrix}\] This example is taken from [5]. It is stable under arbitrary switching but the matrices \(A_{1}\) and \(A_{2}\) do not share a common quadratic Lyapunov function. 
This example helps us see the limitation of using LMIs because searching for quadratic certificates in this case is not the best choice. Solving (10) using LMIs, the minimum value for \(\tau_{a}\) is \(\tau_{a}=17.0394\) with \(\mu=3.1\). Whereas, with our LP approach, \(K=20\) gives a solution with \(\mu=1\). Hence, the origin is stable under arbitrary switching (\(\tau_{a}=0\)), and we get a common piecewise linear Lyapunov function although no quadratic Lyapunov function exists. Fig. 2: Plots for \(\mu\) versus \(\tau_{a}\) in Example 1. ### _Example 3: Exponentially stable system under arbitrary switching with 5 modes_ We consider the switched system (13) with \(\mathcal{P}=\{1,2,3,4,5\}\), where \[A_{1}=\left(\begin{smallmatrix}-5&1&2\\ 0&-5&1\\ 0&1&-2\end{smallmatrix}\right),\;\;A_{2}=\left(\begin{smallmatrix}-1&3&1\\ 0&-2&0\\ 0&1&-1\end{smallmatrix}\right),\] \[A_{3}=\left(\begin{smallmatrix}0&0&3\\ -2&1&-3\\ -1&0&-2\end{smallmatrix}\right),\;\;A_{4}=\left(\begin{smallmatrix}-1&-0&-3\\ -2&-4\\ 1&0&-1\end{smallmatrix}\right),\] \[A_{5}=\left(\begin{smallmatrix}-1&0&0\\ -1&-1&-1\\ -3&0&-4\end{smallmatrix}\right)\] This example was also considered in [13] with a graph that determines the switching sequence, and in this particular case, we have the star topology. With \(K=6\) we get a solution with \(\mu=1\), i.e. the origin is exponentially stable under arbitrary switching (which is then arbitrary without the graph too). With the LMI approach in (10), the minimum value for the average dwell-time is \(\tau_{a}=4.6870\) with \(\mu=2.7\). ## V Conclusions We considered a linear programming (LP) based computational algorithm for computing lower bounds on average dwell-times that ensure asymptotic stability of switched systems. The algorithm is essentially based on gridding the state space into simplices and computing values for the corresponding Lyapunov functions at the vertices of these simplices. By choosing appropriate values of the parameters in the inequalities defining the linear program, the solution provides us with lower bounds on the average dwell-time necessary to assure stability. From the simulations, we see in several case studies that LP based bounds are better than the ones based on linear matrix inequalities (LMIs). This is not really surprising since LMIs restrict the Lyapunov functions to be quadratic, whereas the proposed LPs can potentially approximate a broader class of Lyapunov function templates. Computing dwell-time via inequalities in (8) introduces some conservatism because we optimize over a single parameter \(\alpha\) while keeping \(\mu\) fixed. The same conservatism is observed in going from (6) to (10). As a topic of ongoing investigation, we are working out an algorithm to optimize \(\alpha\) and \(\mu\) simultaneously, directly using an LP version of (6).
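To complement the description of the LP in Section III-B, the following self-contained Python sketch is our own simplified illustration (it is not the implementation used for the experiments, which relied on Gurobi): it assembles the constraints (C1)-(C3) and maximizes \(\alpha\) with scipy. To keep it short, the triangulation \(\mathcal{T}_{K}^{\mathbf{F}}\) is replaced by a plain fan of triangles \(\mathrm{co}\{\mathbf{0},p_{j},p_{j+1}\}\) over equally spaced points on the unit circle (so \(n=2\) and every vertex has norm one), and the data are the matrices of Example 1.

```python
import numpy as np
from scipy.optimize import linprog

def adt_bound_lp(A_list, n_verts=64, a_lo=1e-5, a_hi=10.0, mu=1.4):
    """Maximize alpha in the LP (C1)-(C3) over a fan of triangles co{0, p_j, p_{j+1}},
    with p_j equally spaced on the unit circle, and return a_hi*ln(mu)/alpha."""
    m = len(A_list)
    thetas = 2 * np.pi * np.arange(n_verts) / n_verts
    pts = np.column_stack([np.cos(thetas), np.sin(thetas)])     # all vertices have norm 1

    def idx(i, j):                                               # variable index of V_{p_j, i}
        return 1 + i * n_verts + j                               # index 0 is alpha

    n_var = 1 + m * n_verts
    A_ub, b_ub = [], []

    # (C2): v_{nu,i}^T X_nu^{-1} A_i x_k^nu + alpha*||x_k^nu|| <= 0, cf. (15)
    for s in range(n_verts):
        j1, j2 = s, (s + 1) % n_verts
        X = np.column_stack([pts[j1], pts[j2]])
        for i, Ai in enumerate(A_list):
            for k in (j1, j2):
                g = np.linalg.solve(X, Ai @ pts[k])
                row = np.zeros(n_var)
                row[idx(i, j1)] = g[0]
                row[idx(i, j2)] = g[1]
                row[0] = np.linalg.norm(pts[k])
                A_ub.append(row)
                b_ub.append(0.0)

    # (C3): V_{x, i1} - mu*V_{x, i2} <= 0 at every vertex, cf. (16)
    for j in range(n_verts):
        for i1 in range(m):
            for i2 in range(m):
                row = np.zeros(n_var)
                row[idx(i1, j)] += 1.0
                row[idx(i2, j)] -= mu
                A_ub.append(row)
                b_ub.append(0.0)

    # (C1) reduces to simple bounds since ||p_j|| = 1; alpha itself is free, cf. (14)
    bounds = [(None, None)] + [(a_lo, a_hi)] * (m * n_verts)
    c = np.zeros(n_var)
    c[0] = -1.0                                                  # maximize alpha
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
    alpha = -res.fun if res.success else None
    return a_hi * np.log(mu) / alpha if alpha is not None and alpha > 0 else None

A1 = np.array([[-0.1, -1.0], [2.0, -0.1]])     # matrices of Example 1
A2 = np.array([[-0.1, -2.0], [1.0, -0.1]])
print(adt_bound_lp([A1, A2], mu=1.4))          # a rough analogue of Example 1's LP bound
```

Since all vertices lie on a single shell, this is only a rough analogue of the triangulations used above; increasing `n_verts` plays a role loosely comparable to increasing \(K\).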
2309.12748
The Reversed Zeckendorf Game
Zeckendorf proved that every natural number $n$ can be expressed uniquely as a sum of non-consecutive Fibonacci numbers, called its Zeckendorf decomposition. Baird-Smith, Epstein, Flint, and Miller created the Zeckendorf game, a two-player game played on partitions of $n$ into Fibonacci numbers which always terminates at a Zeckendorf decomposition, and proved that Player 2 has a winning strategy for $n\geq 3$. Since their proof was non-constructive, other authors have studied the game to find a constructive winning strategy, and lacking success there turned to related problems. For example, Cheigh, Moura, Jeong, Duke, Milgrim, Miller, and Ngamlamai studied minimum and maximum game lengths and randomly played games. We explore a new direction and introduce the reversed Zeckendorf game, which starts at the ending state of the Zeckendorf game and flips all the moves, so the reversed game ends with all pieces in the first bin. We show that Player 1 has a winning strategy for $n = F_{i+1} + F_{i-2}$ and solve various modified games.
Zoë X. Batterman, Aditya Jambhale, Steven J. Miller, Akash L. Narayanan, Kishan Sharma, Andrew K. Yang, Chris Yao
2023-09-22T09:51:13Z
http://arxiv.org/abs/2309.12748v2
# The reversed Zeckendorf game ###### Abstract. Zeckendorf [Ze] proved that every natural number \(n\) can be expressed uniquely as a sum of non-consecutive Fibonacci numbers, called its _Zeckendorf decomposition_. Baird-Smith, Epstein, Flint, and Miller [BEFM1] created the _Zeckendorf game_, a two-player game played on partitions of \(n\) into Fibonacci numbers which always terminates at a Zeckendorf decomposition, and proved that Player 2 has a winning strategy for \(n\geq 3\). Since their proof was non-constructive, other authors have studied the game to find a constructive winning strategy, and lacking success there turned to related problems. For example, Cheigh, Moura, Jeong, Duke, Milgrim, Miller, and Ngamlamai [CMJDMMN] studied minimum and maximum game lengths and randomly played games. We explore a new direction and introduce the _reversed Zeckendorf game_, which starts at the ending state of the Zeckendorf game and flips all the moves, so the reversed game ends with all pieces in the first bin. We show that Player 1 has a winning strategy for \(n=F_{i+1}+F_{i-2}\) and solve various modified games. Key words and phrases:Zeckendorf game, Reversed games, Fibonacci numbers 2020 Mathematics Subject Classification: 11B39, 65Q30, 05C57, 91A05, 91A46 This work was completed during the 2023 SMALL REU program at Williams College. It was supported in part by NSF Grants DMS1561945 and DMS1659037, Williams College, and Churchill College, Cambridge. ## 1. Introduction and Main Results ### History The Fibonacci numbers, which for uniqueness results in decompositions requires us to define them by \(F_{1}=1\), \(F_{2}=2\), and \(F_{n+1}=F_{n}+F_{n-1}\), is a sequence with many interesting properties which have been widely studied. With this choice of indexing, Zeckendorf [11] proved that every natural number \(n\) has a unique _Zeckendorf decomposition_, which expresses \(n\) as the sum of distinct, non-adjacent Fibonacci numbers. Note the decomposition would no longer be unique if we considered there to be two ones or a zero in the Fibonacci sequence. An example of such a Zeckendorf decomposition is \[2024\ =\ 1597+377+34+13+3. \tag{1.1}\] Building upon this, Baird-Smith, Epstein, Flint and Miller in [3] and [4] created the _Zeckendorf game_. The Zeckendorf game, which we will refer to as the _forwards game_, starts with a natural number \(n\). A game state consists of a partition of \(n\) into Fibonacci numbers, constituents of which we call _chips_. We say the collection of \(F_{i}\)'s in this partition is the \(i^{\text{th}}\) bin, with its cardinality \(h_{i}\) called the _height_ of the bin. There are two types of moves. 1. _Combine_: If \(h_{i}>0\) and \(h_{i-1}>0\), then the move is \[F_{i-1}+F_{i} \ \mapsto\ F_{i+1},\] (1.2) \[2F_{1} \ \mapsto\ F_{2}.\] In other words, we remove one chip from each of the \(i^{\text{th}}\) and \((i-1)^{\text{th}}\) bins and add a chip to the \((i+1)^{\text{th}}\) bin. 2. _Split_: If \(h_{i}\geq 2\) with \(i>2\), then we have the move \[2F_{i} \ \mapsto\ F_{i-2}+F_{i+1},\] (1.3) \[2F_{2} \ \mapsto\ F_{3}+F_{1}.\] Note that each of these moves keeps the total sum of the values of the chips constant at \(n\). The forwards game starts with \(n\) ones and proceeds until a player can't make a move, in which case the player who can't move loses. This makes the Zeckendorf game a normal, impartial, combinatorial game.1 Footnote 1: The terms normal and impartial are defined in §2. 
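To make the definitions above concrete, the short Python sketch below (our own illustration; it is not the authors' code) computes Zeckendorf decompositions greedily in the indexing \(F_{1}=1\), \(F_{2}=2\), checks the decomposition (1.1), and determines by brute force which player has a forced win in the forwards game, confirming the Player 2 wins for small \(n\geq 3\).

```python
from functools import lru_cache

def zeckendorf(n):
    """Greedy Zeckendorf decomposition, with the indexing F_1 = 1, F_2 = 2, ..."""
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

assert zeckendorf(2024) == [1597, 377, 34, 13, 3]        # the decomposition in (1.1)

def moves(state):
    """States reachable in one move of the forwards game; state[k] = height of bin k+1."""
    h = list(state) + [0, 0]                             # head-room for higher bins
    nxt = []
    for i in range(2, len(h) - 1):                       # combine: F_{i-1} + F_i -> F_{i+1}
        if h[i - 2] > 0 and h[i - 1] > 0:
            g = h[:]
            g[i - 2] -= 1
            g[i - 1] -= 1
            g[i] += 1
            nxt.append(g)
    if h[0] >= 2:                                        # combine: 2 F_1 -> F_2
        g = h[:]
        g[0] -= 2
        g[1] += 1
        nxt.append(g)
    for i in range(3, len(h) - 1):                       # split: 2 F_i -> F_{i-2} + F_{i+1}
        if h[i - 1] >= 2:
            g = h[:]
            g[i - 1] -= 2
            g[i - 3] += 1
            g[i] += 1
            nxt.append(g)
    if h[1] >= 2:                                        # split: 2 F_2 -> F_1 + F_3
        g = h[:]
        g[1] -= 2
        g[0] += 1
        g[2] += 1
        nxt.append(g)
    canon = []
    for g in nxt:
        while g and g[-1] == 0:                          # drop empty top bins
            g.pop()
        canon.append(tuple(g))
    return canon

@lru_cache(maxsize=None)
def first_player_wins(state):
    """Normal play: the player who cannot move loses."""
    return any(not first_player_wins(s) for s in moves(state))

# Player 2 wins the forwards game for every n >= 3 (Baird-Smith, Epstein, Flint, Miller).
assert all(not first_player_wins((n,)) for n in range(3, 12))
```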
The authors in [4] showed the game for \(n\) always terminates at \(n\)'s Zeckendorf decomposition, and they showed, nonconstructively, Player 2 wins for all \(n\geq 3\). After this result was published, many authors have investigated the properties of this game and various modifications of it such as possible game lengths, random games, etc; see [1], [2], [3], [5], [6], [7], [8], [9], [10], [11], [12], and [13]. We introduce another version of this game, the _reversed Zeckendorf game_, where the players play with all the moves reversed and the starting and ending positions exchanged; for the full definition, see SS3. This game demonstrates a more complex winning structure than the forwards game.2 Footnote 2: Here, “winning structure” refers to when each player has a forced win. ### Main Results First, in SS3, we analyze the reversed Zeckendorf game, look at its winning structure, and make further conjectures. As in the literature, we denote the golden ratio \(\frac{1+\sqrt{5}}{2}\) by \(\phi\). Moveover, for brevity in proofs, we will say that a player "has a win" (respectively "has a loss") by which we mean "has a forced winning strategy" (respectively "the opponent has a force winning strategy"). We state our main results. **Theorem 1.1**.: _Player 1 has a winning strategy for the reversed Zeckendorf game when_ \[n=F_{i+1}+F_{i-2}. \tag{1.4}\] We have both a nonconstructive and constructive proof for this theorem. The nonconstructive proof involves a strategy-stealing argument, and the constructive proof provides an explicit strategy. **Conjecture 1.2**.: _For the reversed Zeckendorf game, Player 2 has a winning strategy for infinitely many \(n\)._ **Conjecture 1.3**.: _In the limit, the percent of the time Player 1 has a winning strategy for the reversed Zeckendorf game is \(\varphi^{-1}\approx.618\)._ Continuing in SS3.1, we transfer results from [CMJDMMN] to get results about randomly played reversed Zeckendorf games as well as bounds on how long the game can last. These results are summarized here. **Theorem 1.4**.: _Let \(Z(n)\) be the number of terms in the Zeckendorf decomposition of \(n\)._ _(i) The shortest possible reversed Zeckendorf game is \(n-Z(n)\)._ _(ii) An upper bound on the longest possible reversed Zeckendorf game is_ \[\lfloor\phi^{2}n-Z_{I}(n)-2Z(n)+\phi-1\rfloor, \tag{1.5}\] _where \(Z_{I}(n)\) is the sum across the indices of the Fibonacci numbers in the Zeckendorf decomposition._ **Theorem 1.5**.: _For any integer \(Z\geq 1\) and \(z\in\{0,1,\ldots,Z-1\}\), we have_ \[\lim_{N\to\infty}\mu_{N}(\text{game length equals $z$ mod $Z$})\ =\ \lim_{N\to\infty}\mathbb{P}_{N}(\text{game length equals $z$ mod $Z$})\ =\ \frac{1}{Z}, \tag{1.6}\] _where \(\mu_{N}\) and \(\mathbb{P}_{N}\) are two different probability measures, defined in SS3.1, being placed on the space of all possible games starting at the Zeckendorf decomposition of \(N\)._ In SS4, we analyze winning strategies for different starting positions of the reversed Zeckendorf game. That is, we assume the starting position is not at the Zeckendorf decomposition but at some other partition of \(n\) into Fibonacci numbers. Specifically, we completely solve the game when the starting position only involves ones, twos, and threes. 
**Theorem 1.6**.: _For any game starting with \(a\) ones, \(b\) twos, and \(c\) threes, we have the following forced wins._ \begin{tabular}{|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & & _Player having forced win_ \\ \hline _Even_ & _Even_ & _Even_ & & _Player 2_ \\ \hline _Odd_ & _Odd_ & _Odd_ & & _Player 1_ \\ \hline _Even_ & _Odd_ & _Even_ & & _Player 1_ \\ \hline _Odd_ & _Even_ & _Odd_ & & _Player 1_ \\ \hline _Odd_ & _Even_ & _Even_ & \(a>c\) & _Player 2_ \\ \hline _Odd_ & _Even_ & _Even_ & \(a<c\) & _Player 1_ \\ \hline _Even_ & _Even_ & _Odd_ & \(a>c\) & _Player 1_ \\ \hline _Even_ & _Even_ & _Odd_ & \(a<c\) & _Player 2_ \\ \hline _Even_ & _Odd_ & _Odd_ & & _Player 1_ \\ \hline _Odd_ & _Odd_ & _Even_ & & _Player 1_ \\ \hline \end{tabular} Finally, in SS5, we conclude with a further modification of this game, which we call the _build up game_, and we completely solve it. The proof involves Nim-like strategies--strategies that involve forcing the opponent along game states associated to a fixed residue class modulo some integer. ## 2. Reversed Games Intuitively, two players playing a reversed game should look, if time is running backwards, like the forwards game. As such, we want a move in the reversed game from a state \(A\) to a state \(B\) to be legal if and only if moving from \(B\) to \(A\) is legal in the forwards game. To formalize this, we add the assumption that the game is _normal and impartial_. **Definition 2.1**.: We say a game is _normal and impartial_ when 1. the allowable moves depend only on the position and not on player order, and 2. the last player to move wins. We can consider a normal, impartial game as a game played on a directed graph. Players take turns moving a game piece across the graph until a terminal node is reached (e.g., a node where there are no moves). **Definition 2.2** (**Reversed Game**).: Let \(G\) be the associated directed graph of a normal, impartial game. Suppose that the graph has a unique loss node and unique starting node. Then the _reverse_ of that game is obtained by reversing all the edges in \(G\) and playing from the loss node to the starting node. A player loses if they run out of moves. **Remark 2.3**.: Under this definition, the reverse of a game is also normal and impartial. **Example 2.4**.: Consider the reversed game for standard Nim, where players take turns removing \(1\), \(2\), or \(3\) from an natural number \(n\equiv 1\pmod{4}\), never going below \(1\), with the last player to move winning. Note standard Nim is a normal, impartial game with starting node \(n\) and ending node \(1\). The reversed game would be played by starting at \(1\) and adding either \(1,2\), or \(3\) (never going above \(n\)), with the first person to make the number \(n\) winning. This is transparently the same as Nim, with the same winning strategy for Player \(2\). Thus, this game does not demonstrate any new behavior when reversed: The winning structure is still the same as Player \(2\) always wins with the same strategy. In general, reversed games do not exhibit the same winning structure or winning strategies. As we will see, reversed Chomp has a similar winning structure but different winning strategy, and the reversed Zeckendorf game has an entirely different winning structure. ### Reversed Chomp As a useful example of a reversed game, we solve the reversed Chomp game. First, we introduce the game of _Chomp_. Chomp is a normal, impartial game played on a rectangular board with \(N\) rows and \(M\) columns. 
Players take turns choosing a square on the board and "eating" it, removing that square and all the squares both above and to the right of it. The player forced to eat the "poisoned square" in the bottom left corner loses the game. We consider the game played with two players and ignore the trivial game \(N=M=1\). We can consider this game as played on a directed graph with a starting node representing the full board and an ending node representing the poisoned square. It can be shown by a "strategy-stealing argument" that for all \(N,M\), Chomp is a win for Player 1. However, constructing an explicit winning strategy proves difficult. Consider _reversed Chomp_, which starts from the poisoned square and ends with the full board. Without loss of generality, assume \(N\geq M\). When \(M=1\), Player 1 wins in one move. We provide a constructive proof that \(M>1\) is always a win for Player 2 by giving the following winning strategy. 1. **Case \(M=2\)**: Player 1 has three options for their first move: complete the bottom row, complete the leftmost column, or partially complete the leftmost column. The first two choices allow Player 2 to win in one move. Suppose Player 1 partially completed the leftmost column to a height of \(1<h<N\) squares. Then Player 2 can respond by filling in the other column to a height of \(h-1\). The game has now been reduced to another game with 2 columns but with fewer rows than we started with (see figure 1). Player 1 is presented with the same three choices. Player 2 can continue responding in the above manner until we reach the game with 2 rows and 2 columns. From this position, regardless of Player 1's choice, Player 2 wins in one move. Figure 1. \(M=2\). Gray indicates move by Player 1, black indicates move by Player 2. 2. **Case \(M>2\)**: Player 1 has four options for their first move: complete the bottom row, complete the leftmost column, partially complete the bottom row, or partially complete the leftmost column. The first two choices allow Player 2 to win in one move. If Player 1 partially completes the bottom row to a length of \(1<l<M\) squares, then Player 2 can respond by filling in the leftmost \(l-1\) columns (see figure 2). This reduces to a game with the same number of rows and fewer columns than we started with. Similarly, if Player 1 partially completes the leftmost column to a height of \(1<h<N\) squares, then Player 2 can respond by filling in the bottom \(h-1\) rows which reduces to a game with the same number of columns but fewer rows (see Figure 3). Either way, Player 1 is presented with the same four choices. Player 2 can continue responding in this manner until the game is reduced to a state with either only 2 rows or 2 columns. By the previous case, this is a win for Player 2. Figure 2. \(M>2\), Player 1 partially completes bottom row. Figure 3. \(M>2\), Player 1 partially completes leftmost column. **Remark 2.5**.: Along with the winning strategy, the winning structure for reversed Chomp is also different than normal Chomp. Player 1 wins reversed Chomp for all \(N\) when \(M=1\). This is an infinite number of games but not a positive proportion. In forward Chomp, Player 2 only wins a finite number of games, namely just the trivial game. ## 3. The Reversed Zeckendorf Game Following our definition of the reversed game in Definition 2, we give an explicit construction for the _reversed Zeckendorf game_. The terminology is the same as in the forwards game (with the starting and ending positions flipped, so we always start at the Zeckendorf decomposition of \(n\)) except we rename each move. 1.
_Split_: We remove one chip from the \((i+1)^{\rm th}\) bin and place one chip each in the \(i^{\rm th}\) and \((i-1)^{\rm th}\) bins. \[F_{i+1} \mapsto\ F_{i-1}+F_{i},\] (3.1) \[F_{2} \mapsto\ 2F_{1}.\] 2. _Combine_: For \(i>2\), we have \[F_{i-2}+F_{i+1} \mapsto\ 2F_{i},\] (3.2) \[F_{3}+F_{1} \mapsto\ 2F_{2}.\] **Theorem 1.1**.: _Player 1 has a winning strategy for the reversed Zeckendorf game when_ \[n=F_{i+1}+F_{i-2}. \tag{1.4}\] We present two proofs, one constructive and the other nonconstructive. Proof.: (Nonconstructive) By way of contradiction, suppose Player 2 has a winning strategy. If Player 1 chose to combine for their first move, Player 2 has a forced win starting at the state \(2F_{i}\). There is only one move in this position, which means Player 2 has a forced win with Player 1 starting at the state \(F_{i}+F_{i-1}+F_{i-2}\). However, Player 1 could choose to instead split the \((i+1)^{\text{th}}\) bin, which makes it Player 2's turn at the state \(F_{i}+F_{i-1}+F_{i-2}\). Now, Player 1 can steal the strategy from Player 2 to have a forced win in the starting state, a contradiction. **Remark 3.1**.: The constructive proof relies on Lemma 4.1. As a corollary of this lemma, we have a constructive proof for why the game state \(2F_{i}\) is a win for whoever goes second. Before that, Player 1 starts with combining to bring the game into this situation. The next natural question is whether or not the same can be said of Player 2. **Conjecture 1.2**.: _For the reversed Zeckendorf game, Player 2 has a winning strategy for infinitely many \(n\)._ To investigate this conjecture, we directly computed which player had a forced winning strategy for \(n\leq 129\) (see Appendix A.1). The code for computing which player has a forced winning strategy is listed in Appendix B and has an algorithmic complexity of \(O(\exp(\sqrt{n}))\). For \(n=129\), the program took close to 2 hours. Figure 4 shows the proportion of games won by Player 2. We plot \(n\) versus the percent of games won by Player 2 in games 2 through \(n\). For the first \(n\leq 129\), the proportion of Player 1 wins is \(80/129\approx.620\). This proportion appears to stabilize computationally. Combining these observations, we are led to the following conjecture. **Conjecture 1.3**.: _In the limit, the percent of the time Player 1 has a winning strategy for the reversed Zeckendorf game is \(\varphi^{-1}\approx.618\)._ This is a natural conjecture due to the connection between the Fibonacci numbers and the golden ratio. Work on determining the winner is provided in SS4. ### Other Facts We gather various results about the reversed Zeckendorf game which follow from already-known facts about the forwards game. Since we have merely reversed the arrows in the forward game tree to produce the reversed game, many properties about the forwards game extend immediately as corollaries. One property that extends is the lengths of games. From [1] for the lower bound and [12] for the upper bound, we have the following.
**Theorem 1.4**.: _Let \(Z(n)\) be the number of terms in the Zeckendorf decomposition of \(n\)._ _(i) The shortest possible reversed Zeckendorf game has length \(n-Z(n)\)._ _(ii) An upper bound on the length of the longest possible reversed Zeckendorf game is_ \[\lfloor\phi^{2}n-Z_{I}(n)-2Z(n)+\phi-1\rfloor, \tag{1.5}\] _where \(Z_{I}(n)\) is the sum of the indices of the Fibonacci numbers appearing in the Zeckendorf decomposition of \(n\)._

Theorems 1.8 and 1.9 from the paper [CMJDMMN] also transfer. Finally, we have results on randomly played games, with one subtlety. That paper places two different probability measures on the space of all possible games with a fixed \(n\): one, denoted \(\mu_{n}\), assigns the same probability to each game, while the other, denoted \(\mathbb{P}_{n}\), assigns to each game the probability of it being played when each player picks a move uniformly at random from the available moves on every turn. The first measure \(\mu_{n}\) assigns the same probability to each game as to its corresponding flipped game in the reversed version, so statements about this measure translate immediately as corollaries. The measure \(\mathbb{P}_{n}\), however, requires more work than simply deducing the result for the reversed game from the original game: modifications and checks are needed throughout Section 4 of their paper up to Lemma 4.5. In the end, the main result still holds.

**Theorem 1.5**.: _For any integer \(Z\geq 1\) and \(z\in\{0,1,\ldots,Z-1\}\), we have_ \[\lim_{N\to\infty}\mu_{N}(\text{game length equals $z$ mod $Z$})\ =\ \lim_{N\to\infty}\mathbb{P}_{N}(\text{game length equals $z$ mod $Z$})\ =\ \frac{1}{Z}, \tag{1.6}\] _where \(\mu_{N}\) and \(\mathbb{P}_{N}\) are the two probability measures, defined in §3.1, placed on the space of all possible games starting at the Zeckendorf decomposition of \(N\)._

Figure 4. Percent of games \(\leq n\) won by Player 2

This theorem holds in a more general setting than the two-player game. The game can be modified to \(Z\) players, who take turns in sequence, with the last player to move winning. The theorem can then be interpreted as saying that in the \(Z\)-player game, where each player chooses their move uniformly at random from their possible choices, any given player wins approximately a fraction \(1/Z\) of the time. For the specific case of two players and an even \(n\), we have the following.

**Theorem 3.2**.: _For the Reversed Zeckendorf Game played on an even \(n\),_ \[\mathbb{P}_{n}(\text{game length equals $1$ mod $2$})\ =\ \frac{1}{2}. \tag{3.3}\]

Proof.: Equivalently, we show that each player, playing randomly on each turn, has a \(1/2\) probability of winning. At some point, every game reaches a state with \(h_{3}=1\) and \(h_{i}=0\) for all \(i>3\), and we may fast-forward to the move that removes this last three. At that point we have \[n\ =\ h_{1}+2h_{2}+3. \tag{3.4}\] Since \(n\) is even, \(h_{1}\) must be odd and hence \(h_{1}>0\), so on this final turn with \(h_{3}=1\) the player to move has \(2\) choices: (i) combine a one and a three, or (ii) split the three. Once there are no threes (i.e., \(h_{3}=0\)), the game has only one option, namely splitting twos until there are no moves left, so the winner is determined by the parity of the second bin. Choice (i) leaves bin \(2\) with height \(h_{2}+2\), while choice (ii) leaves it with height \(h_{2}+1\). These two parities differ, so there is an equal chance the player on this turn wins or loses, and nothing that came before or after matters.
We conclude the probability either player wins is \(1/2\), as desired. ## 4. Other Starting Positions A natural modification of the reversed Zeckendorf game is to alter the starting position. Instead of starting at the Zeckendorf decomposition, we instead specify a starting position and start the game from there. In fact, such problems are often tractable as the starting and ending positions are well-understood. Contrast this with applying the same idea to the forwards game: In this case, we would still be headed towards the Zeckendorf decomposition, which can vary as \(n\) changes. **Lemma 4.1**.: _If the starting position of the reversed Zeckendorf game has all bins of even height, then Player 2 has a winning strategy._ Proof.: The proof uses a "copycat" strategy that is, if Player 1 does a move, Player 2 plays the exact same move. We must show the following: 1. if Player 1 makes a move, then Player 2 can also make that move, and 2. after Player 2 moves, all the bins are of even height. If these are both the case, then Player 1 will have to move on a position where there are only ones and twos. At this point, there will be an even number of twos, so the game from here on is deterministic; both players will split two until there are no twos left. Since the parity of the twos is even, this means it will be Player 1's turn when there are no twos left, so they lose. Consider Player 1 making a split move so that \[h_{i} \mapsto h_{i}-1,\] \[h_{i-1} \mapsto h_{i-1}+1,\text{ and}\] \[h_{i-2} \mapsto h_{i-2}+1.\] Since \(h_{i}\) is even, then \(h_{i}-1\) is odd. Thus, \(h_{i}-1\geq 1>0\). This means Player 2 can legally make the same splitting move with the net effect of \[h_{i} \mapsto h_{i}-2,\] \[h_{i-1} \mapsto h_{i-1}+2,\text{ and}\] \[h_{i-2} \mapsto h_{i-2}+2,\] with all the other bins unchanged, so all the heights are still even. We can do a similar argument for the combine move which proves (1) and (2). We also solve the game when the starting position consists only of ones, twos, and threes. We denote the starting position with \(a\) ones, \(b\) twos, and \(c\) threes as the ordered triple \((a,b,c)\) from this point on. We will also often use the notation that an \(O\) or \(E\) in the place of the element of the tuple means that the corresponding variable is odd or even respectively (i.e., \((E,E,E)\) means that \(a,b\) and \(c\) are all even). Further, \(O^{\prime}\) and \(E^{\prime}\) mean that the corresponding variable is odd or even, but independent from the previous \(O\) and \(E\). **Theorem 1.6**.: _For any game starting with \(a\) ones, \(b\) twos, and \(c\) threes, we have the following forced wins._ \begin{tabular}{|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & _Player having forced win_ \\ \hline _Even_ & _Even_ & _Even_ & & _Player 2_ \\ \hline _Odd_ & _Odd_ & _Odd_ & & _Player 1_ \\ \hline _Even_ & _Odd_ & _Even_ & & _Player 1_ \\ \hline _Odd_ & _Even_ & _Odd_ & & _Player 1_ \\ \hline _Odd_ & _Even_ & _Even_ & \(a>c\) & _Player 2_ \\ \hline _Odd_ & _Even_ & _Even_ & \(a<c\) & _Player 1_ \\ \hline _Even_ & _Even_ & _Odd_ & \(a>c\) & _Player 1_ \\ \hline _Even_ & _Even_ & _Odd_ & \(a<c\) & _Player 2_ \\ \hline _Even_ & _Odd_ & _Odd_ & & _Player 1_ \\ \hline _Odd_ & _Odd_ & _Even_ & & _Player 1_ \\ \hline \end{tabular} Proof.: The first case is a corollary of Lemma 4.1, since all the bins have even height. 
The next three cases follow from the first case as Player 1 simply splits the 3, splits the 2, or combines the 1 and the 3 respectively to bring the game state into all three bins being even. The next few cases are a bit more work. We show the (Odd, Odd, Even) case, where \(a>c\), and put the rest in Appendix A.2. For this case, we assume \(a\) is odd and \(b\) and \(c\) are even with \(a>c\). We show Player 2 wins by force. First, assume Player 1 splits one of the threes. This gives the game state \[(a+1,b+1,c-1). \tag{4.1}\] Since \(c\) is even, we have \(c-1\geq 1\). This allows Player 2 to split another three, yielding \[(a+2,b+2,c-2). \tag{4.2}\] Note that we have \(a+2>c-2\). Moreover, \(a-2\) is odd while \(b+2\) and \(c-2\) remain even. By induction, Player 2 wins. Next, assume Player 1 splits one of the twos. This yields the game state \[(a+2,b-1,c). \tag{4.3}\] Since \(b\) is even, we have \(b-1\geq 1\). This allows Player 2 to split another two, yielding \[(a+4,b-2,c). \tag{4.4}\] Note we have \(a+4>c\). Moreover, \(a+4\) is odd while \(b-2\) and \(c\) remain even. By induction, Player 2 wins. Finally, assume Player 1 combines a one and a three. This yields the game state \[(a-1,b+2,c-1). \tag{4.5}\] Since \(c\) is even, we have \(c-1\geq 1\). Moreover, since \(a>c\) by assumption, we have \(a>1\). In particular, Player 2 is able to perform another combine, leading to \[(a-2,b+4,c-2). \tag{4.6}\] Note \(a-2>c-2\). Moreover, all parities are preserved. By induction, Player 2 wins. **Remark 4.2**.: We summarize two key points in the above proof. 1. The method in the above Theorem is constructive, so following it carefully provides a winning strategy. 2. The last two cases appear to not depend on whether \(a>c\) or \(a<c\), but the strategy to win changes in those cases. ## 5. The Build Up Game We move on to the _Build Up 1-2-3 game_. In this game, two players, say Player 1 and Player 2, take turns putting down a 1, 2 or 3 until their sum equals exactly \(n\), generating an ordered triple \((a,b,c)\). After reaching exactly \(n\), the players start playing the reverse Zeckendorf game on this ordered triple and start with the player who did not move last. Whoever wins this reversed Zeckendorf game wins the whole game. We solved this game. **Theorem 5.1**.: _For \(n\neq 4\),_ \[n=4 \implies\text{Player 1 wins}, \tag{5.1}\] \[n\text{ odd} \implies\text{Player 1 wins},\] (5.2) \[n\neq 4\text{ even} \implies\text{Player 2 wins}. \tag{5.3}\] Proof.: Using Theorem 1.6, we know who wins in each ordered triple. We provide a constructive proof in which we split the game into cases mod 4. We consider residue classes modulo 4 since any player may play 4 minus what was played previously and thus preserve parity modulo 4. This will be referred to as _nimming down_. An extra property is that the number of ones and threes put down is the same, and the number of twos is even. For \(n\equiv 0\,(4)\) and \(n\geq 8\), Player 2 should nim down until there is 8 left to play. From here, there are two cases. 1. Player 1 is left to play on \((E,E^{\prime},E)\): 1. If Player 1 puts down a 1 or a 3, then Player 2 puts down the other, leaving \((E+1,E^{\prime},E+1)\). 1. If Player 1 puts down a 1 or a 3, then Player 2 puts down the other, leaving player 1 to start on \((E+2,E^{\prime},E+2)\), so Player 2 wins. 2. If Player 1 puts down a 2, then Player 2 puts down a 1, forcing Player 1 to put down a 1. Player 2 then starts on \((E+3,E^{\prime}+1,E+1)\), which is a win for Player 2. 2. 
If Player 1 puts a 2 down, Player 2 puts down a 1, leaving Player 1 to play on \((E+1,E^{\prime}+1,E)\), with 5 total left to play. 1. If Player 1 puts down a 1 or a 3, Player 2 puts down the other, forcing Player 1 to put down a 1. This leaves Player 2 to start on \((E+3,E^{\prime}+1,E+1)\), which is a win for Player 2. 2. If Player 1 puts down a 2, then Player 2 puts down a 2, and Player 1 is forced to put down a 1. Player 2 then starts on \((E+2,E^{\prime}+3,E)\), which is a win for Player 2. 2. Player 1 is left to play on \((O,E,O)\): 1. If Player 1 puts down a 1, then Player 2 puts down a 2, so Player 1 plays on \((O+1,E+1,O)\), with a total of 5 left to play. 1. If Player 1 puts down a 1 or 3, then Player 2 puts down the other, forcing Player 1 to put down a 1. This leaves Player 2 to start on \((O+3,E+1,O+1)\), which is a win for Player 2. 2. If Player 1 puts down a 2, then Player 2 puts down a 2, forcing Player 1 to put down a 1. Player 2 starts on \((O+2,E+3,O)\), which is a win for Player 2. 2. If Player 1 puts down a 2 or 3, then Player 2 puts down the other, so Player 1 plays on \((O,E+1,O+1)\) with 3 total left. 1. If Player 1 puts down a 3, then Player 2 starts the game on \((O,E+1,O+2)\), which is a win for Player 2. 2. If Player 1 puts down a 1 or 2, then Player 2 puts down the other. Player 1 then starts the game on \((O+1,E+2,O+1)\), which is a win for Player 2. The other three cases are very similar. These, as well as the winning strategy for small values of \(n\), have been included in the appendix. ## 6. Further Directions and Conclusions There are numerous directions for future work. 1. While the authors believe Conjecture 1.3 is very much out of reach, Conjecture 1.2 can be proven by finding a family of Zeckendorf decompositions where Player 2 can force a win. 2. More generally, one might study other reversed games to see if any other games reveal such a complex winning structure upon reversal. 3. Future work could also try to solve the Reversed game for a larger family of starting positions, perhaps ones closer to the Zeckendorf decomposition. 4. Other modifications of the game can be considered, including a "stagnant one" variation, where all chips with value one are removed from the game (or equivalently, any move requiring a one cannot be played). We conclude with a final, bold conjecture. **Conjecture 6.1**.: _The winning strategy for the forward Zeckendorf games is dependent upon who wins the reverse Zeckendorf game._ This is a potential explanation of why a constructive proof of why Player 2 always wins the forwards game has remained elusive. ## Appendix A Additional Details ### Table of Wins for Reverse Zeckendorf Game The following table lists which player wins for the starting number \(n\) as well as the number of vertices and edges of the associated directed graph. A grayed square means Player 2 wins while a white one means Player 1 wins. \begin{tabular}{|c|c|c|} \hline \(n\) & Result & Edges & Vertices \\ \hline [MISSING_PAGE_POST] \end{tabular} ### Table of Wins for Reverse Zeckendorf Game The following table lists which player wins for the starting number \(n\) as well as the number of vertices and edges of the associated directed graph. A grayed square means Player 2 wins while a white one means Player 1 wins. 
\begin{tabular}{|c||c|c|} \hline \(n\) & Result & Edges & Vertices \\ \hline [MISSING_PAGE_POST] \end{tabular} ### Proof of Theorem 1.6 \begin{tabular}{|c||c|c|} \hline \(n\) & Result & Edges & Vertices \\ \hline [MISSING_PAGE_POST] \end{tabular} **Theorem 1.6**.: _For any game starting with \(a\) ones, \(b\) twos, and \(c\) threes, we have the following forced wins._ \begin{tabular}{|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & & _Player having forced win_ \\ \hline _Even_ & _Even_ & _Even_ & & _Player 2_ \\ \hline _Odd_ & _Odd_ & _Odd_ & & _Player 1_ \\ \hline _Even_ & _Odd_ & _Even_ & & _Player 1_ \\ \hline _Odd_ & _Even_ & _Odd_ & & _Player 1_ \\ \hline _Odd_ & _Even_ & _Even_ & \(a>c\) & _Player 2_ \\ \hline _Odd_ & _Even_ & _Even_ & \(a<c\) & _Player 1_ \\ \hline _Even_ & _Even_ & _Odd_ & \(a>c\) & _Player 1_ \\ \hline _Even_ & _Even_ & _Odd_ & \(a<c\) & _Player 2_ \\ \hline _Even_ & _Odd_ & _Odd_ & & _Player 1_ \\ \hline _Odd_ & _Odd_ & _Even_ & & _Player 1_ \\ \hline \end{tabular} Proof.: For the \((O,E,E)\) case, where \(a<c\), we show Player 1 wins by force. Player 1 wins by performing a combine with a one and a three, yielding the game state \[(a-1,b+2,c-1).\] (A.1) Note \(a-1<c-1\). This also yields the \((E,E,O)\) case with Player 2 on move. By work above, the player on move loses which means Player 1 wins. For the \((E,E,O)\) and \(a>c\) case, we show Player 1 wins by force. Player 1 wins by combining a one and a three. This gives the game state \[(a-1,b+2,c-1).\] (A.2) Then, \(a-1\) is odd, \(b+2\) is even, and \(c-1\) is even. Moreover, we have \(a-1>c-1\). Therefore, by the \((O,E,E)\) proof, the player on move loses. Since Player 2 is on move, then Player 1 wins. For the \((E,E,O)\) and \(a<c\) case, we show Player 2 wins by force. First, assume Player 1 combines a one and a three. This gives the game state \[(a-1,b+2,c-1).\] (A.3) Since Player 1 was able to combine a one and a three, then \(a\geq 2\) which implies \(c>2\). We have \(a-1,c-1\geq 1\), and so Player 2 may perform another combine to get \[(a-2,b+4,c-2).\] (A.4) The point is that \(a-2<c-2\) and all parities are preserved. Eventually, Player 1 will run out of ones and threes to combine and is forced to either split a three or a two. Both of these cases are covered next, and are wins for Player 2. (Informally, the main idea in this case is to force Player 1 to run out combines, upon which they will be forced to perform a split. We will show that all splits are losing for Player 1, and so Player 2 forces a win by running Player 1 out of combines.) Assume Player 1 splits one of the twos. This gives the game state \[(a+2,b-1,c).\] (A.5) Now, if \(a+3>c-1\), Player 2 wins by splitting one of the threes. (Since \(c\) is odd, a three exists.) This yields the game state \[(a+3,b,c-1).\] (A.6) At this point, \(a+3\) is odd, \(b\) is even, and \(c-1\) is even. Moreover, we have \(a+3>c-1\). By work above, the player on move loses. Since Player 1 is on move, Player 2 wins. On the other hand, if \(a+3<c-1\) (i.e., \(a+4<c\)), Player 2 should copy Player 1 and split another two. Note that a two exists; since \(b\) is even and Player 1 was able to split a two, we must have \(b\geq 2\). This forces \(b-1\geq 1\), and so a two is available. This yields the game state \[(a+4,b-2,c).\] (A.7) The upshot is the number of ones is increasing, and so Player 2 will eventually be able to reduce to the situation analyzed in the \(a+3>c-1\) case. The parities are preserved and \(a+4<c\). Therefore, Player 2 wins. 
Finally, assume Player 1 splits one of the threes. This yields the game state \[(a+1,b+1,c-1).\] (A.8) If \(c=1\), then \(c-1=0\). Since \(a<c\), we get \(a=0\). The point is that we have the game state \((1,b+1,0)\) with Player 2 on move. The only option for both players at this point is to keep splitting twos. Since \(b+1\) is odd, Player 1 will run out of splits first, and so Player 2 wins. Now, assume \(c>1\). If \(a+3>c-1\), Player 2 wins by splitting a two, yielding the position \[(a+3,b,c-1).\] (A.9) Note \(a+3\) is odd, \(b\) is even, and \(c\) is even. Since \(a+3>c-1\), then earlier work implies the player on move loses. As Player 1 is on move, Player 2 wins. Finally, if \(a+3<c-1\), Player 2 wins by splitting a three, yielding the position \[(a+2,b+2,c-2).\] (A.10) All parities are preserved, and \(a+2<c-2\) since \(a+3<c-1\). By induction, Player 2 wins. In all cases, we have shown Player 2 has a forced win. For the \((E,O,O)\) case, we show Player 1 wins by force. The winning strategy depends on the number of ones versus the number of threes. First, assume \(a+2>c\); that is, there is a large number of ones. Player 1 wins by splitting a three, yielding \[(a+1,b+1,c-1).\] (A.11) Note \(a+1\) is now odd, while \(b+1\) and \(c-1\) are even. Furthermore, we have \(a+1>c-1\) since \(a+2>c\). By earlier work, the player on move loses. Since Player 2 is on move, then Player 1 wins. On the other hand, we assume \(a+2<c\); that is, there is a small number of ones. Player 1 wins by splitting a two, yielding the game state \[(a+2,b-1,c).\] (A.12) Note \(a+2\) is even, \(b-1\) is even, and \(c\) is odd. Moreover, we have \(a+2<c\). By earlier work, the player on move loses. Since Player 2 is on move, then Player 1 wins. In either case, we have exhibited a winning strategy for Player 1, and so Player 1 wins by force. For the \((O,O,E)\) case, we assume \(a\) and \(b\) are odd and \(c\) is even. We show Player 1 wins by force. As before, the winning strategy depends on the number of ones versus the number of threes. First, assume \(a+2<c\). Player 1 wins by splitting a three, yielding \[(a+1,b+1,c-1).\] (A.13) Note \(a+1\) and \(b+1\) are now even, while \(c-1\) is odd. Furthermore, we have \(a+1>c-1\) due to \(a+2<c\). By earlier work, the player on move loses. As Player 2 is on move, Player 1 wins. Finally, assume \(a+2>c\). Player 1 wins by splitting a two, yielding the game state \[(a+2,b-1,c).\] (A.14) Note \(a+2\) is odd, \(b-1\) is even, and \(c\) is even. Moreover, we have \(a+2>c\). By earlier work, the player on move loses. As Player 2 is on move, Player 1 wins. In either case, we have exhibited a winning strategy for Player 1, and so Player 1 wins by force. ### Proof of Theorem 5.1 **Theorem 5.1**.: _For \(n\neq 4\),_ \[n=4 \implies\text{Player 1 wins}, \tag{5.1}\] \[n\ odd \implies\text{Player 1 wins},\] (5.2) \[n\neq 4\ even \implies\text{Player 2 wins}. \tag{5.3}\] Proof.: For \(n\equiv 1\,(4)\) and \(n\geq 9\), Player 1 puts down a 3, and then nims down to 6. At this point, the board is either \((O,E,O+1)\) or \((E,E^{\prime},E+1)\). 1. If Player 2 puts down a 1, the game state is either \((O+1,E,O+1)\) or \((E+1,E^{\prime},E+1)\). 1. In the case of \((O+1,E,O+1)\), Player 1 puts down a 2 leaving \((O+1,E+1,O+1)\). 1. If Player 2 puts down a 1 or 2, then Player 1 puts down the other. Player 2 then starts on \((O+2,E+2,O+1)\), which is a win for Player 1. 2. If Player 2 puts down a 3, then Player 1 starts on \((O+1,E+1,O+2)\) which is a win for Player 1. 2. 
In the case of \((E+1,E^{\prime},E+1)\), Player 1 puts down a 3, leaving \((E+1,E^{\prime},E+2)\). 1. If Player 2 puts down a 1, then player 1 is forced to put down a 1. Player 2 then starts on \((E+3,E^{\prime},E+2)\), which is a win for Player 1. 2. If Player 2 puts down a 2, then Player 1 starts on \((E+1,E^{\prime}+1,E+2)\), which is a win for Player 1. 2. If Player 2 puts down a 2 or 3, then Player 1 puts down the other, forcing Player 2 to put down a 1. Player 1 then starts on \((O+1,E+1,O+2)\) or \((E+1,E^{\prime}+1,E+2)\), both of which are wins for Player 1. For \(n\equiv 2\,(4)\) and \(n\geq 6\), Player 2 nims down until there is 6 left. The game state is either \((O,E,O)\) or \((E,E^{\prime},E)\). 1. If Player 1 puts down a 1, then Player 2 is to move on \((O+1,E,O)\) or \((E+1,E^{\prime},E)\), with 5 total left. 1. In the case of \((O+1,E,O)\), Player 2 puts down a 3, leaving \((O+1,E,O+1)\), with a total of 2 left to play. 1. If Player 1 puts down a 1, Player 2 is forced to put down a 1. Player 1 then starts on \((O+3,E,O+1)\), which is a win for Player 2. 2. If Player 1 puts down a 2, Player 2 starts on \((O+1,E+1,O+1)\), which is a win for Player 2. 2. In the case of \((E+1,E^{\prime},E)\), Player 2 puts down a 2, leaving \((E+1,E^{\prime}+1,E)\). 1. If Player 1 puts down a 1 or 2, then Player 2 puts the other down. Player 1 then starts on \((E+2,E^{\prime}+2,E)\), which is a win for Player 2. 2. If Player 1 puts down a 3, then Player 2 starts on \((E+1,E^{\prime}+1,E+1)\), which is a win for Player 2. 2. If Player 1 puts down a 2 or a 3, then Player 2 puts down the other, forcing Player 1 to play a 1. Player 2 then starts on \((O+1,E+1,O+1)\) or \((E+1,E^{\prime}+1,E+1)\), both of which are wins for Player 2. For \(n\equiv 3\,(4)\), Player 1 puts down a 2, and then nims down to 1, upon which Player 2 puts down a 1. Player 1 then starts on an ordered triple \((a,b,c)\) with \(b\) odd since nimming down preserves the parity of \(twos\) and there is a single 2 put down at the start. In all cases where \(b\) is odd, the player that starts wins, so Player 1 wins. For \(n=1\), Player 1 puts down a 1. Player 2 then loses as they cannot move on \((1,0,0)\). For \(n=2\), if Player 1 puts down a 1, Player 2 must do the same, and so Player 2 wins as Player 1 cannot move on \((2,0,0)\). If Player 2 puts down a 2, Player 2 starts on \((1,1,0)\), and therefore Player 2 wins. For \(n=4\), Player 1 puts down a 3, forcing Player 2 to put down a 1. Player 1 starts on \((1,0,1)\), a winning position for Player 1. For \(n=5\), Player 1 puts down a 2. 1. If Player 2 puts down a 3, then Player 1 starts on \((0,1,1)\), a win for Player 1. 2. If Player 2 puts down a 1 or a 2, then Player 1 puts down the other. Player 2 then starts on \((1,2,0)\), which is a win for Player 1. ## Appendix B Code The program we used to brute force solve who has a winning strategy is listed below. It was coded in Jupyter Notebook, but it can be run by any program that can run python. The computation complexity is about \(O(\exp(\sqrt{n}))\), with the program taking about 2 hours for \(n=129\) and about 24 hours for \(n=144\). ``` importnetworkxasnx frommatplotlibimportpyplotasplt fromnetworkx.drawing.nx_agraphimportgraphviz_layout defis_game_a_loss(current_state): """ Input:Alistrepresentingthecurrentstateofthegame Output:Boolean ReturnsTrueifthegameisover(i.e.,isa lossfortheplayernexttomove) ReturnsFalseifthegameisnotover. 
""" returnsum(current_state[1:])==0 defcombine(current_state): """ Input: A list representing the current state of the game Output: A list of lists, where each list is a possible next state of the game Finds all the potential combine moves in the current state and creates the next states after the combine move """ future_states = [] for i, val inenumerate(current_state): tmp = list(current_state). copy() if i >= 3 and val and current_state[i-3]: tmp[i] -= 1 tmp[i-3] -= 1 tmp[i-1] += 2 future_states.append(tuple(tmp)) if len(current_state) > 2: if current_state[0] and current_state[2]: tmp = list(current_state).copy() tmp[0] -= 1 tmp[2] -= 1 tmp[1] += 2 future_states.append(tuple(tmp)) return set(future_states) defsplit(current_state): """ Input: A list representing the current state of the game Output: A list of lists, where each list is a possible next state of the game Finds all the potential split moves in the current state and creates the next states after the split move """ if is_game_a_loss(current_state): return "An Error Has Occurred" future_states = [] for i, val inenumerate(current_state): tmp = list(current_state).copy() if i == 0: continue if i == 1 and val: tmp[1] -= 1 tmp[0] += 2 future_states.append(tuple(tmp)) elif val: tmp[i] -= 1 tmp[i-1]+=1 tmp[i-2]+=1 future_states.append(tuple(tmp)) returnset(future_states) defnearestSmallerEqFib(n): """ Input: integer n Output: tuple, where the first entry is the greatest Fibonacci number smaller than n and the second is the index of that Fibonacci number """ # Cornercases if (n == 0): return "404:BROKEN" elif n == 1: return (1,n) # Finds the greatest Fibonacci Numbers smaller # than n. f1, f2, f3 = 0, 1, 1 index = 0 while (f3 <= n): index += 1 f1 = f2; f2 = f3; f3 = f1 + f2; return (f2, index); def int_to_zeck(n): """ Input: integer n Output: tuple that represents the Zeckendorf decomposition of n """ arr = [] while (n>0): f_i, i = nearestSmallerEqFib(n); arr.append(i) n = n-f_i arr2 = [0] * max(arr) for i in arr: arr2[i-1] += 1 return tuple(arr2) def Fibonacci(n): """ Input: integer n Output: integer F_n, the n-th Fibonacci number (where F_2 = 2) """ if n == 0: return 1 # Check if n is 1,2 # it will return 1 elif n == 1: return 1 else: return (Fibonacci(n-1) + Fibonacci(n-2)) def zeck_to_int(zeck_tuple): """ Input: a tuple representing the Zeckendorf decomposition of a number n Output: the integer n """ tot = 0 fori, val in enumerate(zeck_tuple): tot += val * Fibonacci(i+1) return tot def game_solver(number, draw_graph = False, graph_labels = False): """ Input: an integer n (or a tuple representing a zeckendorf decomposition) Output: a tuple of the number, its zeckendorf decomposition, the winner, the number of edges, and vertices Alsocand draw the graph of the game """ # Checks whether the input is a tuple or an integer, and # converts it to the zeckendorf decomposition if the latter if type(number) is tuple: initial_state = number number = zeck_to_int(number) else: initial_state = int_to_zeck(number) edges = [] current_states = [initial_state] calculated_states = [] # Continually loops through the list of possible # states to generate the next states for the game whileTrue: future_states = [] forcurrent_state in current_states: if current_state in calculated_states: continue next_states = set() if is_game_a_loss(current_state): continue next_states = next_states.union(split(current_state)) next_states = next_states.union(combine(current_state)) edges += ((current_state, state) for state in next_states) future_states += 
next_states.copy() calculated_states += [current_state] future_states = set(future_states) # If therearenostates left to becalculated, # wemustbeattheendnode, i.e., game is over if len(future_states) == 0: break current_states = future_states.copy() # Initializesthegraphofthegame G = nx.DiGraph() foredgeinset(edges): G.add_edge(edge[0], edge[1]) # Setstheinitialwin/lossstateforthevertices,allaresettoFalse i.e.,notcalculated)excepttheendnode,whichisaloss data=dict() fornodeinG.nodes: ifis_game_a_loss(node): data[node] = "L" else: data[node] = False """ Goesthroughthegametreeinreverse andcalculatesthestateofeachvertex Ifanyvertexleadstoonethatisaloss, thatvertexisautomaticallywin Ifallthechildnodesofavertexareawin, thenthatvertexisaloss """ not_done=True whilenot_done: fornodeinG.nodes: ifdata[node]:continue children_states_L=[data[node]=="L" fornodeinG.neighbors(node)] children_states_W=[data[node]=="W" fornodeinG.neighbors(node)] ifany(children_states_L): data[node]="W" elifall(children_states_W): data[node]="L" not_done=notall(data[node]fornodeinG.nodes) #Colorstheverticesbasedonwhethertheyarewinsorlosses color_map=[] fornodeinG.nodes: ifsum(node[1:])==0: color_map.append("black") elifdata[node]=="W": color_map.append("green") elifdata[node]=="L": color_map.append("red") ifnode==initial_state: ifdata[node]=='W':winner=1 else:winner=2 #Drawsthegraph ifdraw_graph: plt.figure(3,figsize=(20,20)) pos=graphviz_layout(G,prog='dot') nx.draw_networkx(G,pos,node_color=color_map,with_labels=graph_labels) return(number,"".join([str(i)foriinitial_state]), winner,len(G.edges),len(G.nodes)) #Example:shouldoutputa5-tuple(7,'0101',1,16,10) #andagraphrepresentingthegame game game_solver(7,True,True)
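```

As a lightweight cross-check of Theorem 1.6, note that the reversed game restricted to ones, twos, and threes is closed under its three legal moves (split a three, split a two, combine a one and a three), so a memoized win/loss recursion over states \((a,b,c)\) can be compared directly against the parity table. The short sketch below is ours (the function names and the search range are illustrative) and is independent of the listing above.

```
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(a, b, c):
    """True if the player to move wins the reversed game from a ones, b twos, c threes."""
    moves = []
    if c:
        moves.append((a + 1, b + 1, c - 1))   # split a three
    if b:
        moves.append((a + 2, b - 1, c))       # split a two
    if a and c:
        moves.append((a - 1, b + 2, c - 1))   # combine a one and a three
    # With no twos or threes left, the player to move has no move and loses.
    return any(not first_player_wins(*m) for m in moves)

def table_prediction(a, b, c):
    """Winner (1 or 2) predicted by the parity table of Theorem 1.6."""
    key = (a % 2, b % 2, c % 2)
    if key in {(1, 1, 1), (0, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 0)}:
        return 1
    if key == (0, 0, 0):
        return 2
    if key == (1, 0, 0):                      # (Odd, Even, Even)
        return 2 if a > c else 1
    return 1 if a > c else 2                  # (Even, Even, Odd)

mismatches = [(a, b, c)
              for a in range(10) for b in range(10) for c in range(10)
              if (1 if first_player_wins(a, b, c) else 2) != table_prediction(a, b, c)]
print("disagreements with the Theorem 1.6 table:", mismatches)   # expected: []
```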
2309.15027
On the Distances to the X-ray Binaries Cygnus X-3 and GRS 1915+105
In this paper we significantly improve estimates of distance to the X-ray binary systems Cyg X-3 and GRS 1915+105. We report a highly accurate trigonometric parallax measurement for Cyg X-3 using the VLBA at 43 GHz, placing the source at a distance of 9.67+0.53-0.48 kpc. We also use Galactic proper motions and line-of-sight radial velocity measurements to determine 3-dimensional (3D) kinematic distances to both systems, under the assumption that they have low peculiar velocities. This yields distances of 8.95+-0.96 kpc for Cyg X-3 and 9.4+-0.6 (statistical)+-0.8 (systematic) for GRS 1915+105. The good agreement between parallax and 3D kinematic distances validates the assumption of low peculiar velocities, and hence small natal kicks, for both of the systems. For a source with a low peculiar velocity, given its parallax distance, Cyg X-3 should have a Vlsr near -64+-5 km/s. Our measurements imply a slightly higher inclination angle, and hence lower black hole mass for GRS 1915+105 than found from previous work by Reid et al (2014) and strengthen arguments from X-ray polarization that Cyg X-3 would be an ultraluminous X-ray source if viewed face-on.
M. J. Reid, J. C. A. Miller-Jones
2023-09-26T15:57:30Z
http://arxiv.org/abs/2309.15027v1
# On the Distances to the X-ray Binaries Cygnus X-3 and GRS 1915+105 ###### Abstract In this paper we significantly improve estimates of distance to the X-ray binary systems Cyg X-3 and GRS 1915+105. We report a highly accurate trigonometric parallax measurement for Cyg X-3 using the VLBA at 43 GHz, placing the source at a distance of \(9.67^{+0.53}_{-0.48}\) kpc. We also use Galactic proper motions and line-of-sight radial velocity measurements to determine 3-dimensional (3D) kinematic distances to both systems, under the assumption that they have low peculiar velocities. This yields distances of \(8.95\pm 0.96\) kpc for Cyg X-3 and \(9.4\pm 0.6\) (statistical) \(\pm 0.8\) (systematic) for GRS 1915+105. The good agreement between parallax and 3D kinematic distances validates the assumption of low peculiar velocities, and hence small natal kicks, for both of the systems. For a source with a low peculiar velocity, given its parallax distance, Cyg X-3 should have a \(V_{\rm LSR}\) near \(-64\pm 5\) km s\({}^{-1}\). Our measurements imply a slightly higher inclination angle, and hence lower black hole mass for GRS 1915+105 than found from previous work by Reid et al. (2014) and strengthen arguments from X-ray polarization that Cyg X-3 would be an ultraluminous X-ray source if viewed face-on. ## 1 Introduction Knowledge of the distance to an astronomical source is fundamental for estimating its true nature, including its mass and luminosity. The case of the high-mass X-ray binary Cyg X-1 is an excellent example. It was the first binary suggested to include a black hole, based on its periodic velocity excursions and the lack of an observable companion (Webster & Murdin, 1972; Bolton, 1972). However, for nearly 40 years, one could not be certain whether the companion was a black hole or a neutron star, since distance estimates ranged by more than a factor of two, from about 1.1 to 2.5 kpc (see, e.g., Caballero-Nieves et al., 2009), and at the lower end of the range of distances companion masses could be below about 5 M\({}_{\odot}\). This problem was resolved by Reid et al. (2011) and Miller-Jones et al. (2021), using the Very Long Baseline Array (VLBA) to observe the radio emission from the compact companion and measure a trigonometric parallax relative to background quasars, with the latter study yielding a distance of \(2.22^{+0.18}_{-0.17}\) kpc. This firmly established that the Cyg X-1 system contains a black hole and a massive young star. However, accurate parallaxes for more distant X-ray binaries have been hard to obtain. In particular, two well-studied X-ray binaries, GRS 1915+105 and Cyg X-3, have large distance uncertainties, which limit our understanding of their nature. Reid et al. (2014) observed GRS 1915+105 with the VLBA and measured a _relative_ parallax to a nearby (in both projection and distance) water maser associated with a massive, young star. Combining the relative parallax of GRS 1915+105 with the absolute parallax of the maser (Wu et al., 2014), and prior constraints on distance based on models of jet kinematics, resulted in a distance estimate for GRS 1915+105 of \(8.6^{+2.0}_{-1.6}\) kpc and led to an estimate of its compact companion mass of \(12.4^{+2.0}_{-1.8}\) M\({}_{\odot}\). Previous distance estimates had only indirectly constrained it to be larger than about 6 kpc, inferred from H i absorption (Mirabel & Rodriguez, 1994), and smaller than 12.5 kpc, based on the ratio of apparent speeds of the approaching and receding jets (Fender et al., 1999). 
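For intrinsically symmetric, ballistic ejecta, the ratio of the approaching and receding proper motions fixes \(\beta\cos i\), and requiring \(\beta\leq 1\) caps the distance at \(c/\sqrt{\mu_{\rm app}\mu_{\rm rec}}\); conversely, once the distance is known, the same pair of proper motions yields the inclination and intrinsic jet speed (the relation used in Section 5.1 below). The following short script is only illustrative (it is not the code used in this work, and the representative proper motions, of roughly the size reported by Mirabel & Rodriguez 1994, will not exactly reproduce the 12.5 kpc limit quoted above):

```
import numpy as np

C_KM_S = 2.9979e5                        # speed of light, km/s
KPC_KM = 3.0857e16                       # kilometres per kiloparsec
MAS_RAD = np.radians(1.0 / 3.6e6)        # radians per milliarcsecond

def mas_per_day_to_rad_per_s(mu):
    return mu * MAS_RAD / 86400.0

# Representative two-sided proper motions (mas/day) for GRS 1915+105-like ejecta.
mu_app, mu_rec = 17.6, 9.0
beta_cos_i = (mu_app - mu_rec) / (mu_app + mu_rec)

# Requiring beta <= 1 limits the distance to c / sqrt(mu_app * mu_rec).
mu_app_r = mas_per_day_to_rad_per_s(mu_app)
mu_rec_r = mas_per_day_to_rad_per_s(mu_rec)
d_max_kpc = C_KM_S / np.sqrt(mu_app_r * mu_rec_r) / KPC_KM
print(f"beta*cos(i) = {beta_cos_i:.2f}, maximum distance = {d_max_kpc:.1f} kpc")

# With a known distance, tan(i) = (2d/c) * mu_app*mu_rec / (mu_app - mu_rec),
# which then gives the intrinsic jet speed beta.
d_kpc = 9.4
tan_i = (2.0 * d_kpc * KPC_KM / C_KM_S) * mu_app_r * mu_rec_r / (mu_app_r - mu_rec_r)
incl = np.arctan(tan_i)
beta = beta_cos_i / np.cos(incl)
print(f"at d = {d_kpc} kpc: i = {np.degrees(incl):.0f} deg, beta = {beta:.2f}")
```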
Determining the distance to Cyg X-3 has been less successful than for GRS 1915+105. Dickey (1983) noted that there is absorption from interstellar H i toward Cyg X-3 from Local Standard of Rest velocities, \(V_{\rm LSR}\), of zero to at least \(-70\) km s\({}^{-1}\), suggesting a lower limit for its distance of \(>11.6\times(R_{0}/10\) kpc), where \(R_{0}\) is the distance to the Galactic center. Predehl et al. (2000) compared the angular extent of the X-ray halo of Cyg X-3 with the time delay of X-rays scattered by intervening dust and estimated a distance of \(9^{+4}_{-2}\) kpc. Ling et al. (2009) re-analyzed the X-ray data and, assuming that the scattering occurs in the Cyg OB2 association of young stars at 1.7 kpc, estimated a distance of Cyg X-3 of \(7.2^{+0.3}_{-0.5}\) kpc. However, allowing the Cyg OB2 distance to be between 1.38 and 1.82 kpc places Cyg X-3 between 3.4 and 9.3 kpc distant. In this paper we present a very accurate trigonometric parallax for the high-mass X-ray binary Cyg X-3, as well as an independent estimate of its distance using 3-dimensional (3D) kinematics. For the micro-quasar GRS 1915+105, which already has a trigonometric parallax measurement, we also provide an independent estimate of distance using 3D kinematics. Finally, we carefully examine the fundamental assumption of kinematic distances - that the sources have only small to moderate (\(\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\mathchar 536$}\hss}\raise 2.0pt \hbox{$\mathchar 318$}}20\) km s\({}^{-1}\)) non-circular motions - as this has strong implications for how some compact stars can form, since it requires small natal "kicks." ## 2 Estimating distance with 3D motions Reid (2022) analyzed the use of proper motions in addition to line-of-sight velocities to obtain 3D kinematic distance estimates, concluding that they had great potential for sources more distant than about 8 kpc. 3D kinematic distance estimates compare the full observed velocity vector to a model of Galactic rotation, with distance as an adjustable parameter. The fundamental Galactic parameters - the distance to the Galactic center (\(R_{0}\)), the circular rotation speed of the Sun (\(\Theta_{0}\)), and the rotation curve of the Milky Way - are now known to near 1% accuracy (see Reid et al., 2019; Do et al., 2019; Reid & Brunthaler, 2020; GRAVITY Collaboration et al., 2021, for details). For distant sources, measurements of proper motion can require orders of magnitude less precision compared to parallax measurements for similar fractional distance uncertainty. Thus, 3D kinematic distances offer an opportunity to refine distance estimates for sources that follow Galactic rotation. For the Galactic model needed for kinematic distance estimates, we assume the values of \(R_{0}=8.15\) kpc and \(\Theta_{0}=236\) km s\({}^{-1}\) from Reid et al. (2019) and their Solar Motion parameters (\(U_{\odot}=10.6,V_{\odot}=10.7,W_{\odot}=7.6\)) km s\({}^{-1}\). For the rotation curve of the Milky Way, we adopt that of Reid et al. (2019, documented in their appendix B). This rotation curve model follows the 2-parameter "universal" formulation of Persic et al. (1996) and was obtained by fitting 147 maser sources with Galactocentric radii between 4 and 15 kpc using measured 3D motions and "gold standard" parallax distances. 
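The construction described in the next paragraph is straightforward to sketch numerically. Below is a minimal illustration of the idea (not the code used in this work): planar geometry, a flat rotation curve in place of the universal curve adopted above, only the \(V_{\rm LSR}\) and longitude proper-motion terms, and an assumed \(\sim\)7 km s\({}^{-1}\) random-motion term added in quadrature. With inputs resembling those adopted for GRS 1915+105 in Section 3 below, the posterior peaks near the far kinematic distance of roughly 9 kpc.

```
import numpy as np

R0, THETA0 = 8.15, 236.0      # kpc, km/s (Reid et al. 2019)
USUN, VSUN = 10.6, 10.7       # in-plane solar peculiar motion, km/s
KAPPA = 4.74                  # (km/s) per (mas/yr kpc)
SIG_RANDOM = 7.0              # assumed random (peculiar) motion, km/s

def model(l_deg, d):
    """Model V_LSR (km/s) and mu_l (mas/yr) for a circular orbit at distance d (kpc)."""
    l = np.radians(l_deg)
    x, y = R0 - d * np.cos(l), d * np.sin(l)     # source position; Sun at (R0, 0)
    R = np.hypot(x, y)
    v_src = THETA0 * np.array([-y, x]) / R       # circular velocity at the source
    v_sun = np.array([-USUN, THETA0 + VSUN])     # full solar velocity
    n_hat = np.array([-np.cos(l), np.sin(l)])    # line-of-sight unit vector
    t_hat = np.array([np.sin(l), np.cos(l)])     # direction of increasing longitude
    v_lsr = (v_src - np.array([0.0, THETA0])) @ n_hat
    mu_l = (v_src - v_sun) @ t_hat / (KAPPA * d)
    return v_lsr, mu_l

def distance_posterior(l_deg, vlsr_obs, sig_v, mul_obs, sig_mul, d_grid):
    post = np.zeros_like(d_grid)
    for i, d in enumerate(d_grid):
        v_mod, mu_mod = model(l_deg, d)
        sv = np.hypot(sig_v, SIG_RANDOM)
        sm = np.hypot(sig_mul, SIG_RANDOM / (KAPPA * d))
        post[i] = np.exp(-0.5 * ((vlsr_obs - v_mod) / sv) ** 2
                         - 0.5 * ((mul_obs - mu_mod) / sm) ** 2)
    return post / post.max()

# GRS 1915+105-like inputs (uncertainties illustrative):
# l ~ 45.37 deg, V_LSR ~ 30.4 km/s, mu_l ~ -6.98 mas/yr.
d_grid = np.linspace(0.5, 14.0, 271)
post = distance_posterior(45.37, 30.4, 3.0, -6.98, 0.05, d_grid)
print("peak of the combined posterior: %.2f kpc" % d_grid[np.argmax(post)])
```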
Following Reid (2022), we estimate 3D kinematic distances by forming likelihoods as a function of distance for three components of motion: the velocity with respect to the Local Standard of Rest, \(V_{\rm LSR}\), the proper motion in Galactic longitude, \(\mu_{l}\), and the proper motion in Galactic latitude, \(\mu_{b}\). Assuming a flat prior on distance, the product of these likelihoods gives the combined posterior distribution function (PDF) for distance. ## 3 GRS 1915+105 The proper motion of GRS 1915+105 has been measured relative to compact extra-galactic sources by Dhawan et al. (2007) and Reid et al. (2014) using the VLBA of the National Radio Astronomy Observatory1. Dhawan et al. (2007) observed predominantly at 8.4 GHz between 1996 and 2006 and achieved single-epoch astrometric precision of \(\approx 1\) mas. They measured the eastward and northward motions to be \(\mu_{\alpha}=-2.86\pm 0.07\) mas y\({}^{-1}\) and \(\mu_{\delta}=-6.20\pm 0.09\) mas y\({}^{-1}\). Independently, Reid et al. (2014) observed at 22 GHz between 2008 and 2013 and with improved astrometric techniques, including using "geodetic blocks" to measure and remove residual tropospheric delays (Reid et al., 2009), and achieved single-epoch precision of \(\approx 0.2\) mas, yielding \(\mu_{\alpha}=-3.19\pm 0.03\) mas y\({}^{-1}\) and \(\mu_{\delta}=-6.24\pm 0.05\) mas y\({}^{-1}\). The variance-weighted average is \(\mu_{\alpha}=-3.14\pm 0.03\) mas y\({}^{-1}\), \(\mu_{\delta}=-6.23\pm 0.04\) mas y\({}^{-1}\), which converts to motions in Galactic longitude and latitude of \(\mu_{l}=-6.98\pm 0.05\) mas y\({}^{-1}\), \(\mu_{b}=-0.12\pm 0.01\) mas y\({}^{-1}\). Reid et al. (2014) recalibrated the data of Steeghs et al. (2013) and estimated the heliocentric line-of-sight velocity \(\gamma=+12.3\pm 1.0\) km s\({}^{-1}\), corresponding to a \(V_{\rm LSR}=30.4\) km s\({}^{-1}\). Footnote 1: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Figure 1 displays the likelihood functions for the three components of motion of GRS 1915+105. The line-of-sight velocity (\(V_{\rm LSR}\)) gives maximum likelihood distances of 1.80 or 9.65 kpc; the Galactic longitude proper motion favors a distance of 8.15 kpc; and the latitude motion weakly constrains the distance (favoring a large value). The combined 3D kinematic distance estimate for GRS 1915+105 is \(9.4\pm 0.6\) (statistical) \(\pm\) 0.8 (systematic) kpc, where the statistical uncertainty is a Gaussian 1\(\sigma\) width of the combined PDF and the systematic uncertainty is half of the separation of the peaks of the line-of-sight (\(V_{\rm LSR}\)) and longitude motion (\(\mu_{l}\)) likelihoods. Using the same fundamental parameters of the Milky Way adopted for the model for the 3D kinematic distance, the non-circular (peculiar) motion components for GRS 1915+105 are \((U,V,W)=(18\pm 2,8\pm 22,2\pm 2)\) km s\({}^{-1}\), where \(U\) is toward the Galactic center, \(V\) is in the direction of Galactic rotation, and \(W\) is toward the North Galactic Pole. Thus, the magnitude of the peculiar motion of GRS 1915+105 is fairly small (\(\sim 20\) km s\({}^{-1}\)). ## 4 Cyg X-3 ### Trigonometric Parallax Previous attempts to measure the parallax of Cyg X-3 used the VLBA under program BM343. Those observations at 12 GHz employed background quasars for calibration which were separated from Cyg X-3 by \(\approx 3^{\circ}\). 
Owing to this large separation, these observations yielded only a marginal parallax detection. The lack of compact quasars near Cyg X-3 is a result of strong scattering from interstellar electrons over a few degrees on the sky toward the Cygnus X region. Such scattering increases the apparent angular size of radio sources, making them heavily resolved on long interferometer baselines. Since scattering angles decrease as the inverse-square of observing frequency, in order to minimize scatter broadening and find a Figure 1: Likelihoods for three components of motion, \(V_{\rm LSR}\) in (_blue_), Galactic longitude (_red solid line_) and latitude (_red dashed line_), as a function of distance for GRS 1915+105. The product of the three likelihoods is shown in _black_, indicating a distance of \(9.4\pm 0.6\) (statistical) \(\pm 0.8\) (systematic) kpc. The parallax-based distance from Reid et al. (2014) is indicated with the \(\pi\) symbol and its 68% confidence range is given above it. The range of distances for spiral arms along the line-of-sight are indicated below the distance axis. closer background source, we surveyed continuum radio sources within \(2^{\circ}\) of Cyg X-3 with the VLBA at 43 GHz and found one, J2033+4000, which was relatively compact and separated by only \(1^{\circ}\) from Cyg X-3. In VLBA program BR212, we observed Cyg X-3 and J2033+4000 at 43 GHz at eight epochs spanning one year. The calibrator, J2033+4000, was resolved on the longest baselines, and we only used seven antennas (stations codes: BR, FD, KP, LA, NL, OV, PT) with a maximum baseline length of 2300 km. We "nodded" the array between the two sources, changing sources every 20 sec, in order to transfer phase from J2033+4000 to Cyg X-3 within the interferometer coherence time, limited by rapid fluctuations in water vapor. We also calibrated the slowly varying (hours time-scale) changes in total water vapor above each antenna by observing "geodetic-like" blocks of quasars at 24 GHz across the sky. These and other calibration methods are described in detail in Reid et al. (2009). The data were correlated using the VLBA DiFX software correlator (Deller et al., 2011), and analyzed using the Astronomical Image Processing System (Greisen, 2003). Fig. 2 shows a representative image of Cyg X-3 from observations on 2017 May 14. The northern bright spot was clearly visible at all epochs and served as the astrometric point for the parallax measurement. Table 1 gives the dates of the observation, and measured positions and brightnesses for Cyg X-3 obtained by fitting a Gaussian brightness distribution to the northern spot. Figure 2: VLBA contour map of Cyg X-3 at 43 GHz on 2017 May 14 when the source was weakest. Contour levels are -0.5, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0 mJy. A 0.5 mas FWHM beam is shown as the shaded disk at the bottom left. The brightest component, near \((0.2,-0.5)\) mas, was used for fitting the parallax. The relative positions in Table 1 were modeled by a trigonometric parallax signature and a linear proper motion and fitted by variance-weighted least squares. In order to account for delay errors from uncompensated tropospheric water vapor, we added "error floors" in quadrature to the formal uncertainties in the East and North offsets. The error floors were adjusted to \((\sigma_{E},\sigma_{N})=(\pm 0.014,\pm 0.070)\) mas to give a reduced chi-squared per degree of freedom near unity in each coordinate. 
The reason for the error floor in the northerly direction, \(\sigma_{N}\), being five-times larger than in the easterly direction, \(\sigma_{E}\), is likely due to unresolved jitter in the core position owing to weak jetted emission in the North-South direction. Fig. 3 displays the parallax data and fits, as sky positions for all epochs, as well as East and North offsets as a function of time. The best-fit parallax is \(0.1034\pm 0.0054\) mas. With just a 5% uncertainty, we can simply invert the parallax to determine the distance, without the need for a prior. This gives a distance of \(9.67^{+0.53}_{-0.48}\) kpc. The eastward and northward components of proper motion are \(-2.589\pm 0.014\) and \(-3.747\pm 0.069\) mas y\({}^{-1}\). ### 3D Kinematic Distance The proper motion of Cyg X-3 had been measured relative to compact extragalactic sources by Miller-Jones et al. (2009) to be \(\mu_{\alpha}=-2.73\pm 0.06\) mas y\({}^{-1}\) y and \(\mu_{\delta}=-3.70\pm 0.06\) mas y\({}^{-1}\) using mostly Very Large Array A-configuration observations spanning 1983 to 2006 at 8.4 \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{Date} & East Offset & North Offset & Brightness \\ & (mas) & (mas) & (mJy beam\({}^{-1}\)) \\ \hline 2016.806 & \(1.495\pm 0.003\) & \(1.702\pm 0.005\) & \(18.5\pm 0.2\) \\ 2016.836 & \(1.425\pm 0.004\) & \(1.548\pm 0.007\) & \(19.0\pm 0.4\) \\ 2017.308 & \(0.397\pm 0.001\) & \(-0.072\pm 0.002\) & \(40.2\pm 0.3\) \\ 2017.325 & \(0.346\pm 0.006\) & \(-0.159\pm 0.007\) & \(15.3\pm 0.4\) \\ 2017.344 & \(0.291\pm 0.004\) & \(-0.319\pm 0.005\) & \(13.6\pm 0.2\) \\ 2017.368 & \(0.266\pm 0.007\) & \(-0.441\pm 0.010\) & \(6.7\pm 0.2\) \\ 2017.815 & \(-1.126\pm 0.003\) & \(-2.146\pm 0.004\) & \(16.8\pm 0.2\) \\ 2017.847 & \(-1.185\pm 0.005\) & \(-2.180\pm 0.007\) & \(14.2\pm 0.4\) \\ \hline \end{tabular} Note. – Column 1 gives the date of the observation. Columns 2 and 3 give the measured position offsets of Cyg X-3 relative to J2033+4000, after removing a constant angular difference assuming J2000 coordinates of (20:32:25.76955,+40:57:27.8820) for Cyg X-3 and (20:33:03.671208,+40:00:24.40818) for J2033+4000. Column 4 gives the peak brightness obtained for Cyg X-3 by fitting a Gaussian distribution. Typical beam sizes were 0.5 mas FWHM. All errors are formal 1\(\sigma\) fitting uncertainties and do not include systematic errors. \end{table} Table 1: Parallax Data for Cyg X-3 GHz (or higher frequencies). This motion is in reasonable agreement with our more accurate measurement given above. The line-of-sight velocity of this binary is very poorly constrained, owing to a combination of high visual extinction and the infrared emission lines arising from differing locations in the turbulent wind of the Wolf-Rayet primary, which cannot therefore be used to determine the radial velocity of the system itself (e.g. Koljonen & Maccarone, 2017). Here we adopt a very loose prior on \(V_{\rm LSR}\) of \(-50\pm 50\) km s\({}^{-1}\), which is essentially consistent with it being a Galactic source toward a longitude of \(\approx 80^{\circ}\). Fig. 4 displays the likelihood functions for the three components of motion of Cyg X-3. While the likelihood for its \(V_{\rm LSR}\) provides no useful constraint on distance, the proper motion component in Galactic longitude strongly (and the latitude component weakly) constrains distance to be \(8.95\pm 0.96\) kpc. 
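As a quick consistency check between the two estimates, the parallax from Section 4.1 can be inverted directly (simple inversion is adequate at \(\sim\)5% parallax precision) and compared with the kinematic value; the few lines below are only illustrative and reproduce the quoted numbers:

```
# Invert the measured parallax and compare with the 3D kinematic distance.
par, err = 0.1034, 0.0054                 # parallax and its uncertainty, mas
d = 1.0 / par                             # kpc
d_plus = 1.0 / (par - err) - d            # +1 sigma
d_minus = d - 1.0 / (par + err)           # -1 sigma
print(f"parallax distance: {d:.2f} +{d_plus:.2f} / -{d_minus:.2f} kpc")

d_kin, sig_kin = 8.95, 0.96               # kpc, from the proper-motion fit above
sigma_comb = (sig_kin**2 + (0.5 * (d_plus + d_minus))**2) ** 0.5
print(f"difference: {d - d_kin:.2f} kpc ({abs(d - d_kin) / sigma_comb:.1f} sigma)")
```

The two estimates agree to well within their combined uncertainties, consistent with the low peculiar velocity argued for below.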
Since the Cyg X-3 binary contains a Wolf-Rayet star, which lives Myr, it should be very near its birth location inside a spiral arm of the Milky Way. Fig. 5 shows the latest model of the spiral arms of the Milky Way by Reid et al. (2019), which is based on parallax measurements of \(\approx 150\) massive, young stars. The parallax (and kinematic) distance of Cyg X-3 places it in the Outer spiral arm, which has a Galactic latitude of \(1\fdg 7\) near the \(79\fdg 85\) longitude of Cyg X-3. The latitude of Cyg X-3 differs from that of other Outer arm by about \(1^{\circ}\), corresponding to about 170 pc at 9.67 kpc distance, which is well within the (Gaussian \(1\sigma\)) vertical width of that arm of about 200 pc (extrapolated from figure 4 of Reid et al., 2019). Figure 3: Parallax data and fits for Cyg X-3. _Left Panel:_ Sky view with East and North offsets. _Middle Panel:_ East (data and fitted solid line in red) and North (data and fitted dashed line in blue) positions vs. time. _Right Panel:_ Same as middle panel, but with fitted proper motion removed to highlight the parallax effect. One-sigma error bars include systematic uncertainty added in quadrature with the formal fitting uncertainties, yielding chi-squared per degree of freedom near unity in each coordinate. While Cyg X-3 does not have a reliable line-of-sight velocity, its proper motion has been accurately measured and can be compared to those of four massive, young stars with maser astrometry that straddle Cyg X-3 in Galactic longitude and are known to be in the Outer spiral arm, whose locations are shown in Fig. 5. Fig. 6 shows the easterly and northerly proper motions of these stars. Note that both components of motion for Cyg X-3 are consistent with interpolations between the sources, which bracket Cyg X-3 in Galactic longitude. This further supports the association of Cyg X-3 with the Outer arm. Given the strong evidence that Cyg X-3 formed recently in the Outer spiral arm of the Milky Way, we now examine evidence that can constrain its \(V_{\rm LSR}\). The four massive, young stars which straddle Cyg X-3 and have consistent proper motions, have line-of-sight velocities that range from \(-82<V_{\rm LSR}<-58\) km s\({}^{-1}\). We now calculate the pecular motion of Cyg X-3 as a function of its (unknown) line-of-sight velocity and display this in Fig. 7. The magnitude of the peculiar motion is less than 20 km s\({}^{-1}\) for \(-82<V_{\rm LSR}<-47\) km s\({}^{-1}\), similar to the range of \(V_{\rm LSR}\) for the four young stars, and there is a clear minimum for the 3D peculiar motion of Cyg X-3 near \(V_{\rm LSR}=-64\) km s\({}^{-1}\). Note that our parallax distance and uncertainty, would yield standard (1D) kinematic distance for \(V_{\rm LSR}=-64\pm 5\) km s\({}^{-1}\). All together there is strong circumstantial evidence that Cyg X-3 has a small peculiar motion (\(\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\mathchar 536$}\hss}\raise 2.0pt \hbox{$\mathchar 318$}}20\) km s\({}^{-1}\)), suggesting a small natal kick when its compact star formed. Indeed, the small value (\(<1\) km s\({}^{-1}\)) of the peculiar motion component toward the North Galactic Figure 4: Likelihoods for three components of motion, \(V_{\rm LSR}\) in (_blue_), Galactic longitude (_red solid line_) and latitude (_red dashed line_), as a function of distance for Cyg X-3. The product of the three likelihoods is shown in _black_, indicating a distance of \(8.95\pm 0.96\) kpc. 
The parallax distance reported in this paper is indicated with the \(\pi\) symbol, and its 68% and 95% confidence ranges are indicated with _solid_ and _dotted lines_ above it. The distance range for the Outer spiral arm along the line-of-sight is indicated below the distance axis. Pole, which is nearly independent of the \(V_{\rm LSR}\) of Cyg X-3, supports this conclusion, since it is _a priori_ unlikely for a natal kick to be entirely in the Galactic plane. ## 5 Discussion ### Grs 1915+105 The 3D kinematic distance estimate for GRS 1915+105 is consistent with, but more precise than, the parallax distance determined by Reid et al. (2014). As noted in that work, the distance affects the inferred black hole mass through its effect on the inclination of the orbit (assumed to be aligned with the jets). The proper motions \(\mu_{\rm app}\) and \(\mu_{\rm rec}\) of intrinsically symmetric approaching and receding jet ejecta can be used to constrain the product \(\beta\cos i\), where \(\beta\) is the jet speed normalized to the speed of light, and \(i\) is the inclination angle of the jet axis to the line of sight. With an accurate distance, these two parameters can be disentangled via \[\tan i=\frac{2d}{c}\frac{\mu_{\rm app}\mu_{\rm rec}}{\mu_{\rm app}-\mu_{\rm rec }}. \tag{1}\] Figure 5: Plan-view model of the Milky Way after Reid et al. (2019), indicating the locations of Cyg X-3 (_black dot_) and four massive, young stars with water masers (_red letters_) associated with the Outer spiral arm (_red lines_) between Galactic longitudes 70\({}^{\circ}\) and 100\({}^{\circ}\) and with parallax distances from VLBI astrometry. A is G097.53+3.18 (Hachisuka et al., 2015; Reid et al., 2019), B is G095.05+3.97 (Sakai et al., 2020), C is G075.30+1.32 (Sanna et al., 2012), and D is G073.65+0.19 (Reid et al., 2019). \(V_{\rm LSR}\) values in km s\({}^{-1}\) for the masers are indicated next to their positions; each is uncertain by about \(\pm 10\) km s\({}^{-1}\). The larger distance of GRS 1915+105 inferred from its 3D kinematics implies a larger inclination angle for the jet axis. Using a census of paired ejecta with accurate proper motion measurements from Miller-Jones et al. (2007), we find a weighted mean inclination angle of \(64^{\circ}\pm 4^{\circ}\). The increase in the inferred inclination angle implies a higher jet speed for a given set of proper motion measurements, with the proper motions of Miller-Jones et al. (2007) giving jet speeds ranging from 0.68-0.91\(c\). Given the \(\sin^{3}i\) dependence of the mass function on inclination, our higher inclination (relative to the \(60^{\circ}\pm 5^{\circ}\) determined from the parallax distance estimate by Reid et al., 2014) would translate to a slight reduction in the inferred black hole mass, from 12.4 \(M_{\odot}\) to 11.2 \(M_{\odot}\). This reduction in inferred black hole mass makes GRS 1915+105 less of an outlier relative to the black hole mass distribution of the low-mass X-ray binary population, as estimated by Farr et al. (2011) and Kreidberg et al. (2012). As shown by Dhawan et al. (2007), the peculiar velocity of GRS 1915+105 is minimized at distances of between 8 and 10 kpc, such that our 3D kinematic distance does not significantly impact the calculated non-circular motion of the system. At only 36 pc from the Galactic plane, and with a peculiar velocity of \(\sim 20\) km s\({}^{-1}\), GRS 1915+105 is likely to have formed either via direct collapse, or in a supernova with a very low natal kick. 
Indeed, determining the potential kick velocity via the method of Atri et al. (2019), we find a median of 32 km s\({}^{-1}\) Figure 6: Proper motions of four massive, young stars with water masers (_red symbols_) and Cyg X-3 (_black symbols_) as a function of Galactic longitude. Source names are given below their measurements along with letter codes used in Fig. 5(Sanna et al., 2012; Hachisuka et al., 2015; Reid et al., 2019; Sakai et al., 2020). _Open circles_ indicate motions in the easterly direction and _filled squares_ indicate motions in the northerly direction. The young stars, which are associated with the Outer spiral arm of the Milky Way and straddle Cyg X-3 in Galactic longitude, have motions consistent with that of Cyg X-3. with a 90% confidence interval of 17-65 km s\({}^{-1}\). This is comparable to the lowest inferred natal kicks of any low-mass X-ray binary. ### Cyg X-3 Under the reasonable assumption (supported by strong circumstantial evidence, as detailed in Section 4.2) that the peculiar velocity of Cyg X-3 is small, we determine a 3D kinematic distance of \(8.95\pm 0.96\) kpc, in good agreement with the independently-determined trigonometric parallax measurement of \(9.67^{+0.53}_{-0.48}\) kpc. This provides confidence in the distance determination and validates the effectiveness of the 3D kinematic distance method (Reid, 2022) for sources with a low peculiar velocity. While early distance estimates for Cyg X-3 (Dickey, 1983; Predehl et al., 2000) placed the source at a distance of \(\sim 10\) kpc, more recent dust scattering measurements by Ling et al. (2009) favored a lower distance of \(7.2^{+0.3}_{-0.5}\) kpc. However, as noted by the authors, this measurement was highly sensitive to the distance of the Cyg OB2 association (assumed as 1.7 kpc), and more recent _Gaia_ data (Berlanas et al., 2019) have shown the cluster to be slightly more distant, at \(\sim 1.76\) kpc. Extended X-ray emission which varied on the orbital period of Cyg X-3 was found to arise from X-ray scattering by a Bok globule along the line of sight (McCollough et al., 2013). Standard 1D kinematic distances to the globule with \(V_{\rm LSR}=-47.5\) km s\({}^{-1}\) are either \(6.1\pm 0.6\) or \(7.8\pm 0.6\) kpc. Modeling the time delay of the scattered light Figure 7: Peculiar (non-circular) components of motion of Cyg X-3 as a function of its (unknown) line-of-sight velocity component (\(V_{\rm LSR}\)): \(U_{pec}\) toward the Galactic center (_red_), \(V_{pec}\) in the direction of Galactic rotation (_blue_), and \(W_{pec}\) toward the North Galactic Pole (_green_). The 3D magnitude of the peculiar motion is plotted in _black_ and has a minimum of 8 km s\({}^{-1}\) at \(V_{\rm LSR}=-64\) km s\({}^{-1}\). The \(V_{\rm LSR}\) range for the motion magnitude being \(>20\) km s\({}^{-1}\) is shown by _dot-dashed lines_. Our parallax distance of 9.67 kpc is assumed. curve yielded possible distances to Cyg X-3 of either \(7.4\pm 1.1\) and \(10.2\pm 1.2\) kpc, at 62 and 38% probability, respectively (McCollough et al., 2016). The farther distance estimate would be fully consistent with our measurement. Since our distance measurement is consistent with that of earlier works (Dickey, 1983; Predehl et al., 2000), the update does not significantly change jet velocities calculated by Mioduszewski et al. (2001) or Miller-Jones et al. (2004). 
However, Koljonen & Maccarone (2017) noted that a distance of \(\sim 10\) kpc would imply a slight increase in the inferred mass of the Wolf-Rayet donor, to a range of 11-14 \(M_{\odot}\). Recent X-ray polarization measurements from the Imaging X-ray Polarimetry Explorer (IXPE) suggested that the central compact object in Cyg X-3 is highly obscured (Veledina et al., 2023), likely due to an optically-thick envelope which surrounds a narrow funnel, whose walls allow reflected and scattered light to escape. Unless the opening angle of the funnel was very small (\(\lesssim 16^{\circ}\)), the inferred geometry would suggest an intrinsic luminosity exceeding the Eddington limit, even for an accretor in excess of \(20M_{\odot}\). Since these calculations were based on the lower distance of Ling et al. (2009), these inclination angle limits should be even more stringent, strengthening the argument that Cyg X-3 would be an ultraluminous X-ray source if observed face-on. Furthermore, recent work by Koljonen et al. (2023) suggested a spatial and temporal association between gamma-ray flaring in Cyg X-3 and IceCube neutrino detections, raising the possibility that protons could be accelerated to highly-relativistic energies within the jets of this system, and making Cyg X-3 a possible source of cosmic rays. Our new distance determination would allow for a more accurate assessment of the proton luminosity, and hence the potential cosmic ray contribution of microquasar systems. ## 6 Conclusions and Outlook X-ray binary distance measurements are crucial to understanding their nature, allowing us to determine their underlying physical parameters. For Cyg X-3, we have measured a highly accurate trigonometric parallax distance of \(9.67^{+0.53}_{-0.48}\) kpc. To date, this is the most distant X-ray binary with a radio parallax measurement, demonstrating the potential of VLBI-measured parallaxes for bright sources, even at large distances along highly scatter-broadened lines-of-sight. Our refined distance measurement strengthens the argument that Cyg X-3 would appear as an ultraluminous X-ray source if viewed face-on. Using Cyg X-3's measured proper motion, we determine a 3D kinematic distance of \(8.95\pm 0.96\) kpc, which is consistent with the more accurate parallax distance, demonstrating that 3D kinematics can provide reliable distance estimates for X-ray binaries with low peculiar velocities. Both the parallax and kinematic distance locate the system within the Outer spiral arm of the Galaxy. Its proper motion is consistent with those of young, massive stars in the same region, which have LSR velocities near \(-70\) km s\({}^{-1}\), suggesting that Cyg X-3 has a similar LSR velocity and supporting a small peculiar velocity of \(<20\) km s\({}^{-1}\). We also estimated a 3D kinematic distance for GRS 1915+105 of \(9.4\pm 0.6\) (stat.) \(\pm\) 0.8 (sys.) kpc. This distance is consistent with, but more precise than, the previous parallax result of Reid et al. (2014). At 9.4 kpc, GRS 1915+105 would have a slightly higher inclination angle, and hence lower black hole mass, than previously suggested. This work underscores the importance of high-precision astrometric measurements of X-ray binary systems, even in cases where a parallax distance measurement is not possible (e.g., due to the lack of a close calibrator, a large distance, or significant scatter broadening).
When coupled with a line-of-sight radial velocity measurement, they can provide reliable 3D kinematic distances for sources with low peculiar velocities. We thank Arash Bahramian for useful discussions and assistance with fitting jet parameters. This work made use of the Swinburne University of Technology software correlator, developed as part of the Australian Major National Research Facilities Programme and operated under licence. This work has made use of NASA's Astrophysics Data System. VLBA AIPS (Greisen, 2003)
2309.12990
A Review of Bayesian Methods for Infinite Factorisations
Defining the number of latent factors has been one of the most challenging problems in factor analysis. Infinite factor models offer a solution to this problem by applying increasing shrinkage on the columns of factor loading matrices, thus penalising increasing factor dimensionality. The adaptive MCMC algorithms used for inference in such models allow the dimension of the latent factor space to be inferred automatically from the data. This paper presents an overview of Bayesian models for infinite factorisations with some discussion on the properties of such models as well as their comparative advantages and drawbacks.
Margarita Grushanina
2023-09-22T16:42:08Z
http://arxiv.org/abs/2309.12990v1
# A Review of Bayesian Methods for Infinite Factorisations ###### Abstract Defining the number of latent factors has been one of the most challenging problems in factor analysis. Infinite factor models offer a solution to this problem by applying increasing shrinkage on the columns of factor loading matrices, thus penalising increasing factor dimensionality. The adaptive MCMC algorithms used for inference in such models allow the dimension of the latent factor space to be inferred automatically from the data. This paper presents an overview of Bayesian models for infinite factorisations with some discussion on the properties of such models as well as their comparative advantages and drawbacks. **Keywords:** Factor analysis, adaptive Gibbs sampling, spike-and-slab prior, Indian buffet process, multiplicative gamma process, increasing shrinkage ## 1 Introduction Latent factor models represent a popular tool for data analysis in many areas of science, including psychology, marketing, economics, finance, genetic research, pharmacology and medicine. Their history dates back to Spearman (1904), who first suggested common factor analysis as a single factor model in the context of psychology. Thurstone (1931) and Thurstone (1934) extended it to multiple common factors and introduced some important factor analysis concepts, such as communality, uniqueness, and rotation. Anderson and Rubin (1956) in their seminal paper established important theoretical foundations of latent factor analysis. Since then there has been a vast and constantly growing pool of literature covering various theoretical and practical aspects of factor analysis. Some selective reviews include, for example, Barhoumi et al. (2013) and Stock and Watson (2016) for dynamic factor models, Bai and Wang (2016) for large factor models, and Fan et al. (2021) for factor models in application to econometric learning. Recent years have also seen considerable research in the area of Bayesian latent factor models. Some of the many important contributions in this area include Geweke and Zhou (1996), Aguilar and West (2000), West (2003), Lopes and West (2004), Fruhwirth-Schnatter and Lopes (2010), Conti et al. (2014), Rockova and George (2016), Kaufmann and Schumacher (2019) and Fruhwirth-Schnatter et al. (2022a). One of the most challenging tasks in factor analysis concerns the inference of the true number of latent factors in the model. The most common approach in the literature has long been to use various criteria to choose a model with the correct number of factors. Thus, Bai and Ng (2002) use information criteria to compare models with different factors' cardinalities. Kapetanios (2010) performs model comparison using test statistics, while Polasek (1997) and Lopes and West (2004) rely on marginal likelihood estimation to determine the true number of factors in the model. Carvalho et al. (2008) perform an evolutionary stochastic model search which iteratively increases the model by an additional factor until reaching some pre-specified limit or until the process stops including additional factors. As a different approach, Lopes and West (2004) customise a reversible jump MCMC (RJMCMC) algorithm introduced in Green (1995) for moving between models with different numbers of factors, while Fruhwirth-Schnatter and Lopes (2018) suggest a one-sweep algorithm to estimate the true number of factors from an overfitting factor model.
However, such methods are often computationally demanding, especially when the dimensionality of the analysed data set is high. Recently, another approach has been developed which allows the factors' cardinality to be derived from data by letting the number of factors be potentially infinite. The dimension reduction is then achieved by assigning a nonparametric prior to factor loadings which penalises the increase of the number of columns in the factor loading matrix via increasing shrinkage of the factor loadings on each additional factor to zero. Thus, in their pioneering work, Bhattacharya and Dunson (2011) introduced the multiplicative gamma process (MGP) prior on the precision of factor loadings, which is defined as a cumulative product of gamma distributions. Knowles and Ghahramani (2011) and Rockova and George (2016) employed the Indian Buffet Process (IBP) to enforce sparsity on factor loadings and at the same time penalise the increasing dimensionality of latent factors. Legramanti et al. (2020) introduced the cumulative shrinkage process (CUSP) prior which applies cumulative shrinkage on the increasing number of columns of the factor loading matrix via a sequence of spike-and-slab distributions. Model inference is usually performed via Gibbs sampler steps; however, the models' changing dimensions at different iterations of the sampler require the use of adaptive algorithms, which have some specific properties that need to be taken into account. This paper provides a review of the methods for infinite factorisations, with a focus on their properties, comparative advantages and drawbacks. The paper proceeds as follows: Section 2 briefly reviews the formulation of a Bayesian factor model and a shrinkage prior on factor loadings. Sections 3 - 5 provide an insight into the three above mentioned priors for infinite factorisations, namely, MGP, CUSP and IBP priors, and outline their main advantages and drawbacks. Section 6 reviews the concept of generalized infinite factorization models. Section 7 concludes with a discussion. ## 2 Bayesian infinite factor model ### Bayesian latent factor model In traditional Bayesian factor analysis, data on \(p\) related variables are assumed to arise from a multivariate normal distribution \(\mathbf{y}_{t}\sim N_{p}(\mathbf{0},\mathbf{\Omega})\), where \(\mathbf{y}_{t}\) is the \(t\)-th of the \(T\) observations and \(\mathbf{\Omega}\) is the unknown covariance matrix of the data. A factor model represents each observation \(\mathbf{y}_{t}\) as a linear combination of \(K\) common factors \(\mathbf{f}_{t}=(f_{1t},\ldots,f_{Kt})^{T}\): \[\mathbf{y}_{t}=\mathbf{\Lambda}\mathbf{f}_{t}+\mathbf{\epsilon}_{t}, \tag{1}\] where \(\mathbf{\Lambda}\) is an unknown \(p\times K\) factor loading matrix with factor loadings \(\lambda_{ih}\) (\(i=1,\ldots,p\), \(h=1,\ldots,K\)) and it is typically assumed that \(K\ll p\). Often, the latent factors are assumed to be orthogonal and follow a normal distribution \(\mathbf{f}_{t}\sim N_{K}(\mathbf{0},\mathbf{I}_{K})\).
Furthermore, it is assumed that the factors \(\mathbf{f}_{t}\) and \(\mathbf{f}_{s}\) are pairwise independent for \(t\neq s\). The idiosyncratic errors \(\epsilon_{t}\) are also assumed normal and pairwise independent: \[\epsilon_{t}\sim N_{p}(\mathbf{0},\mathbf{\Sigma}),\qquad\mathbf{\Sigma}=\mathrm{diag}(\sigma_{1}^{2},\ldots,\sigma_{p}^{2}).\] These assumptions allow the covariance matrix of the data to be represented in the following way: \[\mathbf{\Omega}=\mathbf{\Lambda}\mathbf{\Lambda}^{T}+\mathbf{\Sigma}. \tag{2}\] There are many different ways to choose a prior for the elements of the factor loading matrix \(\mathbf{\Lambda}\). A typical choice involves a version of a normal prior \(\lambda_{ih}\sim N(d_{ih}^{0},D_{ih}^{0})\) for the reason of conjugacy. The hyperparameter \(d_{ih}^{0}\) is often chosen to be equal to zero. This has the additional advantage that, with a suitably chosen hyperprior for \(D_{ih}^{0}\), such a setting can result in a sparse \(\mathbf{\Lambda}\) with many zero elements, which is justified for many applications of factor models. To ensure identifiability, it is often assumed that \(\mathbf{\Lambda}\) has a full rank lower triangular structure, which imposes a choice of a truncated normal prior for the diagonal elements of \(\mathbf{\Lambda}\) to ensure positivity and a normal prior for the lower diagonal elements (see, e.g. Geweke and Zhou (1996), Lopes and West (2004), Ghosh and Dunson (2009), amongst others). The idiosyncratic variances \(\sigma_{i}^{2}\) are usually assigned an inverse Gamma prior \(\sigma_{i}^{2}\sim\mathcal{G}^{-1}(c_{0i},C_{0i})\), mainly for reasons of conditional conjugacy. ### Standard Gibbs sampler Inference is usually performed via a Gibbs sampler, sequentially sampling factor loadings, idiosyncratic variances and factors from their respective conditional distributions. These steps are rather generic for a wide range of factor models and choices of parameters. Assuming that the data is explained by \(K\) latent factors and that in the normal prior for the elements of the factor loading matrix \(d_{ih}^{0}=0\), the Gibbs sampler steps for updating \(\mathbf{\Lambda}\), \(\mathbf{\Sigma}\) and \(\boldsymbol{F}=\{\boldsymbol{f}_{t}:t=1,\ldots,T\}\) will look as follows: _Step 1._ Sample \(\boldsymbol{\lambda}_{i}\) for \(i\) in \((1,\ldots,p)\) from \[\boldsymbol{\lambda}_{i}^{T}|-\sim N_{K}\left((\boldsymbol{\Psi}_{i}^{-1}+\sigma_{i}^{-2}\boldsymbol{F}\boldsymbol{F}^{T})^{-1}\boldsymbol{F}\sigma_{i}^{-2}\boldsymbol{y}_{i}^{T},(\boldsymbol{\Psi}_{i}^{-1}+\sigma_{i}^{-2}\boldsymbol{F}\boldsymbol{F}^{T})^{-1}\right)\] where \(\boldsymbol{\Psi}_{i}=\mathrm{diag}(D_{i1}^{0},\ldots,D_{iK}^{0})\) and \(\boldsymbol{\lambda}_{i}\) is the \(i\)th row of the factor loading matrix \(\mathbf{\Lambda}\). _Step 2._ Sample \(\sigma_{i}^{-2}\) for \(i\) in \((1,\ldots,p)\) from \[\sigma_{i}^{-2}|-\sim\mathcal{G}\left(c_{0i}+\frac{T}{2},C_{0i}+\frac{1}{2}\sum_{t=1}^{T}(y_{it}-\boldsymbol{\lambda}_{i}^{T}\boldsymbol{f}_{t})^{2}\right).\] _Step 3._ Sample \(\boldsymbol{f}_{t}\) for \(t\) in \((1,\ldots,T)\) from \[\boldsymbol{f}_{t}|-\sim N_{K}\left((\boldsymbol{I}_{K}+\boldsymbol{\Lambda}^{T}\mathbf{\Sigma}^{-1}\mathbf{\Lambda})^{-1}\mathbf{\Lambda}^{T}\mathbf{\Sigma}^{-1}\boldsymbol{y}_{t},(\boldsymbol{I}_{K}+\boldsymbol{\Lambda}^{T}\mathbf{\Sigma}^{-1}\mathbf{\Lambda})^{-1}\right)\] where \(\mathbf{\Sigma}=\mathrm{diag}(\sigma_{1}^{2},\ldots,\sigma_{p}^{2})\).
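In code, one sweep of Steps 1-3 might look as follows. This is a minimal numpy sketch for a fixed \(K\), assuming \(d_{ih}^{0}=0\) and storing the prior variances \(D_{ih}^{0}\) in an array `Psi`; it is meant only to illustrate the conditional distributions above, not as an implementation taken from any of the cited papers.

```python
import numpy as np

def gibbs_sweep(Y, Lambda, Sigma_diag, F, Psi, c0, C0, rng):
    """One sweep of Steps 1-3 for a K-factor model.

    Y : (p, T) data, Lambda : (p, K) loadings, Sigma_diag : (p,) idiosyncratic
    variances, F : (K, T) factors, Psi : (p, K) prior variances D_ih^0,
    c0, C0 : inverse-gamma hyperparameters (scalars or length-p arrays).
    """
    p, T = Y.shape
    K = Lambda.shape[1]

    # Step 1: loadings, one row of Lambda at a time
    for i in range(p):
        prec = np.diag(1.0 / Psi[i]) + F @ F.T / Sigma_diag[i]
        cov = np.linalg.inv(prec)
        mean = cov @ (F @ Y[i] / Sigma_diag[i])
        Lambda[i] = rng.multivariate_normal(mean, cov)

    # Step 2: idiosyncratic precisions (sampled, then inverted to variances)
    resid = Y - Lambda @ F
    Sigma_diag = 1.0 / rng.gamma(c0 + T / 2.0,
                                 1.0 / (C0 + 0.5 * (resid ** 2).sum(axis=1)))

    # Step 3: factors, one observation at a time
    Lt_Sinv = Lambda.T / Sigma_diag              # Lambda^T Sigma^{-1}
    cov_f = np.linalg.inv(np.eye(K) + Lt_Sinv @ Lambda)
    for t in range(T):
        F[:, t] = rng.multivariate_normal(cov_f @ (Lt_Sinv @ Y[:, t]), cov_f)

    return Lambda, Sigma_diag, F
```

Each call performs a single sweep; in practice these updates are iterated many times and combined with the model-specific steps discussed below.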
Additional steps can be added to update hyperparameters if hyperpriors are assigned to any of the parameters of the prior distributions for \(\lambda_{ih}\) and \(\sigma_{i}^{2}\). ### Infinite factorisations and increasing shrinkage of the prior for \(\boldsymbol{\Lambda}\) In the Gibbs sampler steps described above, we take the number of latent factors \(K\) as known. In reality, this is rarely the case and determining the plausible number of latent factors can be a difficult and time consuming problem, especially in high-dimensional data sets. In the last decade, there has been a rise in the literature using a different approach towards determining the number of latent factors. This approach assumes that a factor model can in theory include infinitely many factors, i.e. the factor loading matrix \(\boldsymbol{\Lambda}\) can be comprised of infinitely many columns. This means that \(\boldsymbol{\Lambda}\) is seen as a parameter-expanded factor loading matrix with redundant parameters. More formally, if \(\boldsymbol{\Theta}_{\Lambda}\) denotes the collection of all matrices \(\boldsymbol{\Lambda}\) with \(p\) rows and infinitely many columns, then the product \(\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{T}\) is a \(p\times p\) matrix with all entries finite if and only if \(\boldsymbol{\Lambda}\) belongs to the set1: Footnote 1: This follows from the Cauchy-Schwarz inequality, see the proof in Bhattacharya and Dunson (2011). \[\boldsymbol{\Theta}_{\Lambda}=\Big{\{}\boldsymbol{\Lambda}=(\lambda_{ih}),\;i=1,\ldots,p,\;h=1,\ldots,\infty,\;\max_{1\leq i\leq p}\sum_{h=1}^{\infty}\lambda_{ih}^{2}<\infty\Big{\}}\] The prior on the elements of \(\boldsymbol{\Lambda}\) is defined in such a way that it allows the \(\lambda_{ih}\)s to decrease in magnitude as the column index \(h\) grows, thus penalising the increasing factor dimensionality. This approach allows the number of factors to be derived automatically from data via an adaptive inference algorithm. In the next sections we discuss the most notable methods for infinite factorisations in detail. ## 3 Multiplicative gamma process prior ### The prior specification In their seminal paper, Bhattacharya and Dunson (2011) proposed one way to choose a prior on the elements of a factor loading matrix so as to penalise the effect of additional columns: the \(\lambda_{ih}\)s are given a normal prior centred at zero, while the prior precisions of the \(\lambda_{ih}\)s for each \(h\) are defined as a cumulative product of gamma priors. The MGP prior can be formalised as follows: \[\lambda_{ih}|\phi_{ih},\tau_{h}\sim N(0,\phi_{ih}^{-1}\tau_{h}^{-1}),\qquad\phi_{ih}\sim\mathcal{G}(\nu_{1}/2,\nu_{2}/2),\qquad\tau_{h}=\prod_{l=1}^{h}\delta_{l}, \tag{3}\] \[\delta_{1}\sim\mathcal{G}(a_{1},b_{1}),\qquad\delta_{l}\sim\mathcal{G}(a_{2},b_{2}),\quad l\geq 2,\] where the \(\delta_{l}\) (\(l=1,\ldots,\infty\)) are independent, \(\tau_{h}\) is a global shrinkage parameter for the \(h\)-th column, and the \(\phi_{ih}\) are local shrinkage parameters for the elements of the \(h\)-th column. The condition \(a_{2}>1\) is imposed on the shape parameter of the prior for \(\delta_{l}\) to ensure that the \(\tau_{h}\)s are stochastically increasing with increasing \(h\). In Bhattacharya and Dunson (2011), \(b_{1}\) and \(b_{2}\) are set at \(1\), while \(a_{1}\) and \(a_{2}\) are assigned the hyperprior \(\mathcal{G}(2,1)\) and sampled in a Metropolis-within-Gibbs step. ### Inference and adaptive Gibbs sampler The inference is done via a Gibbs sampler with a few additional steps to the standard ones described in Section 2.2.
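Before detailing those additional steps, a small sketch of drawing the column precisions in (3) illustrates the increasing-shrinkage behaviour implied by the prior; the hyperparameter values below are illustrative only (chosen so that \(a_{2}>1\)).

```python
import numpy as np

rng = np.random.default_rng(0)
p, H = 10, 20                        # variables, number of columns drawn
a1, b1, a2, b2 = 2.0, 1.0, 3.0, 1.0  # illustrative values with a2 > 1
nu1, nu2 = 3.0, 3.0

# Global (column-wise) precisions: tau_h = prod_{l<=h} delta_l
delta = np.concatenate(([rng.gamma(a1, 1.0 / b1)],
                        rng.gamma(a2, 1.0 / b2, size=H - 1)))
tau = np.cumprod(delta)

# Local precisions phi_ih and the implied loadings lambda_ih ~ N(0, 1/(phi*tau))
phi = rng.gamma(nu1 / 2.0, 2.0 / nu2, size=(p, H))
Lambda = rng.normal(0.0, 1.0 / np.sqrt(phi * tau), size=(p, H))

# Column-wise magnitude of the drawn loadings: tends to decay with h
print(np.round(np.abs(Lambda).mean(axis=0), 3))
```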
A distinctive feature of the sampler suggested in Bhattacharya and Dunson (2011) is that it truncates the factor loading matrix \(\boldsymbol{\Lambda}\) to have \(k^{*}\) columns, where \(k^{*}\) is the number of factors supported by the data at each given iteration of the sampler. The truncation procedure deserves some closer attention. Although theoretically the number of factors is allowed to be infinitely large, in reality one chooses a suitable level of truncation \(k^{*}\), designed to be large enough not to miss any important factors, but also not too conservative to induce unnecessary computational effort. The sampler is initiated with a conservative guess \(K_{0}\), which is chosen to be substantially larger than the supposed actual number of factors. At each iteration of the sampler, the posterior samples of the factor loading matrix \(\boldsymbol{\Lambda}\) contain information about the effective number of factors supported by the data in the following way. Let \(m^{(g)}\) be the number of columns of \(\boldsymbol{\Lambda}\) at iteration \(g\) which have all their elements so small that they fall within some pre-specified neighbourhood of zero. Then these columns are considered redundant and \(k^{*(g)}=k^{*(g-1)}-m^{(g)}\) is defined to be the effective number of factors at iteration \(g\). To keep balance between dimensionality reduction and exploring the whole space of possible factors, \(k^{*}\) is adapted with probability \(p(g)=\exp(\alpha_{0}+\alpha_{1}g)\), with the parameters chosen so that the adaptation occurs more often at the beginning of the chain and decreases in frequency exponentially fast (the adaptation is designed to satisfy the diminishing adaptation condition in Theorem 5 of Roberts and Rosenthal (2007), which is necessary for convergence). When the adaptation occurs, the redundant factors are discarded and the corresponding columns are deleted from the loading matrix (together with all other corresponding parameters). If none of the columns appear redundant at iteration \(g\), a factor is added, with all its parameters sampled from the corresponding prior distributions. Adaptation is made to occur after a suitable burn-in period in order to ensure that the true posterior distribution is being sampled from before truncating the loading matrices. In the adaptive Gibbs sampler with the MGP prior on the factor loadings, the first three steps will be essentially the same as in Section 2.2, with two alterations: the number of factors \(K\) will be replaced by \(k^{*}\) and in Step \(1\)\(D_{i1}^{0},\ldots,D_{iK}^{0}\) will consequently be replaced by \(\phi_{i1}^{-1}\tau_{1}^{-1},\ldots,\phi_{ik^{*}}^{-1}\tau_{k^{*}}^{-1}\). The additional steps will have the following form: _Step 4._ Sample \(\phi_{ih}\) for \(i\) in \((1,\ldots,p)\) and \(h\) in \((1,\ldots,k^{*})\) from \[\phi_{ih}|-\sim\mathcal{G}\left(\frac{\nu_{1}+1}{2},\frac{\nu_{2}+\tau_{h} \lambda_{ih}^{2}}{2}\right)\;.\] _Step 5._ Sample \(\delta_{1}\) from \[\delta_{1}|-\sim\mathcal{G}\left(\frac{2a_{1}+pk^{*}}{2},1+\frac{1}{2}\sum_{l= 1}^{k^{*}}\tau_{l}^{(1)}\sum_{i=1}^{p}\phi_{il}\lambda_{il}^{2}\right)\;.\] Sample \(\delta_{h}\) for \(h\geq 2\) from \[\delta_{h}|-\sim\mathcal{G}\left(\frac{2a_{2}+p(k^{*}-h+1)}{2},1+\frac{1}{2} \sum_{l=h}^{k^{*}}\tau_{l}^{(h)}\sum_{i=1}^{p}\phi_{il}\lambda_{il}^{2}\right)\] where \(\tau_{l}^{(h)}=\prod_{t=1,t\neq h}^{l}\delta_{t}\) for \(h\) in \((1,\ldots,k^{*})\). 
_Step 6._ Sample the posterior densities of \(a_{1}|\delta_{1}\) and \(a_{2}|\delta_{2},\ldots,\delta_{k^{*}}\) via a random walk Metropolis-Hastings step with \(a_{1}^{p}\sim N(a_{1},s_{1}^{2})\) and \(a_{2}^{p}\sim N(a_{2},s_{2}^{2})\) serving as proposal quantities and the accep tance probabilities being: \[\rho_{a_{1}} =\frac{\Gamma(a_{1})}{\Gamma(a_{1}^{p})}\,\frac{a_{1}^{p}}{a_{1}}\, \delta_{1}^{a_{1}^{p}-a_{1}}\;e^{a_{1}-a_{1}^{p}},\] \[\rho_{a_{2}} =\left(\frac{\Gamma(a_{2})}{\Gamma(a_{2}^{p})}\right)^{-(k^{*}-1)} \,\frac{a_{2}^{p}}{a_{2}}\,\left(\prod_{l=2}^{k^{*}}\delta_{l}\right)^{a_{2}^ {p}-a_{2}}\,e^{a_{2}-a_{2}^{p}}.\] _Step 7._ At each iteration generate a random number \(u_{g}\) from \(\mathcal{U}(0,1)\). If \(u_{g}\leq p(g)\), check if any columns of the factor loading matrix \(\boldsymbol{\Lambda}\) are within the pre-specified neighbourhood of 0, and if this is so, discard the redundant columns and all its corresponding parameters. In the case when the number of such columns is zero, generate an additional factor by sampling its parameters from the prior distributions. ### Practical applications and properties The MGP prior has initially been developed for high-dimensional datasets with \(p\gg T\) and a sparse covariance matrix structure, such as genes expression data. However, it acquired a wide-spread popularity and has been proved useful in various applications, see e.g. Montagna et al. (2012) and Rai et al. (2014), amongst others. An application of particular interest is the infinite mixture of infinite factor analysers (IMIFA) model introduced in Murphy et al. (2020), where the MGP prior was used in the context of a mixture of factor analysers to allow automatic inference on the number of latent factors within each cluster. However, the MGP model has also some important limitations. Some of these limitations are investigated in Durante (2017), who addressed the dependence of the shrinkage induced by the MGP prior on the value of the hyperparameters \(a_{1}>0\) and \(a_{2}>0\). Bhattacharya and Dunson (2011) state that the \(\tau_{h}\)s in (3) are stochastically increasing with increasing \(h\) under the restriction \(a_{2}>1\), which means that the induced prior on \(1/\tau_{h}\) increasingly shrinks the underlying quantity towards zero as the column index \(h\) increases, provided that \(a_{2}>1\). Durante (2017) argues that this is not sufficient to guarantee the increasing shrinkage property in a general case. Instead, further conditions are required, such as \[a_{2}>b_{2}+1,\qquad a_{2}>a_{1} \tag{4}\] for the increasing penalization of a high number of factors to hold (in expectation), providing that \(a_{1}>0\) and \(a_{2}>0\) and the values of \(a_{1}\) are not excessively high. In his simulation study of the performance of the MGP prior for various values of the hyperparameters \(a_{1}\) and \(a_{2}\), Durante (2017) investigates the behaviour of the model with \(T=100\), \(p=10\), and two different values for the true number of factors, namely \(K=2\) and \(K=6\). The results show an improved posterior concentration when the parameters \(a_{1}\) and \(a_{2}\) satisfy condition (4), specially for the case \(K=2\). As the true rank of the model increases, there is evidence that the shrinkage induced by the MGP prior might be too strong. Another critique of the MGP prior appeared in Legramanti et al. (2020), who pointed out that the hyperparameters \(a_{1}\) and \(a_{2}\) both control the rate of shrinkage and the prior for the loadings on active factors. 
This creates a trade-off between the need to maintain considerably diffuse priors for active components and the endeavour to shrink the redundant ones. In their simulation study, Legramanti et al. (2020) found that the MGP prior significantly overestimates the number of active factors on a medium-sized data set with \(p<T\). In an attempt to evaluate the performance of the MGP prior when the hyperparameters \(a_{1}\) and \(a_{2}\) are derived from data, we simulated a dataset in a similar way as in Bhattacharya and Dunson (2011). More specifically, a synthetic data set was simulated with \(T=100\) and idiosyncratic variances sampled from \(\mathcal{G}^{-1}(1,0.25)\). The number of non-zero elements in each column of \(\boldsymbol{\Lambda}\) was chosen between \(k+1\) and \(2k\), with zeros allocated randomly and non-zero elements sampled independently from \(N(0,9)\). We generated \(\boldsymbol{y}_{t}\) from \(N_{p}(0,\boldsymbol{\Omega})\), where \(\boldsymbol{\Omega}=\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\prime}+\boldsymbol{\Sigma}\). Further, we chose six \((p,K)\) combinations to test various dimensions of \(\boldsymbol{\Lambda}\), namely \((6,2)\), \((10,3)\), \((30,5)\), \((50,8)\), \((100,15)\) and \((150,25)\), with a conservative initial upper bound of \(k_{0}=\min(p,5\log(p))\), and \(k_{0}=10\log(p)\) for the latter case with \(p>T\). For each pair we considered 10 simulation replicates. The simulation was run for \(30000\) iterations with a burn-in of \(10000\). We used the following hyperparameter values: \(\nu_{1}\) and \(\nu_{2}\) both equal to \(3\), and the rate parameters \(b_{1}\) and \(b_{2}\) in the Gamma priors for \(\delta_{1}\) and \(\delta_{l}\) set at \(1\). For the case when \(p<T\), \(\alpha_{0}\) and \(\alpha_{1}\) in the adaptation probability expression were set as \(-0.5\) and \(-3\times(10)^{-4}\), and as \(-1\) and \(-5\times(10)^{-4}\) for the case when \(p\geq T\). The threshold for monitoring the columns to discard was set at \(0.01\)2, with the proportion of elements required to be below the threshold at \(80\) % of \(p\). Footnote 2: Setting the threshold for monitoring the redundant columns at a smaller value than \(0.01\) in the case when \(p\geq T\) led to an improvement of the results. However, tuning the threshold parameters remains highly heuristic and can be tricky while working with real data sets when the true number of factors is not known. The simulation results in Table 1 show that the model tends to overestimate the number of active factors in the case when \(p\leq T\). In the last case, when the number of variables \(p\) exceeds the number of observations \(T\), the number of active factors is severely underestimated compared to the true one. The last two columns in Table 1 show the posterior means of \(a_{1}\) and \(a_{2}\). The first efficient shrinkage condition of Durante (2017), \(a_{2}>b_{2}+1\), holds for all \((p,k)\) combinations considered. For the first three combinations of \(p\) and \(k\), the column shrinkage parameters \(a_{1}\) and \(a_{2}\), estimated from the data, are in accordance with the second efficient shrinkage condition of Durante (2017), namely \(a_{2}>a_{1}\). However, with higher \(p\), the condition \(a_{2}>a_{1}\) no longer seems to hold once \(p\) gets closer to \(50\).
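For reference, the data-generating process used in this simulation can be sketched as follows; this is a loose reconstruction of the design described above, not the exact code used to produce Table 1.

```python
import numpy as np

def simulate_sparse_factor_data(p, K, T=100, rng=None):
    """Synthetic data broadly following the simulation design described above."""
    rng = rng or np.random.default_rng(1)
    Lambda = np.zeros((p, K))
    for h in range(K):
        n_nonzero = min(int(rng.integers(K + 1, 2 * K + 1)), p)   # between K+1 and 2K
        idx = rng.choice(p, size=n_nonzero, replace=False)        # zeros allocated randomly
        Lambda[idx, h] = rng.normal(0.0, 3.0, size=n_nonzero)     # non-zero loadings ~ N(0, 9)
    sigma2 = 1.0 / rng.gamma(1.0, 1.0 / 0.25, size=p)             # idiosyncratic variances ~ IG(1, 0.25)
    Omega = Lambda @ Lambda.T + np.diag(sigma2)
    Y = rng.multivariate_normal(np.zeros(p), Omega, size=T).T     # (p, T) observations
    return Y, Lambda, sigma2

Y, Lambda_true, sigma2_true = simulate_sparse_factor_data(p=30, K=5)
```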
The behaviour of the estimated shrinkage parameters \(\hat{a}_{1}\) and \(\hat{a}_{2}\) is of some interest, especially in view of the simulation study in Durante (2017), which suggests that the shrinkage induced by the MGP prior (and satisfying the condition \(a_{2}>a_{1}\)) might prove too strong when the dimension of the data set increases. Assigning a hyperprior to influential parameters, as we did in the case of \(a_{1}\) and \(a_{2}\), is a good way to reduce uncertainty and subjectivity of the model. However, the adaptation mechanism of such a sampler involves several hyperparameters, which may need to be adjusted depending on the nature and dimensionality of data.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \((p,K)\) & mode \(k^{*}\) & IQR & \(\hat{a}_{1}\) & \(\hat{a}_{2}\) \\ \hline \((6,2)\) & 6.00 & 1.00 & 1.41 & 5.89 \\ \((10,3)\) & 5.75 & 1.30 & 1.31 & 5.12 \\ \((30,5)\) & 8.34 & 1.30 & 2.61 & 3.27 \\ \((50,8)\) & 12.30 & 1.60 & 2.62 & 2.49 \\ \((100,15)\) & 19.80 & 1.70 & 2.68 & 2.10 \\ \((150,25)\) & 5.00 & 0.00 & 4.32 & 4.96 \\ \hline \end{tabular} \end{table} Table 1: _Performance of the adaptive Gibbs sampler based on the MGP prior for various combinations of \(p\) and \(K\). The modal estimates of \(k^{*}\) and the interquartile range (IQR) are reported. \(\hat{a}_{1}\) and \(\hat{a}_{2}\) are the estimates of the values of \(a_{1}\) and \(a_{2}\) in (3) inferred via the Metropolis-Hastings step._

For example, we used an additional parameter indicating the proportion of the factor loadings in the column of \(\mathbf{\Lambda}\) which needs to be within the chosen neighbourhood of zero to be considered redundant. This was first introduced in Murphy et al. (2020), who found the choice of these truncation parameters to be a delicate issue which strongly depends on the type of the data. The threshold defining the neighbourhood of \(0\), which is used to decide which factor loadings should be discarded, is another such example. Moreover, the parameters of the adaptation probability, \(\alpha_{0}\) and \(\alpha_{1}\), also need some tuning. In our simulation study, the speed of the adaptation differed for the settings with \(p<T\) and \(p>T\), when using the same values for \(\alpha_{0}\) and \(\alpha_{1}\). The importance and difficulty of choosing a suitable truncation criterion in adaptive infinite factor algorithms was addressed in Schiavon and Canale (2020). The authors argue that the choice of truncation criteria, such as the predefined neighbourhood of zero, plays a vital role in the performance of the model. The optimal value of the criterion depends on the scale of the data, while the number of active factors can be severely underestimated if the value of the truncation criterion is too large, and severely overestimated if it is too small. This is especially true for high-dimensional data, as with \(p\) getting larger, the probability of having all values of \(|\lambda_{ih}|\) smaller than the predefined threshold goes to zero exponentially. In the absence of any guidance towards choosing an optimal value of such a threshold, this remains a highly subjective and random procedure. Schiavon and Canale (2020) suggest another way to define a criterion for truncating the redundant factors, which is robust to the scale of the data and has a well-defined upper bound.
The main idea is to truncate \(\mathbf{\Lambda}\) in such a way that the truncated model is able to explain at least a fraction \(Q\in(0,1)\) of the total variability of the data, where the variability of \(\boldsymbol{y}\) is measured by the trace of the covariance matrix \(\mathbf{\Omega}\): \[\frac{tr(\mathbf{\Lambda}_{k^{*}}\mathbf{\Lambda}_{k^{*}}^{T})+tr(\mathbf{ \Sigma})}{tr(\mathbf{\Omega})}\geq Q,\] where \(\mathbf{\Lambda}_{k^{*}}\) denotes the factor loading matrix obtained by discarding the columns of \(\mathbf{\Lambda}\) starting from \(k^{*}+1\). The authors conduct a simulation study which shows that using the suggested method to select the relevant active factors drastically improves the performance of the MGP model. ## 4 Cumulative shrinkage process prior ### The prior specification Legramanti et al. (2020) proposed another type of a nonparametric prior on the variances of the elements of \(\mathbf{\Lambda}\), which largely corrects the drawbacks of the MGP prior. The CUSP prior on the factor loadings induces shrinkage via a sequence of spike-and slab distributions that assign growing mass to the spike as the model complexity grows. The CUSP prior formalises as follows: \[\lambda_{ih}\,|\,\theta_{h}\sim N(0,\theta_{h}),\quad\text{where}\,\,\,i=1, \ldots,p\,\,\text{and}\,\,\,h=1,\ldots,\infty\] \[\theta_{h}\,|\,\pi_{h}\sim(1-\pi_{h})\mathcal{G}^{-1}(a_{\theta},b_{\theta}) +\pi_{h}\delta_{\theta_{\infty}},\qquad\pi_{h}=\sum_{l=1}^{h}w_{l},\qquad w_{l }=v_{l}\prod_{m=1}^{l-1}(1-v_{m}) \tag{5}\] where \(\pi_{h}\in(0,1)\) and the \(v_{h}\)s are generated independently from \(\mathcal{B}(1,\alpha)\), following the usual stick-breaking representation introduced in Sethuraman (1994). By integrating out \(\theta_{h}\), each loading \(\lambda_{ih}\) has the marginal prior3 Footnote 3: In the equation (5) the inverse gamma distribution for the slab is chosen for the reasons of conjugacy. In principle, this expression provides a general prior, where a sufficiently diffuse continuous distribution needs to be chosen for the slab. \[\lambda_{ih}\sim(1-\pi_{h})t_{2a_{\theta}}(0,b_{\theta}/a_{\theta})+\pi_{h}N(0, \theta_{\infty})\] where \(t_{2a_{\theta}}(0,b_{\theta}/a_{\theta})\) denotes the Student-\(t\) distribution with \(2a_{\theta}\) degrees of freedom, location 0 and scale \(b_{\theta}/a_{\theta}\). To facilitate effective shrinkage of the redundant factors, \(\theta_{\infty}\) should be set close to 0. The authors recommend a small value \(\theta_{\infty}>0\), following Ishwaran and Rao (2005), as it induces a continuous shrinkage prior on every factor loading, thus improving mixing and identification of inactive factors. The authors use the fixed value of \(\theta_{\infty}=0.05\), however, it can be replaced by some continuous distribution without affecting the key properties of the prior. This is shown in Kowal and Canale (2022), where a normal mixture of inverse-gamma priors is employed for the spike and slab distributions. The slab parameters \(a_{\theta}\) and \(b_{\theta}\) should be specified so as to induce a moderately diffuse prior on active loadings. ### Inference and adaptive Gibbs sampler The inference is done via Gibbs sampler steps. Similarly to the MGP model, the first three steps remain essentially the same as in Section 2.2, with the difference that in Step \(1\)\(D^{0}_{i1},\ldots,D^{0}_{iK}\) will be replaced by \(\theta_{1}\ldots,\theta_{H}\), where \(H\) is the truncation level. 
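Before turning to those steps, a small sketch of drawing the column variances \(\theta_{h}\) under (5), using the illustrative values \(\alpha=5\), \(a_{\theta}=b_{\theta}=2\) and \(\theta_{\infty}=0.05\), shows how the spike probability \(\pi_{h}\) accumulates with the column index.

```python
import numpy as np

rng = np.random.default_rng(0)
p, H = 10, 25
alpha = 5.0                              # stick-breaking parameter
a_theta, b_theta, theta_inf = 2.0, 2.0, 0.05

# Stick-breaking weights w_l and cumulative spike probabilities pi_h
v = rng.beta(1.0, alpha, size=H)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
pi = np.cumsum(w)

# Column variances from the spike-and-slab mixture in (5)
spike = rng.random(H) < pi
theta = np.where(spike, theta_inf,
                 1.0 / rng.gamma(a_theta, 1.0 / b_theta, size=H))   # slab: inverse gamma

# Loadings: later columns are increasingly likely to be drawn from the spike
Lambda = rng.normal(0.0, np.sqrt(theta), size=(p, H))
print(np.round(pi, 2))                   # spike probabilities grow with the column index
```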
This truncation level is chosen differently than in Bhattacharya and Dunson (2011) and the adaptation process is also different and designed in such a way that it depends less on heuristically chosen parameters. While the probability of adaptation at iteration \(g\) of the sampler is also set to satisfy the diminishing adaptation condition of Roberts and Rosenthal (2007), there is no need to pre-specify an ad-hoc parameter describing some small neighbourhood of \(0\). The inactive columns of \(\boldsymbol{\Lambda}\) are identified as those which are assigned to the spike and are discarded at iteration \(g\) with the probability \(p(g)=e^{\alpha_{0}+\alpha_{1}g}\) together with all corresponding parameters. If at iteration \(g\) all columns of the factor loading matrix are identified as active, i.e. assigned to the slab, an additional column of \(\boldsymbol{\Lambda}\) is generated from the spike and all the corresponding parameters are sampled from their respective prior distributions. The initial number of columns \(H\) at which the CUSP model is truncated is set equal to \(p+1\), following the consideration that there can be at most \(p\) active factors and by construction at least one column is assigned to the spike. The assignment of the columns of \(\boldsymbol{\Lambda}\) to spike or slab at iteration \(g\) is done using \(H^{(g)}\) categorical variables \(z_{h}\in\{1,2,\ldots,H^{(g)}\}\) with a discrete prior \(Pr(z_{h}=h\,|\,w_{h})=w_{h}\), where \(H^{(g)}\) is the number of columns in \(\boldsymbol{\Lambda}\) at iteration \(g\). The additional Gibbs sampler steps will look as follows: _Step 4._ Sample \(\theta_{h}\) in a data augmentation step. Thus, (5) can be obtained by marginalising out independent latent indicators \(z_{h}\) with probabilities \(p(z_{h}=l\,|\,w_{l})=w_{l}\) for \(l=1,\ldots,H\), from the equation \[\theta_{h}\,|\,z_{h}\sim\{1-\boldsymbol{1}(z_{h}\leq h)\}\mathcal{G}^{-1}(a_{ \theta},b_{\theta})+\boldsymbol{1}(z_{h}\leq h)\delta_{\theta_{\infty}}.\] Sample \(z_{h}\) for \(h\) in \((1,\ldots,H)\) from a categorical distribution with probabilities as below \[p(z_{h}=l\,|\,-)\sim\begin{cases}w_{l}N_{p}(\boldsymbol{\lambda}_{h};0,\theta _{\infty}\boldsymbol{I}_{p}),&l=1,\ldots,h,\\ w_{lt}t_{2a_{\theta}}\left(\boldsymbol{\lambda}_{h};0,(b_{\theta}/a_{\theta}) \boldsymbol{I}_{p}\right),&l=h+1,\ldots,H.\end{cases}\] _Step 5._ Sample \(v_{l}\) for \(l\) in \((1,\ldots,H-1)\) from \[v_{l}\,|-\sim\mathcal{B}\left(1+\sum_{h=1}^{H}\mathbf{1}(z_{h}=l),\alpha+\sum_{h =1}^{H}\mathbf{1}(z_{h}>l)\right)\;.\] Set \(v_{H}=1\) and update \(w_{1},\ldots,w_{H}\) from \(w_{l}=v_{l}\prod_{m=1}^{l-1}(1-v_{m})\). 
_Step 6._ For \(h\) in \((1,\ldots,H)\): if \(z_{h}\leq h\) set \(\theta_{h}=\theta_{\infty}\), otherwise sample \(\theta_{h}\) from \(\mathcal{G}^{-1}\left(a_{\theta}+\frac{1}{2}p,b_{\theta}+\frac{1}{2}\sum_{j=1 }^{p}\lambda_{ih}^{2}\right).\) _Step 7._ After some burn-in period \(\tilde{g}\) required for the stabilization of the chain, the truncation index \(H^{(g)}\) and the number of active factors \(H^{*(g)}=\sum_{h=1}^{H^{(g)}}\mathbf{1}(z_{h}^{(g)}>h)\) are adapted with probability \(p(g)=exp(\alpha_{0}+\alpha_{1}g)\)4 as follows: Footnote 4: The coefficients \(\alpha_{0}\) and \(\alpha_{1}\) are chosen according to the criteria described in Section 3.2 * if \(H^{*(g)}<H^{(g-1)}-1\): \[\text{set }H^{(g)}=H^{*(g)}+1\text{, drop inactive columns in }\mathbf{\Lambda}^{(g)}\text{ along with the associated parameters in }\mathbf{F}^{(g)}\text{, }\mathbf{\theta}^{(g)}\text{ and }\mathbf{w}^{(g)}\text{, and add the final component sampled from the spike to }\mathbf{\Lambda}^{(g)}\text{, together with the associated parameters in }\mathbf{F}^{(g)}\text{, }\mathbf{\theta}^{(g)}\text{ and }\mathbf{w}^{(g)}\text{ sampled from the corresponding priors}\] * otherwise: \[\text{set }H^{(g)}=H^{(g-1)}+1\text{ and add the final column sampled from the spike to }\mathbf{\Lambda}^{(g)}\text{, together with the associated parameters in }\mathbf{F}^{(g)}\text{, }\mathbf{\theta}^{(g)}\text{ and }\mathbf{w}^{(g)}\text{ sampled from the corresponding priors. ### Practical applications and properties Since its introduction, the CUSP prior has been widely used in both theoretical studies and practical applications. The most notable of them include Kowal and Canale (2022), who employed the further generalised CUSP prior in the context of nonparametric functional bases; Fruhwirth-Schnatter (2023), who extended the CUSP prior to the class of generalized cumulative shrinkage priors with arbitrary stick-breaking representations which might be finite or infinite; Gu and Dunson (2023), who applied the CUSP prior to infer the number of latent binary variables in the context of a Bayesian Pyramid (a multilayer discrete latent structure model for discrete data). In contrast to the MGP prior, the CUSP prior on factor loadings provides a clear separation in the parameters which control active factors and the shrinkage of the redundant terms. Thus, the shrinkage rate depends on \(\alpha\) in a sense that smaller values of \(\alpha\) enforce more rapid shrinkage and therefore smaller number of factors. The parameters \(a_{\theta},b_{\theta}\) of the inverse gamma prior for the slab control modelling of active factors (the inverse gamma prior can be replaced by another suitable continuous prior) and can be sampled from data in the spirit of the parameters \(a_{1}\) and \(a_{2}\) in the MGP model. To evaluate the comparative performance of the model with the CUSP prior on the data sets of various dimensionality, we simulated data sets in the same way as in Section 3.3. The stick breaking parameter \(\alpha\), which represents a prior expectation of the number of active factors in the dataset, was set to \(5\) (as in Legramanti et al. (2020)). We also choose the same parameters of the slab distribution as in Legramanti et al. (2020), namely \(a_{\theta}=b_{\theta}=2\) and \(\theta_{\infty}=0.05\). The parameters of the adaptation probability of the sampler \(\alpha_{0}\) and \(\alpha_{1}\) were set as \(-1\) and \(-5\times(10)^{-4}\). 
The simulations were run for 15,000 iterations, with 5,000 discarded as burn-in, as convergence was achieved faster than in the case of the MGP prior. The simulation results are presented in Table 2 and show that the model was able to recover the correct number of factors in all considered cases. The CUSP model offers significant advantages compared to the MGP model by eliminating the very subjective and influential truncation threshold and decoupling the generation mechanism for active and redundant components. This results in much more robust estimations of the number of factors in data sets of various dimensions. In our experience, assigning some continuous distribution to \(\delta_{\theta_{\infty}}\) and a hyperprior to \(b_{\theta}\) can improve the performance, especially on non-standardised data sets. The model provides poor uncertainty quantification, with the sampler often being stuck in one (in most cases correct) value of \(H^{*}\). This problem was addressed in Kowal and Canale (2022) by extending the CUSP prior with a parameter expansion scheme which disperses the shrinkage applied to the factors.

\begin{table} \begin{tabular}{c|c|c} \hline \((p,K)\) & mode \(H^{*}\) & IQR \\ \hline \((6,2)\) & 2.00 & 0.00 \\ \((10,3)\) & 3.00 & 0.00 \\ \((30,5)\) & 5.00 & 0.00 \\ \((50,8)\) & 8.00 & 0.00 \\ \((100,15)\) & 15.00 & 0.00 \\ \hline \end{tabular} \end{table} Table 2: _Performance of the adaptive Gibbs sampler based on the CUSP prior for various combinations of \(p\) and \(K\). The modal estimates of \(H^{*}\) and the interquartile range (IQR) are reported._

## 5 Indian buffet process prior ### The prior specification Another, slightly different approach to modelling factor loading matrices involves the Indian Buffet Process (Griffiths and Ghahramani (2006)), which defines a distribution over infinite binary matrices, to provide sparsity and a framework for inferring the number of latent factors in the data set. This approach was first suggested in Knowles and Ghahramani (2011) and is formally presented below. First, a binary matrix \(\mathbf{Z}\) is introduced whose elements indicate whether an observed variable \(i\) has a contribution (non-zero loading) from factor \(h\). Then the elements of \(\mathbf{\Lambda}\) can be modelled in the following way: \[\lambda_{ih}|z_{ih}\sim z_{ih}N(\lambda_{ih};0,\beta_{h}^{-1})+(1-z_{ih})\delta_{0}(\lambda_{ih}),\] where \(\beta_{h}\) is the precision of the factor loadings in the \(h\)th column of \(\mathbf{\Lambda}\) and \(\delta_{0}\) is a delta function with a point-mass at \(0\). Thus, the factor loadings are modelled via a spike-and-slab distribution; however, differently from the CUSP prior, the separation into the spike and the slab is done not with a variance parameter but directly for the factor loadings \(\lambda_{ih}\) via an auxiliary binary indicator matrix. This allows a potentially infinite number of latent factors, i.e. \(\mathbf{Z}\) has infinitely many columns of which only a finite number will have nonzero entries. If \(\pi_{h}\) is the probability of factor \(h\) contributing to any of the \(p\) variables, and \(K\) is the (for the moment finite) number of latent factors, the IBP with the intensity parameter \(\alpha_{IB}\) arises from the Beta-Bernoulli prior: \[z_{ih}|\pi_{h}\sim Bernoulli(\pi_{h}),\qquad\pi_{h}|\alpha_{IB}\sim\mathcal{B}\left(\frac{\alpha_{IB}}{K},1\right),\] by setting \(K\rightarrow\infty\) and integrating out \(\pi_{h}\).
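In the limit \(K\rightarrow\infty\), this Beta-Bernoulli construction corresponds to the usual sequential ("culinary") representation of the IBP over the \(p\) variables. A minimal sketch of drawing such a binary matrix \(\mathbf{Z}\) is given below, purely for illustration.

```python
import numpy as np

def sample_ibp(p, alpha, rng=None):
    """Draw a binary matrix Z (p rows, random number of non-zero columns)
    from the Indian buffet process with intensity parameter alpha."""
    rng = rng or np.random.default_rng(0)
    columns = []                                  # one list of row indices per active column
    for i in range(p):                            # "customers" = variables
        for rows in columns:                      # share an existing factor with prob m_h / (i+1)
            if rng.random() < len(rows) / (i + 1.0):
                rows.append(i)
        for _ in range(rng.poisson(alpha / (i + 1.0))):   # open new factors
            columns.append([i])
    Z = np.zeros((p, len(columns)), dtype=int)
    for h, rows in enumerate(columns):
        Z[rows, h] = 1
    return Z

Z = sample_ibp(p=10, alpha=2.0)
print(Z.shape[1], "columns with at least one non-zero entry")
```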
### Inference and adaptive Gibbs sampler The inference is done via a Gibbs sampler, of which the second and the third steps are the same as in Section 2.2. The initial number of factors, which will define the dimensions of \(\mathbf{\Lambda}\) and \(\mathbf{Z}\) is chosen as some conservative number which clearly overfits any possible number of factors in the data set. Step \(1\) has the difference that not the \(i\)th row of the factor loadings matrix \(\mathbf{\Lambda}\) but each element \(\lambda_{ih}\) is sampled separately from the univariate normal distribution, if \(z_{ih}=1\): _Step 1_. Sample \(\lambda_{ih}\) for which \(z_{ih}=1\) from \[\lambda_{ih}|-\sim N\left((\beta_{h}+\sigma_{i}^{-2}\mathbf{f}_{h}\mathbf{f}_{h}^{T})^ {-1}\sigma_{i}^{-2}\mathbf{f}_{h}\mathbf{y}_{i}^{T},(\beta_{h}+\sigma_{i}^{-2}\mathbf{f}_ {h}\mathbf{f}_{h}^{T})^{-1}\right)\] where \(\mathbf{f}_{h}\) is a vector of \(t=1,\ldots,T\) observations of factor \(h\). The precisions \(\beta_{h}\) will be sampled in the following way: _Step 4_. Sampling \(\beta_{h}\) providing it is given a gamma prior \(\mathcal{G}(a_{\beta},b_{\beta})\) \[\beta_{h}\,|\,z_{h},\lambda_{ih}\sim\mathcal{G}\left(a_{\beta}+\frac{\sum_{i= 1}^{p}z_{ih}}{2},b_{\beta}+\sum_{i,h}\lambda_{ih}^{2}\right).\] The binary indicator \(z_{ih}\) can be sampled using the fact that it is possible to calculate the posterior density of the ratio \(\frac{p(z_{ih}=1|-)}{p(z_{ih}=0|-)}\) from the likelihood and prior probabilities and for every element there can be only two events, \(z_{ih}=1\) or \(z_{ih}=0\). This is done in the following way: _Step 5_. Sample binary indicator \(z_{ih}\) using \[\frac{p(z_{ih}=1|-)}{p(z_{ih}=0|-)}\sim\frac{\sqrt{(\beta_{h}+\sigma_{i}^{-2} \mathbf{f}_{h}\mathbf{f}_{h}^{T})^{-1}\beta_{h}}exp\left(\frac{1}{2}(\beta_{h}+\sigma_ {i}^{-2}\mathbf{f}_{h}\mathbf{f}_{h}^{T})^{-1}(\sigma_{i}^{-2}\mathbf{f}_{h}\mathbf{y}_{i}^{T} )^{2}\right)m_{-i,h}}{T-1-m_{-i,h}}\] where \(m_{-i,h}\) is the number of other variables for which factor \(h\) is active, not counting variable \(i\). Although the binary matrix \(\mathbf{Z}\) has infinitely many columns, only the nonzero ones contribute to the likelihood. However, one needs to take into account the zero columns too, as the number of factors can (and in many cases will) change at the subsequent iterations of the sampler. Let us denote \(\kappa_{i}\) the number of columns of \(\mathbf{Z}\) which contain \(1\) only in row \(i\), so it will contain information about the number of factors which are only active for the variable \(i\)5. After the sampling step 5, \(\kappa_{i}=0\) for any \(i\) by design, so the new factors \(\kappa_{i}\) are sampled in a separate MH step. Note that this is not a random walk MH step as the proposal densities are not symmetric. 
_Step 6._ Sample the number of new active factors \(\kappa_{i}\) in a MH step with the following proposal density \[\rho_{\kappa_{i}}=(2\pi)^{\frac{T\kappa_{i}}{2}}|\mathbf{M}|^{-\frac{T}{2}}exp\left( \frac{1}{2}\sum_{t}\mathbf{m}^{T}\mathbf{M}\mathbf{m}\right)\frac{Pois(\kappa_{i};\alpha_{ IB}/(p-1))}{Pois(\kappa_{i};\alpha_{IB}\nu/(p-1))},\] where \(\nu>0\) is a tuning parameter aimed at improving mixing, \(\mathbf{M}=\sigma_{i}^{-2}\mathbf{\lambda}_{\kappa_{i}}\mathbf{\lambda}_{\kappa_{i}}^{T}+ \mathbf{I}_{\kappa_{i}}\) with \(\mathbf{\lambda}_{\kappa_{i}}\) denoting a \(1\times\kappa_{i}\) vector of the new elements of the factor loading matrix, and \(\mathbf{m}=\mathbf{M}^{-1}\sigma_{i}^{-2}\mathbf{\lambda}_{\kappa_{i}}(y_{it}-\mathbf{ \lambda}_{i}^{T}\mathbf{f}_{t})\). Steps 5 and 6 are designed to be in one loop for \(i=(1,\ldots,p)\), i.e. for each variable \(i\), first, the indicator \(z_{ih}\) is sampled for every \(h\), and then the number of new factors for variable \(i\) is sampled in the following step. _Step 7._ Assuming the gamma prior \(\mathcal{G}(a_{\alpha},b_{\alpha})\), sample the IBP strength parameter \(\alpha_{IB}\) from \[\alpha_{IB}\,|\,\mathbf{Z}\sim\mathcal{G}\left(a_{\alpha}+K_{+},b_{\alpha}+\sum_{ j=1}^{p}\frac{1}{j}\right),\] where \(K_{+}\) is the number of active factors for which \(z_{ih}=1\) at least for one \(i\). ### Practical applications and properties The IBP prior coupled with a spike-and-slab distribution proved to be a useful approach to model sparse factor loadings and represents an alternative to implementing an increasing shrinkage on the columns of the factor loading matrix in terms of inferring the number of active factors. A somewhat related work was introduced earlier by Rai and Daume (2008) in the context of a nonparametric Bayesian factor regression model, where a sparse IBP prior was coupled with a hierarchical prior over factors. The authors did not assume independence of factors as in traditional factor analysis, and instead of a normal prior used a Kingman's coalescent prior which describes an exchangeable distribution over a countable set of factors. The original model of Knowles and Ghahramani (2011) was further extended in Rockova and George (2016), where the authors couple the IBP prior on the binary indicators with a spike-and-slab LASSO (SSL) prior of the elements of \(\mathbf{\Lambda}\). The SSL prior assigns to both the spike and the slab components a Laplace distribution designed so that the slab has a common scale parameter and the spike has a factor-specific scale parameter (different for each \(h\)). This prior tackles the problem of rotational invariance of \(\mathbf{\Lambda}\) by automatically promoting rotations with many zero loadings thus resulting in many exact zeros in the factor loading matrix and facilitating identification. Differently from Knowles and Ghahramani (2011) and Rai and Daume (2008), who do inference via a Gibbs sampler, Rockova and George (2016) use an expectation-maximization (EM) algorithm, which brings computational advantages for high-dimensional data. Recently, Fruhwirth-Schnatter (2023) suggested an exchangeable shrinkage process (ESP) prior for finite number of factors \(K\), which has relation to the IBP prior when \(K\rightarrow\infty\). 
The prior in its general form is formulated as follows: \[\lambda_{ih}\,|\,\tau_{h}\sim(1-\tau_{h})\delta_{0}+\tau_{h}P_{slab}(\lambda_{ ih}),\qquad\tau_{h}\,|\,K\sim\mathcal{B}(a_{K},b_{K}),\qquad h=1,\ldots,K, \tag{6}\] where \(\delta_{0}\) is a Dirac delta, \(P_{slab}\) is an arbitrary continuous slab distribution, and \(K\) is the finite number of factors. The slab probabilities \(\tau_{h}\)s then decide the number of active factors \(K_{+}<K\). When in (6) \(b_{K}=1\) and \(a_{K}=\alpha_{IB}/K\), for \(K\to\infty\) this prior converges to the IBP prior (Teh et al. (2007)). The ESP prior has been used in the context of sparse Bayesian factor analysis in Fruhwirth-Schnatter et al. (2022) and in the context of a mixture of factor analysers model in Grushanina and Fruhwirth-Schnatter (2023). ## 6 Generalised infinite factor models One of the recent developments in the area of infinite factor models is the generalised infinite factorisation model developed in Schiavon et al. (2022), where authors were motivated by the existing methods' drawbacks such as lack of accommodation for grouped variables and other non-exchangeable structures. While the existing increasing shrinkage models focus on priors for \(\boldsymbol{\Lambda}\) which are exchangeable within columns, they lack consideration for possible grouping of the rows of \(\boldsymbol{\Lambda}\), which can occur in many applications, such as, for example, different genes in genomic data sets. Here we briefly outline the main idea of the proposed method without going into much detail. The generalised model is defined in the following way: \[y_{it}=s_{i}(z_{it}),\qquad\boldsymbol{z}_{t}=\boldsymbol{\Lambda}\boldsymbol{ f}_{t}+\boldsymbol{\epsilon}_{t},\qquad\epsilon_{t}\sim\eta_{\epsilon}, \tag{7}\] where \(\boldsymbol{\Lambda}\) is a \(p\times K\) factor loading matrix, \(\boldsymbol{f}_{t}\) is a \(K\)-dimensional factor with a diagonal covariance matrix \(\boldsymbol{\Xi}=diag(\xi_{11},\ldots,\xi_{KK})\), \(\boldsymbol{\epsilon}_{t}\) is a \(p\)-dimensional error term independent of factors, \(\eta_{\epsilon}\) is some arbitrary distribution, and the function \(s_{i}\) is the function \(s_{i}:\mathbb{R}\to\mathbb{R}\), for \(i=1,\ldots,p\). Here, differently from the factor model described in Section 2.1, it is not necessarily assumed that \(\boldsymbol{f}_{t}\) and \(\boldsymbol{\epsilon}_{t}\) are normally distributed. When, in fact, this is the case and \(s_{i}\) is the identity function, the model (7) takes the form of a Gaussian linear factor model described in Section 2.1. When \(s_{i}=F_{i}^{-1}(\Phi(z_{it}))\) with \(\Phi(z_{it})\) denoting a Gaussian cumulative distribution function, the model (7) becomes a Gaussian copula factor model as described in Murray et al. (2013). Choosing an appropriate \(s_{i}\) and modifying the assumptions regarding the distribution of the parameters in (7) results in other types of factor models. The covariance matrix \(\boldsymbol{\Omega}\) as in (2) has a more general form in the case of the generalised infinite factorisation model \(\boldsymbol{\Omega}=\boldsymbol{\Lambda}\boldsymbol{\Xi}\boldsymbol{\Lambda} ^{T}+\boldsymbol{\Sigma}\), where \(\boldsymbol{\Sigma}\) is the covariance matrix of the error term. 
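As a small illustration of the role of the margin transforms \(s_{i}\), the sketch below draws data from the Gaussian copula case mentioned above, approximating \(F_{i}^{-1}\) by the empirical quantiles of a reference sample and taking \(\boldsymbol{\Xi}=\boldsymbol{I}\) for simplicity; both choices are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_factor_sample(Lambda, sigma2, Y_ref, T, rng=None):
    """Draw data with y_it = F_i^{-1}(Phi(z_it)) and z_t = Lambda f_t + eps_t,
    where F_i^{-1} is approximated by empirical quantiles of Y_ref (p, n)."""
    rng = rng or np.random.default_rng(0)
    p, K = Lambda.shape
    F = rng.standard_normal((K, T))
    Z = Lambda @ F + rng.normal(0.0, np.sqrt(sigma2)[:, None], size=(p, T))
    Z = Z / np.sqrt((Lambda ** 2).sum(axis=1) + sigma2)[:, None]   # unit marginal scale
    U = norm.cdf(Z)                                                # uniform margins
    Y = np.vstack([np.quantile(Y_ref[i], U[i]) for i in range(p)])
    return Y
```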
The suggested prior on the elements of \(\boldsymbol{\Lambda}\) allows infinitely many columns, so that the number of factors \(K\to\infty\), and is formulated as follows: \[\lambda_{ih}\,|\,\theta_{ih}\sim N(0,\theta_{ih}),\quad\theta_{ih}=\tau_{0} \gamma_{h}\phi_{ih},\quad\tau_{0}\sim\eta_{\tau_{0}},\quad\gamma_{h}\sim\eta_ {\gamma_{h}},\quad\phi_{ih}\sim\eta_{\phi_{i}}, \tag{8}\] where \(\tau_{0}\), \(\gamma_{h}\) and \(\phi_{ih}\) are responsible for global, column-specific and local shrinkage, respectively, are independent a priori and the distributions \(\eta_{\tau_{0}}\), \(\eta_{\gamma_{h}}\) and \(\eta_{\phi_{i}}\) are supported on \([0,\infty)\). What is essentially different to previously described models, is that via \(\phi_{ih}\) a non-exchangeable structure is imposed on the rows of \(\boldsymbol{\Lambda}\) via some meta covariates \(\boldsymbol{X}\), which inform the sparsity structure of \(\boldsymbol{\Lambda}\). Denoting by \(\boldsymbol{X}_{p\times q}\) a matrix of \(q\) meta covariates, \(\eta_{\phi_{i}}\) should be chosen so as to satisfy: \[E(\phi_{ih}\,|\,\boldsymbol{\beta}_{h})=g(\boldsymbol{x}_{i}^{T}\boldsymbol{ \beta}_{h}),\quad\boldsymbol{\beta}_{h}=(\beta_{1h},\ldots,\beta_{qh})^{T}, \quad\beta_{mh}\sim\eta_{\beta},\quad m=1,\ldots,q,\] where \(g\) is a smooth one-to-one differentiable link function, \(\boldsymbol{x}_{i}=(x_{i1},\ldots,x_{iq})\) denotes the \(i\)th row of \(\boldsymbol{X}\), and \(\boldsymbol{\beta}_{h}\) are coefficients controlling the impact of the meta covariates on the shrinkage of the elements of the \(h\)th column of \(\mathbf{\Lambda}\). Taking the example from the ecology application studied in Schiavon et al. (2022), different bird species (variables \(i\)) may belong to the same phylogenetic order (metacovariates \(m\)), have roughly the same size, follow similar diet etc. In more details, the priors and hyperpriors on the factor loading are specified as follows: \[\tau_{0}=1,\qquad\gamma_{h}=\nu_{h}\rho_{h},\qquad\phi_{ih}\,|\, \boldsymbol{\beta}_{h}\sim Ber\{logit^{-1}(\boldsymbol{x}_{i}^{T}\boldsymbol{ \beta}_{h})c_{p}\},\] \[\nu_{h}^{-1}\sim\mathcal{G}(a_{\nu},b_{\nu}),\quad a_{\nu}>1, \quad\rho_{h}=Ber(1-\pi_{h}),\quad\boldsymbol{\beta}_{h}\sim N_{q}(0,\sigma_{ \beta}^{2}\boldsymbol{I}_{q}),\] where the link function \(g(x)\) takes the form of \(logit^{-1}(x)=e^{x}/(1+e^{x})\) and \(c_{p}\in(0,1)\) is a possible offset. The distribution of the parameter \(\pi_{h}=p(\gamma_{h}=0)\) follows a stick-breaking construction \[\pi_{h}=\sum_{l=1}^{h}w_{l},\qquad w_{l}=v_{l}\prod_{m=1}^{l-1}(1-v_{m}), \qquad v_{m}\sim\mathcal{B}(1,\alpha_{gen}),\] similar to Legramanti et al. (2020). The model inference is performed via an adaptive Gibbs sampler, which resembles the one developed for the CUSP model. The frequency of adaptation is set in accordance with the Theorem 5 of Roberts and Rosenthal (2007), and at the iteration, at which the adaptation occurs, the redundant columns of the loading matrix are discarded with all other corresponding parameters and the number of active factors is adapted accordingly. The redundant columns are identified as those for which \(\rho_{h}=0\). If at some iteration there are no redundant columns, then an additional factor and all its corresponding parameters are generated from the priors. The exact form of the Gibbs sampler steps depends on the prior assumptions for the elements of (7). 
In the case of the standard isotropic Gaussian and inverse gamma priors for factors and idiosyncratic variances, steps 2 and 3 of the sampler will be identical to the ones described in Section 2.2. For a detailed description of the Gibbs sampler steps the reader is referred to the Supplementary Material of Schiavon et al. (2022). ## 7 Discussion and identification issues Infinite factorisation models offer the considerable advantage of automatic inference on the number of active factors by allowing it to be derived from the data. This is done by assigning a non-parametric prior to the elements of the factor loading matrix which penalises an increasing number of columns. Some of these models at the same time account for the element-wise sparsity of factor loadings, which can be justified in many real-life applications, such as genetics, economics, biology, and many others. One of the weak points of such models is that they often rely on rather subjective truncation parameters, with little clear guidance on how to choose them. The MGP prior of Bhattacharya and Dunson (2011) is the most prominent example of this; the simulation studies in Schiavon and Canale (2020) and in Section 3.3 of this paper illustrate this point. This subjectivity was significantly reduced in the CUSP prior of Legramanti et al. (2020). Generalisation of the CUSP prior by setting a hyperprior on the spike parameter as in Kowal and Canale (2022) significantly improved the performance of the model on data sets of a different nature and eliminated the need for data-dependent parameter tuning. In addition, the parameter-expanded version of the CUSP model suggested in Kowal and Canale (2022) resulted in better uncertainty quantification. The class of generalised infinite factorisation models of Schiavon et al. (2022) generalises the idea of infinite factorisations with increasing shrinkage on factor loadings and incorporates it into a wide class of various types of factor models. In addition, it allows the grouping of the variables, which provides a useful feature for a wide range of applications. The truncation of the redundant factors is done in a similar way to the CUSP model; however, the complexity of this rather general model makes some subjective choices regarding hyperparameters and functional forms unavoidable. Another important issue concerns the identification of factor loadings. It is well known that the decomposition of the covariance matrix \(\mathbf{\Omega}\) as in (2) is not unique. First, the correct identification of the idiosyncratic covariance matrix should be ensured, that is, for any two representations \[\mathbf{\Omega}=\mathbf{\Lambda}\mathbf{\Lambda}^{T}+\mathbf{\Sigma},\qquad\quad\mathbf{\Omega}=\mathbf{\Theta}\mathbf{\Theta}^{T}+\mathbf{\Sigma}_{0},\] it must hold that \(\mathbf{\Sigma}=\mathbf{\Sigma}_{0}\) and, hence, that the cross-covariance matrix \(\mathbf{\Lambda}\mathbf{\Lambda}^{T}=\mathbf{\Theta}\mathbf{\Theta}^{T}\) is uniquely identified. This problem is known under the name of variance identification. The row deletion property of Anderson and Rubin (1956) presents a sufficient condition for variance identification and states that whenever an arbitrary row is deleted from \(\mathbf{\Lambda}\), two disjoint submatrices of rank \(K\) should remain. This property imposes an upper bound on the number of factors, \(K\leq\frac{p-1}{2}\). So, for dense factor models, variance identification can fail if the number of factors is too high.
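For a given (small) loading matrix, the row deletion property can be verified numerically by brute force; the function below is a naive, illustrative check written for this review and not an algorithm from the cited literature. It simply searches over row subsets, which is only feasible for small \(p\):

```python
import numpy as np
from itertools import combinations

def row_deletion_holds(Lambda: np.ndarray) -> bool:
    """Naive check of the Anderson-Rubin row deletion property: after deleting
    any single row, the remaining rows can be split into two disjoint
    submatrices that both have full column rank K."""
    p, K = Lambda.shape
    for dropped in range(p):
        rest = np.delete(Lambda, dropped, axis=0)
        found = any(
            np.linalg.matrix_rank(rest[list(block)]) == K
            and np.linalg.matrix_rank(np.delete(rest, list(block), axis=0)) == K
            for block in combinations(range(p - 1), K)
        )
        if not found:
            return False
    return True

rng = np.random.default_rng(2)
print(row_deletion_holds(rng.normal(size=(7, 3))))   # K = 3 <= (7 - 1) / 2, generically True
print(row_deletion_holds(rng.normal(size=(6, 3))))   # K = 3 >  (6 - 1) / 2, always False
```

The second call illustrates the upper bound mentioned above: with fewer than \(2K+1\) rows, no deletion can leave two disjoint rank-\(K\) submatrices.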
For sparse factor models, additional restrictions on the number of non-zero elements in each column of \(\mathbf{\Lambda}\) need to be applied (see, e.g., Fruhwirth-Schnatter et al. (2022b)). Although in most cases \(K\ll p\) and the upper bound will be respected, there is no formal guarantee of variance identification for infinite factor models even when the factor loading matrix is dense, and even less so in the case of sparse infinite factor models. The second problem deals with the correct identification of \(\mathbf{\Lambda}\) from \(\mathbf{\Lambda}\mathbf{\Lambda}^{T}\). It is referred to as the problem of rotational invariance and stems from the fact that for any semi-orthogonal matrix \(\mathbf{P}\) with \(\mathbf{P}\mathbf{P}^{T}=\mathbf{I}\), setting \(\mathbf{\Theta}=\mathbf{\Lambda}\mathbf{P}\) and \(\mathbf{g}_{t}=\mathbf{P}^{T}\mathbf{f}_{t}\), the two models \[\mathbf{y}_{t}=\mathbf{\Lambda}\mathbf{f}_{t}+\mathbf{\epsilon}_{t}\qquad\text{and}\qquad\mathbf{y}_{t}=\mathbf{\Theta}\mathbf{g}_{t}+\mathbf{\epsilon}_{t}\] are observationally indistinguishable. This problem is often addressed in the literature by imposing restrictions on the elements of \(\mathbf{\Lambda}\), such as, for example, setting the upper diagonal elements equal to zero and requiring the diagonal elements to be positive, so that \(\mathbf{\Lambda}\) represents a positive lower triangular matrix. This approach was first implemented by Geweke and Zhou (1996) and followed by many others (see, for example, Lopes and West (2004) and Carvalho et al. (2008)). This constraint introduces order dependence among the variables, which results in posterior distributions whose shapes depend on the ordering of the variables in the data set, and is thus not applicable to infinite factor models. However, these models can still be employed for the tasks of covariance matrix estimation, variable selection and prediction, which do not require identification. While variance identification is rarely addressed in the literature, and not at all in the context of infinite factor models, in recent years some ex-post identification methods aimed at tackling rotational invariance have been proposed which are applicable to infinite factor models. These methods usually involve some kind of orthogonalisation procedure applied at a post-processing step, such as, for example, the orthogonal Procrustes algorithm (Asmann et al. (2016)) or the Varimax procedure (Poworoznek et al. (2021)). There have also been some attempts to embed identification considerations into the estimation procedure. Thus, Rockova and George (2016) offer a solution to the indeterminacy due to rotational invariance via the SSL prior, which automatically promotes the rotations with many zero loadings and thus reduces posterior multimodality. Their EM algorithm provides sparse posterior modal estimates with exact zeroes in the factor loading matrix. Schiavon et al. (2022) propose an identification scheme which is somewhat similar in spirit. They search for an approximation of the maximum a posteriori estimators of \(\boldsymbol{\Lambda}\), \(\boldsymbol{\beta}=(\beta_{1},\beta_{2},\ldots)\) and \(\boldsymbol{\Sigma}\) by integrating out the scale parameters and latent factors from the posterior density function and taking the parameters of interest from the draw which produced the highest marginal posterior density function \(f(\boldsymbol{\Lambda},\boldsymbol{\beta},\boldsymbol{\Sigma}\,|\,\boldsymbol{y})\).
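The rotational invariance problem, and the flavour of the ex-post orthogonalisation fixes mentioned above, can be illustrated in a few lines of linear algebra; the sketch below (with arbitrary dimensions and a random reference matrix, not any of the cited procedures verbatim) shows that a rotated loading matrix yields the same cross-covariance and that an orthogonal Procrustes step can map it back to a reference:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.stats import ortho_group

rng = np.random.default_rng(3)
p, K = 15, 4                                  # illustrative dimensions

Lambda = rng.normal(size=(p, K))              # a "reference" loading matrix
P = ortho_group.rvs(K, random_state=4)        # random orthogonal matrix, P P^T = I
Theta = Lambda @ P                            # rotated loadings

# Rotational invariance: both loading matrices imply the same cross-covariance.
print(np.allclose(Lambda @ Lambda.T, Theta @ Theta.T))        # True

# Ex-post idea: rotate a draw back towards a reference by solving
# min_R ||Theta R - Lambda||_F over orthogonal R (orthogonal Procrustes).
R, _ = orthogonal_procrustes(Theta, Lambda)
print(np.allclose(Theta @ R, Lambda))                         # True up to rounding
```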
2309.14339
Chop & Learn: Recognizing and Generating Object-State Compositions
Recognizing and generating object-state compositions has been a challenging task, especially when generalizing to unseen compositions. In this paper, we study the task of cutting objects in different styles and the resulting object state changes. We propose a new benchmark suite Chop & Learn, to accommodate the needs of learning objects and different cut styles using multiple viewpoints. We also propose a new task of Compositional Image Generation, which can transfer learned cut styles to different objects, by generating novel object-state images. Moreover, we also use the videos for Compositional Action Recognition, and show valuable uses of this dataset for multiple video tasks. Project website: https://chopnlearn.github.io.
Nirat Saini, Hanyu Wang, Archana Swaminathan, Vinoj Jayasundara, Bo He, Kamal Gupta, Abhinav Shrivastava
2023-09-25T17:59:43Z
http://arxiv.org/abs/2309.14339v1
# Chop & Learn: Recognizing and Generating Object-State Compositions ###### Abstract Recognizing and generating object-state compositions has been a challenging task, especially when generalizing to unseen compositions. In this paper, we study the task of cutting objects in different styles and the resulting object state changes. We propose a new benchmark suite Chop & Learn, to accommodate the needs of learning objects and different cut styles using multiple viewpoints. We also propose a new task of Compositional Image Generation, which can transfer learned cut styles to different objects, by generating novel object-state images. Moreover, we also use the videos for Compositional Action Recognition, and show valuable uses of this dataset for multiple video tasks. Project website: [https://chopnlearn.github.io](https://chopnlearn.github.io). ## 1 Introduction Objects often exist in different shapes, colors, and textures in the real world. These visually discernible properties of objects, also known as states or attributes, can be inherent to an object (e.g., color) or be a result of an action (e.g., chopped). Generalization to unseen properties of objects remains an Achilles heel of current data-driven recognition models (e.g., deep networks) that assume robust training data is available for exhaustive object properties. However, humans (and even animals) [4, 7] can innately imagine and recognize a large number of objects with varying properties, by composing a few known objects and their states. This ability to synthesize and recognize new combinations from finite concepts, called _compositional generalization_, is often absent in modern deep learning models [30]. Several recent works have been proposed to study composition in terms of the disentanglement of objects and their states in images [24, 34, 56, 73] as well as videos [3, 5, 12, 19, 55, 60, 61]. A few works have attempted to improve open-world text-to-image generation models [13, 53] for the task of compositional generation. However, the current suite of datasets lacks either granular annotations for object states or enough data to study how object states evolve under different conditions. Therefore, measuring the compositional generalizability of these models on different tasks remains an open challenge. In this paper, we propose a new dataset, **Chop & Learn** (**ChopNLearn**), collected to support studying compositional generalization, the ability to recognize and generate unseen compositions of objects in different states. To focus on the compositional aspect, we limit our study to a common task in our daily lives - cutting fruits and vegetables. When using different styles of cutting, these objects undergo different transformations and the resulting states are easily recognizable by humans. Our goal is to study how these different styles can be applied to a variety of Figure 1: We present **Chop & Learn** (**ChopNLearn**), a new dataset and benchmark suite for the tasks of Compositional Image Generation and Compositional Action Recognition. It consists of 1260 video clips and 112 object state combinations captured from multiple viewpoints for 20 objects and 8 cut styles. We also propose two new compositional tasks and benchmarks - (1) Image Generation: given training images of various objects in various states, the goal is to generate images of unseen combinations of objects and states. 
(2) Action Recognition: training videos are used to recognize objects along with transition from state1 \(\rightarrow\) state2, to generalize on recognizing unseen object-state transitions. objects for recognizing unseen object states. More specifically, we select _twenty_ objects and _seven_ commonly used styles of cuts (plus whole object) which results in object-state pairs with different granularity and sizes (Figure 1). We collect videos of these objects being from _four_ different viewpoints, and label different object states in each video. Each style of cut changes the visual appearance of different objects in different ways. To study and understand object appearance changes, we propose two new benchmark tasks of Compositional Image Generation and Compositional Action Recognition, with a focus on unseen compositions. The objective of the first task is to generate an image based on an (object, state) composition that was not seen during training. As shown in Figure 1, during training, a generative model is provided with images of an (apple, whole) as well as an (orange, round slices). At the test time, the model has to synthesize a new unseen composition (apple, round slices). We propose to adapt large-scale text-to-image generative models for this task. Specifically, by using text prompts to represent the object-state composition, we benchmark several existing methods such as Textual Inversion [13] and DreamBooth [53]. We also propose a new method by introducing new tokens for objects and states and simultaneously fine-tuning language and diffusion models. Lastly, we discuss the challenges and limitations of prior works as well as the proposed generative model with an extensive evaluation. In the second task, we extend an existing task of Compositional Action Recognition [36]. While the focus of prior work [36] is on long-term activity tracking in videos, we aim to recognize subtle changes in object states which is a crucial first step for activity recognition. By detecting the initial state and final object state compositions, our task allows the model to learn unseen object state changes robustly. We benchmark multiple recent baselines for video tasks on the ChopNLearn dataset. Finally, we discuss various other applications and tasks that can use our dataset in image and video domains. To summarize, our contributions are threefold: * We propose a new dataset ChopNLearn, consisting of a large number of images and videos of diverse object-state compositions with multiple camera views. * We introduce the task of Compositional Image Generation, which goes beyond the common conditional image generation benchmarks, and focuses on generating images for unseen object and state compositions. * We introduce a new benchmark for the task of Compositional Action Recognition, which aims at understanding and learning changes in object states over time and across different viewpoints. ## 2 Related Work Object states or attributes have recently received significant attention for recognition tasks, in images and videos. Some of the common works and their dissimilarities with the proposed dataset are mentioned here. **Attributes of Objects.** In the image domain, states are often referred to as attributes for Compositional Learning of attribute-object pairs. Attributes describe the visual properties of objects, such as shape, color, structure and texture. The common datasets used are MIT-states [24], UT-Zappos [73], COCO-attributes [43], CGQA [35] and VAW [45]. 
All of these datasets consist of web-scraped images of various types of objects (from furniture to shoes and clothes to food items), which makes the variety of states very diverse. Most of the prior works [31, 34, 35, 41, 44, 46, 56, 59, 70, 72] focus on attribute-object recognition tasks using compositional learning but do not expand to image generation tasks due to the diversity in background and attributes. Some works in compositional zero-shot learning of attributes show visual disentanglement of attributes from objects [56, 68], however, they only hallucinate compositions of unseen attribute-object pairs in the feature space, rather than the image space. Moreover, even newer large vision-language models such as CLIP [48], DALL-E [50] fail to capture the subtle attributes of objects which are visually discernible [38, 74]. Therefore, the image generation task for objects with different attributes is still unexplored, which is a major focus of our work. **States for Action Recognition.** Detecting object states and corresponding actions from videos is explored in supervised [3, 5, 12, 55] and self-supervised manners [11, 60, \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{3}{c}{Total \# of} & \multicolumn{3}{c}{Avg. \# of Samples} & \multirow{2}{*}{\(N\)} & \multirow{2}{*}{\# of Views} \\ \cline{2-2} \cline{5-10} & Samples & Obj. & Comp. & & & & & & & Views \\ \hline MIT-States\({}^{\dagger}\)[25] & 1676 & 27 & 52 & 4 & 62.07 & 32.23 & 419 & 48 & 1 \\ Youcook2 [76] & 714 & 160 & 313 & 3 & 7.3 & 2.2 & 166.7 & 26 & 1 \\ VISOR [9] & 301 & 58 & 122 & 3 & 5.2 & 2.5 & 42.9 & 3 & 1 \\ COIN [64] & 390 & 6 & 7 & 2 & 65 & 55 & 195 & 6 & 1 \\ Ego4D [14] & 216 & 12 & 12 & 3 & 18.2 & 18 & 54.5 & 8 & 1 \\ 50Salsda [6] & 904 & 5 & 6 & 2 & 182 & 152 & 457 & 6 & 1 \\ Changel [60] & 264 & 8 & 14 & 4 & 46.3 & 26.4 & 96 & 14 & 1 \\ CrossTask [77] & 1150 & 7 & 8 & 2 & 164.3 & 143.7 & 575 & 8 & 1 \\ Breakfast [29] & 1055 & 3 & 4 & 2 & 351.7 & 263.8 & 527.5 & 4 & 1 \\ \hline **ChopNLearn** & **1260** & **20** & **112** & **8** & **74.2** & **11.8** & **185.5** & **112** & **4** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison with other video datasets.** This table highlights the distribution of the objects, states and compositions in different datasets. Obj. refers to objects, Comp. is compositions of objects and styles, \(N\) refers to the number of compositions that have more than 10 samples, and Styles\({}^{*}\) refers to grouping of styles: instead of generic names like cut, chop, etc., we use 3 distinct styles (chop/dice, peel, grate) as styles. MIT-States\({}^{\dagger}\) is the only image-based dataset, the rest are video-based datasets. All these data numbers are for edible objects and cutting style actions from respective datasets. Our dataset has uniform distribution for each metric in the table, which makes it suitable for learning objects and their states. 61]. While some works focus on recognizing actions using states [3, 5, 12, 55], others discover states as the future frames in the videos in [11, 26]. Some works [60, 61] also detect the exact frames of state 1, state 2 and the action that causes transition from state 1 \(\rightarrow\) 2. Another recent work (Ego4D [14]) also proposes new tasks like point-of-return state-change prediction for object state transition detection. Hence, object states so far have been used as a signal for detecting and localizing actions. 
We focus on extending this understanding of states to generalize across different objects with a limited number of seen object-state transition videos. **Compositional Action Recognition.** In contrast to randomly assigning samples for training and testing, [36] presented a new task of Compositional Action Recognition. The premise of this task is that actions are split based on the objects they apply to: during training, only a limited set of objects is seen for each action label, while during testing, unseen objects appear for seen action labels. Subsequent studies [28, 33, 47, 67, 71] used relationships between object and state bounding boxes to model the compositional aspect, where the evaluation is performed on how well the composition of unseen object and state is recognized. We propose a similar task, where models are trained on videos of seen compositions and tested on unseen compositions. **Comparison with existing Datasets.** The existing image datasets such as MIT-states [24], UT-Zappos [73], COCO-attributes [43], CGQA [35] and VAW [45] are not suitable for image generation tasks for two reasons: 1) there are very few transferable objects and attributes, and 2) the images are web-scraped and very diverse, with varied backgrounds. Due to this, generative models latch onto background details rather than understanding subtle changes in objects. In the video domain, there have been various video datasets with procedural and kitchen activities that capture object and state transformations, such as Epic-Kitchens [8] with its object and hand bounding box annotation version VISOR [9], Youcook2 [76], Ego4D [14], COIN [64], HowTo100M [40], Breakfast [29], 50Salads [62], CrossTask [77] and ChangeIt [60]. There are a few common problems across these datasets: (1) Most of these datasets lack annotations for the granularity of cutting styles. The styles labeled are cut, chop, slice, dice, peel, grate, julienne, which only comprise three broader styles of transformations, _i.e_. chop/dice, peel and grate. (2) The compositions of different objects and states are highly skewed, similar to the image datasets. Some datasets have a long-tail distribution of objects, which can make it challenging for models to learn per-object-based states when there is only one sample available in the dataset. And lastly, (3) the frames are so noisy, with many objects and attributes, that object state changes are harder to capture (as shown in the left side of Figure 2). For most datasets, the ground truth is also not annotated for object detection, which makes it even harder to look for the object of interest. Using an object detector to remove the background is an option; however, with deformable objects, most Faster-RCNN [51] based object detectors fail to capture the object itself, and latch onto smaller pieces instead. In Table 1, we show statistics of data available in different datasets, including the number of clips from other datasets that have granular annotations of object-state pairs and can be used for compositional tasks. For instance, COIN [64] has 180 categories with 10000 videos, but only 390 clips have cutting styles as labels. Further, these clips only cover cut/peel actions, and cannot be categorized further based on granularity and shape of pieces. Our proposed dataset ChopNLearn is designed to capture various objects and their cut styles, with uniformly distributed samples for 20 objects and 8 styles (whole plus 7 other cut styles; Figure 2). 
## 3 Chop & Learn Our main objective with Chop & Learn (ChopNLearn) is to understand and learn granular object states, specifically styles of cuts which can be applied to diverse variety of objects. With this in focus, we collect object state transition videos, as well as images of object in various states, with Figure 2: Left: We show examples of cutting styles from popular video datasets (VISOR [9]: chop and peel potato, Youcook2 [76]: chop broccoli, peel radish), image dataset (MIT-states [24]-slice pear, peel orange) and generation pipelines (DALL-E [50]:baton cut apple, half round slices tomato). Most of these are either too noisy to capture subtle differences in objects or do not have the granularity of specific cutting styles. Center: Our 4 camera setup captures videos of one object in 4 different views. Right: We capture 8 styles of object states, which can be derived in a hierarchical manner from larger to small cuts. Each style is of different shape and granularity. 4 different camera views (Figure 2). We discuss the design choices and motivation below. ### Design Choices **Selection of States (styles of cuts).** Fruits and vegetables are commonly cut in specific styles based on the need of the recipes. For instance, for eating an apple, we slice it in relatively large pieces while for using it in a pie, we might cut smaller or round slices of it. We select 8 common styles of cuts, _i.e._, large cut, small cut, baton, julienne, round slices, half round slices, peel, and whole for our study. These are the most common styles of cuts for vegetables and fruits, which do not require any additional training to learn apart from common kitchen operation and knife handling skills. These styles of cuts can also have similarities with respect to shapes, yet are different in granularity. For example, baton (french-fries style cut) and julienne are similar in shape (long pieces), but julienne is more finely cut than baton. Similarly, large cut is a coarser version of small cut, and half round slice is one step from round slices (as shown in Figure 2). We also have annotated the states whole and peel, which are the base states of objects. **Selection of Objects.** We want to learn to transfer styles of cuts to different objects. To ensure consistency in transfer, we also consider the base state, _i.e._, whole state of objects. For instance, it is hard to visualize large cut of carrots, if the seen data only includes rounder objects like oranges. Hence, we consider some fruits and vegetables with similar colors, textures and shapes to include consistency across visual similarities after chopping. In this study, we used seasonal fruits and vegetables categorised on the basis on their shapes, colors and textures: round small objects: [apple, pear, mango, potato, turnip, onion, kiwi], citrus fruits [lemon, orange], flower-like textured objects: [cauliflower, broccoli], larger round objects: [cantaloupe, watermelon], textured from inside objects: [bellpepper, tomato, persimmon], and long objects: [cucumber, carrot, squash, banana]. This consists of 10 fruits and 10 vegetable items, with at least one pair of similar objects presents in the dataset. **Related Groups.** One of the key aspects of this dataset is transferability of cut styles to a variety of objects. We set up some constraints and create related groups for objects and styles. These related group enable us with structural and visual style transfer abilities. 
If an object is seen from related group \(A\) with a particular style, we should be able to transfer that style to another object from the same related group \(A\) and vice-versa. In other words, we group sets of objects and cut styles which are visually similar (based on color, shape and texture) together to create related groups for objects and states separately. For states, we combine [baton, julienne], [round slices, half-round slices], and [large cut, small cut] together as related groups. For objects, we define seven groups with related objects: [apple, pear, mango], [lemon, orange], [cauliflower, broccoli], [cantaloupe, watermelon, kiwi], [bellpepper, tomato, persimmon], [potato, turnip, onion], and [cucumber, carrot, squash, banana]. ### Data Collection Setup We collect data using four GoPro cameras [1] positioned at different angles, with three participants (Figure 2). We use a green screen and green chopping board for minimum distraction in the background, such that the objects and their cut pieces are easily segmented for each view. **Granularity of styles.** For ease and consistency across participants, the size of cut pieces can be defined as the shape and ratio of one piece with respect to the whole object. For more details, please refer to the appendix. Given a set of \(n\) states and \(m\) objects, we can have at most \(m\times n\) compositions. However, our dataset does not include some compositions which are not commonly found in real world. For instance, due to the texture of onions, it is not feasible to cut onions in baton or julienne style, since the layers of the onion do not stay intact, so we do not have a sample of [baton, onion]. **Video Recording.** We primarily collect video data, and derive state change frames from long videos. Each video consists of 2-3 object states, which are annotated while data collection process using the highlight features of GoPros. For Figure 3: **Statistics for ChopNLearn:** We show the number of samples for each object-style composition in a color-coded manner: orange represents 12 samples, green represents 8 samples and blue represents 4 samples. synchronizing across different cameras, we initially start with a clapper to make a clap sound for indicating the beginning of the video. Then, we highlight the frames in one of the GoPro as the first/initial state. The participant then walks up the object and starts cutting the object. After the object is cut in one style, the participant steps back and we highlight another frame as the next state. The participant performs at least 2 styles of cut in each video, which can be done consecutively. For instance, we can first cut an object with large cuts, and then do small cuts subsequently. The video ends with another clap for the end of video detection and synchronization across different cameras. Henceforth, we collect video data along with annotated states for each participant, without extra effort of annotations. More details and statistics of dataset are shown in Figure 3. Average video clip length (one state change for an object) is 1m40s. The distribution is shown in Fig. 4(a). ## 4 Compositional Image Generation Large-scale deep generative models [49, 52, 54] trained on open-world big datasets have made significant breakthroughs in image generation in the last couple of years. These models, are typically conditioned using a text encoder and also support tasks such as zero-shot image generation, inpainting, image editing, and super-resolution without explicit training on these tasks. 
However, the performance of these models significantly degrades when it comes to compositional generation [10]. Our dataset, consisting of 112 real-world object and state combinations, is well-suited to test the compositional capabilities of generative models. **Task Description.** The goal of the task is to either train from scratch or fine-tune an existing generative model using the (object, state) pairs provided in the training, and generate images from unseen compositions. We consider all 20 objects, each object captured in up to 7 different states, _i.e_., all the states excluding peel. We split the (object, state) combinations into a training set consisting of 87 combinations and a test set consisting of 25 combinations. The training set covers all objects and states used in our dataset, but it does not overlap with the test set in terms of (object, state) combinations. In other words, for each combination of object and state present in the test set, the training set includes exactly one of either the object, or the state, but not both. We also ensure that for each (object, state) combination \((o,s_{i})\) in the test set, there exists a combination \((o,s_{j})\) in the training set, where \(s_{i}\) and \(s_{j}\) belong to the same state related group defined in Section 3.1. This setting ensures that all object and state information are available in the training set. Each combination in our dataset has 8-12 images, resulting in a total of 1032 images in the training set and 296 images in the test set. The exact split is provided in the appendix along with some examples. ### Methods **Stable Diffusion. (SD)** We evaluate a popular open-source text-to-image generative model Stable Diffusion (SD) [52]. For details on the SD, refer to the original work [52]. Here we briefly describe the sampling process. Diffusion models generate an image from Gaussian noise via an iterative denoising process. SD uses classifier-free guidance [21] for sampling. This means given a text prompt \(\mathbf{c}\), we encode the prompt using CLIP's text classifier [48] and recursively update a Gaussian noise sample with \[\omega\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},\mathbf{c})+(1-\omega)\mathbf{ \epsilon}_{\theta}(\mathbf{x}_{t}) \tag{1}\] where \(\mathbf{x}_{t}\) is the denoised sample at the time step \(t\) and \(\mathbf{\epsilon}_{\theta}\) is SD. With each time step, we try to move the denoised sample using the guidance provided by the text prompt. The strength of the guidance is defined by \(\omega\). As our first baseline approach, we sample zero-shot images from SD with a text prompt "An image of \(o_{i}\) cut in \(s_{j}\) style", where \(o_{i}\) is the \(i^{th}\) object and \(s_{j}\) is the \(j^{th}\) state of the object. Zero-shot generation with a pre-trained SD model doesn't work as intended as shown in Figure 5, and the generated images often perform poorly in capturing the \begin{table} \begin{tabular}{l|c|c|c c|c c} \hline \hline Method & Patch & User & Classifier Acc. (\%) & \multicolumn{2}{c}{User Acc. 
(\%)} \\ & FID \(\downarrow\) & Realism \(\uparrow\) & Object \(\uparrow\) & State \(\uparrow\) & Object \(\uparrow\) & State \(\uparrow\) \\ \hline Real Images & - & 4.65 & 87.5\({}^{*}\) & 92.0\({}^{*}\) & 73.6 & 84.0 \\ \hline SD & 178.0 & 3.41 & **73.1** & 27.9 & **81.6** & 28.8 \\ SD+T1 & 145.0 & 2.58 & 23.6 & 37.7 & 21.6 & 43.2 \\ DreamBooth & 139.9 & 3.56 & 53.5 & 74.2 & 61.6 & 72.8 \\ SD+FT & 88.9 & **3.78** & 70.5 & 67.7 & 72.0 & 65.6 \\ SD+FT+T1 & **82.2** & 3.47 & 67.8 & **81.4** & 67.2 & **79.2** \\ \hline \hline \end{tabular} \end{table} Table 2: **Compositional generation evaluation.** FID, user scores, and classifier scores of various generative models. User Realism is on a scale of 1-5. (\(\star\)) denotes that accuracies are evaluated on a seen data split. **Bold** represents the best result. Figure 4: **(a)** The clip length distribution for one camera (315 unique clips). **(b)** Preliminary results of using green screen to augment the dataset with different backgrounds. We continue to improve the transfer results by adding shadows and background matting. object state. Several recent works have shown that it is possible to extend models such as SD to achieve high-quality customized generations [13, 53, 75]. We evaluate several methods that have been proposed for compositional generation in the recent literature. We also propose a simple yet strong baseline by fine-tuning a Stable Diffusion (SD) model [52] along with textual inversion. **SD + Textual Inversion (TI).** Textual Inversion [13] introduces new tokens in the vocabulary and optimizes their embedding from the given images keeping SD frozen. We adapt the method for our task by introducing new tokens for the objects \(\{o_{i}\}\) and the states \(\{s_{j}\}\), and jointly optimize the embeddings of \(\{o_{i}\}\cup\{s_{j}\}\) by providing (image, prompt) pairs from our training data. As before, the prompt is simply constructed as "An image of \(o_{i}\) cut in \(s_{j}\) style". **DreamBooth.** Next, we adapt DreamBooth [53], which fine-tunes the diffusion model along with the state-specific tokens. In our experiments, we fine-tune one model for each state in the dataset, where only the state token is learned. Original DreamBooth optimizes the diffusion loss as well as a prior preservation loss [53]. We observed that the latter significantly deteriorates the performance thus we skip it. **SD + Fine-tuning (FT).** We also fine-tune SD. In this baseline, only the parameters in the UNet of the diffusion model are optimized while keeping the text encoder fixed. **SD + TI + FT.** Finally, we combine SD fine-tuning and Textual Inversion [13]. Specifically, on top of our SD + Fine-tuning baseline, we also adapt Textual Inversion by introducing new object tokens and state tokens and optimizing their embeddings along with the UNet parameters. ### Evaluation We use both qualitative and quantitative measures to evaluate the capabilities of different methods. This section explains the details of different evaluation metrics we used: **Patch FID.** Frechet Inception Distance (FID) [20] is a commonly used metric to assess the quality of generative models. Given a set of real images and a set of generated images, FID compares the mean and std of Inception-v3 features of the two sets. For each composition and generative model, we compute patch FID using all real and 16000 generated patches, and report the average number for the test pairs. 
We hypothesize that using patch FID gives more weight to the object-state patches, rather than the whole image, which includes almost 50% background pixels. We further calculate the lower bound for patch FID score by computing it between two sets of real images. Any score lower than that for this dataset can be disregarded as irrelevant. The determined lower bound for the patch FID score is 37.2. **Object/State Accuracy using a Classifier.** To evaluate the correctness of objects and states in the generated images, we train a classifier on real images for classifying objects and states independently. This classifier is built on top of CLIP-ViT-B/32 [48]. Classification logits are obtained by computing the cosine similarity between the image embedding and text embeddings of all possible state labels or object labels. To ensure the reliability of the classifier's results, we train it on the training set from a different dataset split, where all (object, state) combinations are present. **User Study.** We conducted a user study to evaluate the generated images. We took images from the test set as well as samples from our generative models and present them to 30 users. Each user was presented with 25 distinct images, randomly sampled with an even distribution from our models and the test set. After giving a tutorial to the users about the different objects and states present in our experiments, the users were asked to choose an appropriate object name and state label, as well as rate the image for realism on a scale of 1-5. We report the object and state accuracies as well as realism score in Table 2. The details of our user study design can be found in the appendix. Figure 5: **Compositional Generation Samples.** Ground Truth (GT) real images are shown in the first row for reference. Seven object-state combinations in the test set are displayed, each with two generated samples for each method. Please zoom in to see details. ### Results and Discussion **Qualitative Results.** Fig. 5 displays the generated images from various methods for seven (object, state) combinations in the test set. The first row of the figure exhibits the ground truth real images for reference. We observe that vanilla SD often generates correct objects in random states, while SD+TI frequently synthesizes images without displaying the object. DreamBooth performs better than SD+TI, but worse than a simple finetuning of SD. SD+FT and SD+FT+TI perform well in terms of state generation. **Quantitative Results.** Table 2 displays the performance of all baseline methods evaluated according to the metrics outlined in Section 4.2. Assessing image realism is a crucial evaluation metric for generative models; however, defining and measuring it can be challenging. Note that the patch FID values and user realism ratings do not align well. This is due to the disparity between the distribution of images in our dataset and that of typical occurrence of those objects in the real world. The patch FID metric measures the similarity between the generated images with those in our dataset, instead of the ones most typical in real world. In particular, our results indicate that SD achieves the worst patch FID score since it has not encountered our dataset before, whereas its user realism rating is more satisfactory. SD+TI has the lowest user realism rating and a poor patch FID score, which suggests that only training object/state embeddings is inadequate for generating high-quality images. 
DreamBooth receives a good user realism rating but a poor patch FID, indicating that the images it generates are realistic but not very similar to those in our proposed dataset. Finally, fine-tuning via both SD+FT and SD+FT+TI achieve better results for patch FID and user realism. We next evaluate the accuracy of objects and states in generated images. It is worth noting that the classification task on our dataset is intrinsically difficult, which leads to imperfect user accuracy on real images. In general, the accuracy scores from classifier closely align with one from users, indicating that the proposed classifier is suited for evaluating compositional generation. Our results show that SD achieves the best object accuracy but the worst state accuracy. This is possibly due to the lack of state variations in most existing large image datasets. SD+TI is the worst performer due to its limited learning capacity. On the other hand, DreamBooth, SD+FT, and SD+FT+TI attain better state accuracy. Among them, DreamBooth's object accuracy is slightly worse as it is particularly trained for states. SD+FT achieves high object accuracy, and SD+FT+TI attains the best state accuracy with the help of fine-tuning and textual inversion together. **Green Screen Removal.** One of the main challenges for understanding fine-grained object-state pairs with existing datasets such as MIT-states [25] is diverse backgrounds. Using them for training often leads to the model latching on to unwanted background details and missing out on the state understanding. Hence, we collected ChopNLearn with a clean green screen background for the benchmark tasks. While we acknowledge the limitations it poses to our trained models, we highlight that the green screen can potentially enhance our ability to generalize to diverse scenes. This can be achieved by segmenting out images and placing various backgrounds, along with scaled and rotated object-state images (Figure 4). As a proof-of-concept, we train a SD+FT+TI model on background-augmented images, and report the Patch FID, classifier object accuracy and state accuracy in Tab. 4. Note that here we employ a newly trained classifier that uses background-augmented images, and the patch FID scores are also computed based on these images. We further reference the lower bound of the patch FID as defined in Section 4.2. Due to the complex backgrounds introduced, the object accuracy and the patch FID of the new model are slightly compromised. However, it maintains a high and even improved state accuracy. This demonstrates the potential of the background-augmented ChopNLearn in enhancing fine-grained compositional image generation. 
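As a rough illustration of the CLIP-based evaluation protocol of Section 4.2, the sketch below scores an image against object and state label prompts by cosine similarity. Note that the classifier used in the paper is additionally trained on real images from a separate split, whereas this sketch only shows zero-shot CLIP scoring; the label lists, prompt templates, and file names are illustrative assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

objects = ["apple", "orange", "carrot"]           # illustrative subset of the 20 objects
states = ["whole", "large cut", "round slices"]   # illustrative subset of the cut styles

def classify(image_path, labels, template):
    """Return the label whose text embedding is most cosine-similar to the image embedding."""
    image = Image.open(image_path)
    prompts = [template.format(label) for label in labels]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return labels[int((img @ txt.T).argmax())]

pred_object = classify("generated_sample.png", objects, "a photo of a {}")
pred_state = classify("generated_sample.png", states, "a photo of an object cut in {} style")
print(pred_object, pred_state)
```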
## 5 Compositional Action Recognition Human actions often change object states and different objects can have diverse visual transitions even when sub \begin{table} \begin{tabular}{l l c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{Split 1} & \multicolumn{3}{c}{Split 2} & \multicolumn{3}{c}{Split 3} \\ \cline{3-14} & & \multicolumn{3}{c}{Start} & \multicolumn{3}{c}{End} & \multicolumn{3}{c}{Start} & \multicolumn{3}{c}{End} & \multicolumn{3}{c}{Start} & \multicolumn{3}{c}{End} \\ \cline{3-14} Model & Features & acc@1 & acc@3 & acc@1 & acce@3 & acc@1 & acce@3 & acc@1 & acce@3 & acc@1 & acc@3 & acc@1 & acc@3 \\ \hline AvgPool & I3D [6] & 9.5 & 23.7 & 4.7 & 14.2 & 8.3 & 21.9 & 5.2 & 19.8 & 15.9 & 28.5 & 4.8 & 22.3 \\ LSTM [22] & I3D [6] & 14.2 & 36.2 & 5.7 & 29.8 & 12.5 & 29.2 & 6.2 & 26.0 & 17.5 & 34.9 & 6.3 & 23.7 \\ Transformer [65] & I3D [6] & 23.7 & 49.0 & 10.9 & 44.3 & 27.5 & 46.2 & 14.6 & 44.2 & 20.6 & 42.9 & 11.1 & 44.4 \\ \hline AvgPool & MIL-NCE [39] & 11.1 & 31.6 & 4.8 & 28.4 & 9.4 & 17.7 & 5.2 & 13.5 & 14.2 & 41.4 & 12.8 & 41.4 \\ LSTM [22] & MIL-NCE [39] & 15.9 & 36.5 & 6.4 & 36.6 & 11.9 & 36.7 & 9.8 & 36.7 & 18.9 & 39.6 & 8.0 & 25.4 \\ Transformer [65] & MIL-NCE [39] & 50.9 & 85.7 & 47.7 & 76.2 & **56.2** & 82.3 & 52.7 & 88.5 & 41.1 & 74.6 & 42.9 & 77.7 \\ \hline STLT [47] & – & 2.8 & 15.5 & 1.4 & 8.4 & 1.4 & 13 & 1.4 & 11.6 & 4.2 & 14.1 & 1.4 & 11.3 \\ Transformer [65] & R3D [15] & 45.1 & 85.9 & 52.1 & 85.9 & 55.1 & 94.2 & **58.0** & 92.8 & 59.1 & 85.9 & 56.3 & 85.9 \\ CAF [47] & R3D [15] & **53.5** & **88.7** & **57.8** & **88.7** & 55.1 & **95.7** & **58.0** & **95.7** & **62.0** & **93.0** & **63.4** & **93.0** \\ \hline \hline \end{tabular} \end{table} Table 3: **Compositional action recognition results. “Start/End” denote the prediction results for the initial and the final state composition with the corrected object type. Bold and underline represent the top-1 and top-2 results.** jected to the same action type. To investigate this problem in a more intuitive manner, [36] introduced a new task of compositional action recognition, which targets at improving the robustness of models to handle the same actions with different objects involved. For example, given an action of 'taking something out from something', the model is trained on a limited set of objects and is tested on unseen types of objects to access its generalizability. Hence, despite the same underlying action, the object and visual features can be quite diverse. Similarly, the composition of the same action with different object types can look very distinctive. For instance, although cutting an carrot and a apple require similar knife movements, the resulting visual changes are distinct, with the former changing from a whole apple to a peeled apple, and the latter changing from a whole carrot to a peeled carrot. Therefore, we propose to use our dataset for the task of compositional action recognition, which can also be referred to as Compositional Zero-Shot Action Recognition, as the compositions of objects and states are unseen during training. **Task Description.** For this task, we consider each clip of a video as containing a single object with a single state transition. From the raw videos, which typically contain 2-3 transitions of object states per video, we segment the clips into isolated ones with only one transition. Examples of transitions include changing from a whole object to a peeled object or from a peeled object to a baton cut object. 
Similar to [36], we divide all object-final state compositions into two sets: seen compositions, which are used for training, and unseen compositions, which are used for testing. Following the approach used in the Compositional Image Generation task, we ensure that each object and state are seen at least once individually in the training set, but not together as a composition. The objective of the task is to predict the correct labels for the initial object-state composition \((o_{i},s_{j})\) and the final composition \((o_{i},s_{k})\), given a clip containing an object \(o_{i}\) transitioning from an initial state \(s_{j}\) to a final state \(s_{k}\). Note that the clip is considered correctly classified only if both the object and state labels are correct for both the initial and final compositions. ### Dataset Splits We create 3 different dataset splits as follows (more details are in the Appendix). All splits have disjoint train, test and validation samples, and are created with different constraint combinations: * **Split 1:** This split is a random selection of object-final state compositions with cross-view condition. We do not use any information from related groups. * **Split 2:** In this split, we use related group information for states, along with cross-view. based on related groups, if baton carrots is seen in training set, then julienne carrots can be part of test set. Since baton and julienne are part of the same related group, we can learn an object in one style and can generalize to another style from the same group in Section 3.1. * **Split 3:** This split includes information from both related groups for states and objects. We want to ensure that even if an object is not seen in its related group, a similar object is seen in the related group. For example, if broccoli is seen with large cuts, then cauliflower with large or small cuts can be in the test set. Hence different splits represent different complexity levels for compositional action recognition. **Evaluation.** We evaluate the accuracy of predicting both the initial and final compositions of objects and states in the test set. Only when the object and state are both correct, it is counted as a correct prediction. Specifically, we use two separate prediction heads for objects and states. We emphasize the need to evaluate composition as a whole, rather than just predicting the state, as the way an apple is cut can differ significantly from the way a bellpepper is cut. Therefore, accurately recognizing both the object and state is crucial for tasks related to understanding and generating videos of object states. We also recognize the importance of top@3 accuracy, since object states can sometimes be visually similar, leading to confusion in detecting the correct composition. For example, julienne apple can be visually very similar to julienne potato. ### Results To evaluate our proposed method, we establish baselines using both traditional architectures and features for video action classification, as well as comparing with recent works in compositional action recognition. As shown in Table 3, in the first section, we use pre-extracted I3D[6] features and conduct experiments by comparing simple average pooling, LSTM, and multi-layer Transformer [65] model. It shows that the Transformer model performs the best among these variants due to the great capacity of temporal modeling ability. 
In the second section, we also experiment with more recent pre-trained features MILNCE [39] along with transformer models, which outperforms I3D features. MIL-NCE [39] features are pre-trained on HowTo100M [40] with multimodal (RGB+narrations) setup, which is more robust for video downstream tasks. \begin{table} \begin{tabular}{l|c c|c|c} \hline Data & Classifier Acc. (\%) & Patch FID \(\downarrow\) & Patch FID \\ Background & Object \(\uparrow\) & State \(\uparrow\) & & Lower Bound \\ \hline Green Screen & 67.8 & 81.4 & 82.2 & 37.2 \\ Various & 46.3 & 82.3 & 133.6 & 46.4 \\ \hline \end{tabular} \end{table} Table 4: **Green screen removal evaluation.** Both rows employ the SD+FT+TI but are trained using images with varying backgrounds. Classifiers specific to each dataset are trained to assess Classifier Acc. Validation images used to calculate Patch FID differ between the two rows. Patch FID Lower Bound is computed by evaluating the patch FID on one-half of the validation images relative to the other half. For further details, refer to Section 4.3. In the final section of Table 3, we employ the state-of-the-art compositional video recognition model proposed in [47] and use pseudo labels of bounding boxes for each hand and object, as there are no ground-truth hand and object trajectories available. Specifically, the Spatial-Temporal Layout Transformer (STLT) [47] takes in the spatio-temporal locations and class labels for each bounding box as input, uses positional embeddings to project coordinates and class labels into features, and adds transformer layers to model spatial-temporal relationships. However, without any appearance information, STLT achieves low performance on all metrics. On the other hand, with the appearance features, which are extracted by inflated 3D ResNet50 [27] (R3D), it can achieve much higher performances than STLT. Finally, Cross-Attention Fusion (CAF) applies cross-attention [63] to fuse the layout (STLT) and appearance (R3D) branch embeddings, achieving the best results. It demonstrates that combining the layout and appearance information together can help predict object and state types more accurately. ## 6 Discussion We discuss the potential future use of ChopNLearn, while addressing the limitations and scope as well. **Long-term Video Parsing.** We use compositional state recognition to further understand the temporal dynamics [11, 16, 17, 18] with the aid of a video parsing graph construction as previously explored in Ego-Topo [42] and VideoGraph [23]. Each clip in the training set has one state transformation (top example in Figure 6). We visualize the class activation maps corresponding to the most salient intermediate state transitions with Grad-CAM [57], to learn the transition in each frame of the video for training data. This is illustrated as a graph for a training video. Having learned multiple single transformations, we can now extend this knowledge to understand long activities, with multiple transitions. As shown in Fig. 6, we can learn state changes for orange from large cut \(\rightarrow\) small cut using our training clip. Given a long unseen video with multiple clips, we can construct a state-transition graph to represent changes in state for a watermelon. Hence, by using an extensive array of videos, the process of learning transitions between individual states can be extended to encompass transitions between multiple states. 
This enables the creation of a self-supervised transition knowledge graph for comprehensive long-term video comprehension, as demonstrated in [11, 69]. **Limitations.** With advent of foundation models, few-shot generalization is an increasingly important task. In this work, we explore the potential of ChopNLearn for the research in compositional generation and recognition for highly complex and interdependent concepts. Admittedly, ChopNLearn is a small scale dataset with green screen background, which restricts the models trained on it to have specific biases. Nonetheless, this is the first attempt to understand how fine-grained states (cut styles) can be transferred to diverse objects. We explore this by using ChopNLearn as a test set for larger models, fine-tuning these models using ChopNLearn and trying them with or without a green screen background. We further see the potential of using ChopNLearn for benefiting the community in even more challenging tasks such as 3D reconstruction, video frame interpolation, state change generation, _etc_. ## 7 Conclusion In this paper, we propose ChopNLearn, a new dataset for measuring the ability of models to recognize and generate unseen compositions of objects in different states, a skill known as compositional generalization. We also introduce two tasks, Compositional Image Generation and Compositional Action Recognition, and benchmark the performance of state-of-the-art generative models and video recognition methods on these tasks. We show the challenges with the existing approaches and their failure in some cases in their ability to generalize to new compositions. However, these two tasks are just the tip of the iceberg. Understanding object states is important for multiple image and video tasks such as 3D reconstruction, future frame prediction, video generation, summarization, and parsing of long-term video. We hope that our dataset will help the computer vision community to propose and learn new compositional tasks for images, videos, 3D, and beyond. **Acknowledgements.** The authors would like to dedicate this paper to the memory of Vinoj Jayasundara. His creativity, contributions and enthusiasm for the field of Computer Vision will continue to inspire us. We would also like to thank Snehesh, Chahat, Kanishka, and Pulkit for their valuable conversations during data collection. This work was partially funded by DARPA SAIL-ON (W911NF2020009) program and NSF CAREER Award (#2238769) to AS. Figure 6: **Video parsing graph:** For a given video, we use Grad-CAM[57] on the intermediate frames to identify and visualize the class activation maps corresponding to the most salient states. Top: A training video clip has one transition of orange from large cut \(\rightarrow\) small cut. Bottom: We can learn single transitions from training data, to generalize transitions in a long video with multiple state changes and parse the video as a graph.
2309.07423
ChatGPT MT: Competitive for High- (but not Low-) Resource Languages
Large language models (LLMs) implicitly learn to perform a range of language tasks, including machine translation (MT). Previous studies explore aspects of LLMs' MT capabilities. However, there exist a wide variety of languages for which recent LLM MT performance has never before been evaluated. Without published experimental evidence on the matter, it is difficult for speakers of the world's diverse languages to know how and whether they can use LLMs for their languages. We present the first experimental evidence for an expansive set of 204 languages, along with MT cost analysis, using the FLORES-200 benchmark. Trends reveal that GPT models approach or exceed traditional MT model performance for some high-resource languages (HRLs) but consistently lag for low-resource languages (LRLs), under-performing traditional MT for 84.1% of languages we covered. Our analysis reveals that a language's resource level is the most important feature in determining ChatGPT's relative ability to translate it, and suggests that ChatGPT is especially disadvantaged for LRLs and African languages.
Nathaniel R. Robinson, Perez Ogayo, David R. Mortensen, Graham Neubig
2023-09-14T04:36:00Z
http://arxiv.org/abs/2309.07423v1
# ChatGPT MT: Competitive for High- (but not Low-) Resource Languages ###### Abstract Large language models (LLMs) implicitly learn to perform a range of language tasks, including machine translation (MT). Previous studies explore aspects of LLMs' MT capabilities. However, there exist a wide variety of languages for which recent LLM MT performance has never before been evaluated. Without published experimental evidence on the matter, it is difficult for speakers of the world's diverse languages to know how and whether they can use LLMs for their languages. We present the first experimental evidence for an expansive set of 204 languages, along with MT cost analysis, using the FLORES-200 benchmark. Trends reveal that GPT models approach or exceed traditional MT model performance for some high-resource languages (HRLs) but consistently lag for low-resource languages (LRLs), under-performing traditional MT for 84.1% of languages we covered. Our analysis reveals that a language's resource level is the most important feature in determining ChatGPT's relative ability to translate it, and suggests that ChatGPT is especially disadvantaged for LRLs and African languages. ## 1 Introduction Despite the majority of the world's languages being low-resource, current MT systems still perform poorly on them or do not include them at all. Some commercial systems like Google Translate1 support a number of LRLs, but many systems do not support any, and in either case the majority of LRLs are largely neglected in language technologies. Footnote 1: [https://translate.google.com](https://translate.google.com) In recent years, generative LLMs have shown increasingly impressive translation abilities (Radford et al., 2019; Brown et al., 2020). Even more recently, LLM tools like ChatGPT have become popular and accessible to end users. This marks an important shift, since a majority of LLM users are now consumers rather than researchers. The prospect of LLM translation is exciting, since theoretically, generative LLMs could support more languages than commercial systems like Google's.2 But only beginning steps have been made to test this hypothesis. While some studies outlined in SS4 have evaluated MT with recent LLMs, evaluation is still lacking for many languages. This brings up important questions, such as: _Can end users in need of MT for a variety of languages use ChatGPT? Are ChatGPT and other LLMs reliable translators? For which languages are they reliable?_ Initially we hypothesize that LLMs translate HRLs better than LRLs. But due to limited information about the training data and methods for powerful LLMs like ChatGPT (**GPT-3.5** and variants) and GPT-4, hypotheses like this must be experimentally verified. Footnote 2: Google Translate currently supports only 133 languages with systems deemed high enough quality for deployment. We attempt a significant expansion of experimental verification for such hypotheses by testing ChatGPT's performance on the FLORES-200 benchmark (Team et al., 2022), containing 204 language varieties.We emphasize that, rather than optimizing LLM MT for a few languages, we focus on helping end users of various language communities know how and when to use LLM MT. We expect that our contributions may benefit both direct end users, such as LRL speakers in need of translation, and indirect users, such as researchers of LRL translation considering ChatGPT to enhance specialized MT systems. In summary, we contribute: 1. 
MT scores on 203 languages for ChatGPT and comparisons with GPT-4, Google Translate, and NLLB (Team et al., 2022) 2. Evidence that LLMs are competitive with traditional MT models for many HRLs but lag for LRLs (with baselines outperforming ChatGPT on 84.1% of languages evaluated) 3. Evidence that few-shot prompts offer marginal benefits for LLM translation 4. A decision tree analysis of language features' correlation with LLM effectiveness in MT, suggesting ChatGPT is especially disadvantaged for LRLs and African languages 5. A cost comparison across MT systems Our experiments are motivated by the interests of LLM users speaking a variety of languages. In addition to evaluating a large language set (SS3), we chose to analyse language features (SS3.4), to draw generalizations for even more LRL speakers. We compare MT costs because they impact end users (SS3.7). We keep ChatGPT central to our analyses because of its current popularity among consumers. ## 2 Methodology We used data for 204 language varieties from FLORES-200 (Team et al., 2022). We used the 1012 _devtest_ sentences for our main experiments and the 997 _dev_ sentences for follow-up experiments. We queried the OpenAI API3 to translate our test set from English into the target languages. We explored ENG\(\rightarrow\)X translation only because the FLORES-200 English data was taken from Wikipedia. Thus OpenAI's GPT models were likely trained on those exact English sentences, making fair X\(\rightarrow\)ENG evaluation infeasible. Footnote 3: [https://platform.openai.com](https://platform.openai.com) ### Experimental setup We evaluated ChatGPT's (gpt-3.5-turbo) MT for our full language set. We compared with NLLB-MOE (Team et al., 2022) as our baseline, as it is the current state-of-the-art open-source MT model that covers such a wide variety of languages. NLLB is a discriminative transformer trained on supervised bi-text data (the traditional MT paradigm). We obtained scores for NLLB outputs of ENG\(\rightarrow\)X translation into 201 of the language varieties in our set (as reported by Team et al. (2022)). We used both zero- and five-shot prompts for ChatGPT MT. (See SS2.3.) Previous studies (Hendy et al., 2023; Gao et al., 2023; Moslem et al., 2023; Brown et al., 2020; Zhu et al., 2023) suggest that few-shot prompts produce slightly (albeit not consistently) better translations. But zero-shot prompts are more convenient and affordable for users. We also compare with results for subsets of our selected languages from two other MT engines. Google Translate API was an important baseline for our analysis because it is popular among end users. We also included it to represent commercial MT systems in our study. Because Google's API does not support all 204 of the FLORES-200 languages, we obtained results only for the 115 non-English languages it supports. Lastly, we obtained MT results from GPT-4, since it is a popular LLM and has been shown to outperform ChatGPT on MT (Jiao et al., 2023; Wang et al., 2023). Because the cost of GPT-4 use exceeds that of ChatGPT by 1900%, our resources did not permit its evaluation on all 203 non-English languages. Instead we selected a 20-language subset by picking approximately every 10th language, with languages sorted by chrF++ differentials between ChatGPT and NLLB (\(chrf_{GPT}-chrf_{NLLB}\)). We chose this criterion in order to have 20 languages with a range of relative ChatGPT performance and a variety of resource levels. We used only five-shot prompts for GPT-4. 
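For concreteness, the sketch below shows how a zero-shot ENG\(\rightarrow\)X request of the kind described above can be issued through the OpenAI chat API. It is illustrative only: it assumes the 0.x `openai` Python client that was current for these models, the helper name `translate_sentence` and the example target language are ours, and the decoding parameters follow the implementation details given below.

```python
# Minimal sketch of a zero-shot ENG->X translation request (illustrative only).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def translate_sentence(src_sentence: str, tgt_language: str) -> str:
    # Zero-shot prompt following the wording reported in Table 1.
    prompt = (
        f"This is an English to {tgt_language} translation, please provide the "
        f"{tgt_language} translation for this sentence. Do not provide any "
        f"explanations or text apart from the translation.\n"
        f"English: {src_sentence}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,  # decoding parameters as stated in the implementation details
        top_p=1,
        max_tokens=500,
    )
    return response["choices"][0]["message"]["content"].strip()

print(translate_sentence("The cat sat on the mat.", "French"))
```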
### Implementation details We conducted all LLM experiments with gpt-3.5-turbo (ChatGPT) and gpt-4-0613 (GPT-4). We used top_p 1, temperature \(0.3\), context_length \(-1\), and max_tokens4\(500\). Footnote 4: Although some languages had higher token counts than others (see §3.4), we found that adjusting max_tokens had a minimal effect on MT performance. We thus decided to maintain the same value of max_tokens across all languages for experimental consistency. To evaluate the outputs, we used:5 Footnote 5: We excluded learned MT metrics like COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020), since they do not support many LRLs. **spBLEU**: BLEU (Papineni et al., 2002) is standard in MT evaluation. We find spBLEU scores (Goyal et al., 2022) via sacreBLEU (Post, 2018) with the SPM-200 tokenizer (Team et al., 2022). **chrF2++**: We use sacreBLEU's implemantation of chrF++ (Popovic, 2017). We adopt it as our main metric, as it overcomes some of BLEU's weaknesses, and refer to it as \(chrF\) for brevity. ### Zero- and few-shot prompts Previous works (Gao et al., 2023; Jiao et al., 2023) investigated LLM prompting to optimize MT performance. We adopt Gao et al. (2023)'s recommended prompts for both zero- and few-shot MT (Table 1). We are interested in multiple \(n\)-shot prompt settings because, as mentioned in SS2.1, they present different benefits to LLM users. We explored zero-shot (no in-context example), one-shot (1 example), and five-shot (5 examples). We employed both zero- and five-shot prompts in our main experiments over 203 languages, and we analyzed all three \(n\)-shot settings for a subset of languages on FLORES-200 _dev_ sets. The languages in FLORES-200 represent 22 language families. To experiment with multiple \(n\)-shot settings, we selected one language from each of the 12 families containing at least two members in the set. We chose four HRLs (\(\geq\)1M Wikipedia pages6), four LRLs (25K-1M pages), and four extremely LRLs (\(\leq\)25K pages). These languages also employ a variety of scripts. See Table 2. Footnote 6: Throughout the paper we use the “Total pages” count from [https://en.wikipedia.org/wiki/List_of_Wikipedias](https://en.wikipedia.org/wiki/List_of_Wikipedias), accessed 7 August 2023, as a proxy for the resource level of a language. ## 3 Results and Analysis ### Traditional MT generally beats LLMs Table 3 shows the number of languages we evaluated for each MT system, as noted in SS2.1, with average chrF and BLEU scores across those languages. The best performing model on average was (1) Google, then (2) NLLB, (3) GPT-4, and (4) ChatGPT. Unabridged results are in Table 11 in Appendix A. Supplementary materials can also be browsed on our repository.7 (Also see the interactive score visualizer on our Zeno browser.8) Footnote 7: [https://github.com/cmu-llab/gpt_mt_benchmark](https://github.com/cmu-llab/gpt_mt_benchmark) Footnote 8: [https://hub.zenonl.com/project/cabreralex/GPTX20MTX20Benchmark](https://hub.zenonl.com/project/cabreralex/GPTX20MTX20Benchmark) Table 4 shows a meaningful subset of scores: chrF for the 20 languages evaluated on both LLM systems. Of the 11 languages evaluated on all four systems, Google performed best for 10 of them. Notably, GPT-4 surpassed NLLB in five languages and Google in one (Moroccan Arabic, acm_Arab). On the 20 languages for which we tested it, GPT-4. \begin{table} \begin{tabular}{l l} Shot & Prompt \\ \hline zero & This is an English to [TGT] translation, please provide the [TGT] translation for this sentence. 
Do not provide any explanations or text apart from the translation. \\ & [SRC]: [src-sentence] \\ five & This is an English to [TGT] translation, please provide the [TGT] translation for these sentences: \\ & [SRC]: [src-sentence] [TGT]: [tgt-sentence] \\ & [SRC]: [src-sentence] [TGT]: [tgt-sentence] \\ & [SRC]: [src-sentence] [TGT]: [tgt-sentence] \\ & [SRC]: [src-sentence] [TGT]: [tgt-sentence] \\ & Please provide the translation for the following sentence. \\ & Do not provide any explanations or text apart from the translation. \\ & [SRC]: [src-sentence] \\ & [TGT]: \\ \end{tabular} \end{table} Table 1: Prompts used for zero- and five-shot settings \begin{table} \begin{tabular}{l c c c c} **Lang.** & **GPT-4** & **ChatGPT** & **Google** & **NLLB** \\ \hline sw\_Latin & 24.1 & 6.7 & - & 43.3 \\ sna\_Latin & 29.2 & 16.3 & **44.4** & 43.4 \\ ckb\_Arab & 33.1 & 24.8 & **47.7** & 47.2 \\ mag\_Deva & 44.6 & 39.9 & - & 58.5 \\ ibo\_Latin & 27.7 & 16.3 & **43.5** & 41.4 \\ hau\_Latin & 40.3 & 22.4 & **53.2** & 53.5 \\ pbt\_Arab & 26.7 & 21.1 & - & 39.4 \\ tam\_Taml & 42.7 & 34.5 & **55.8** & 53.7 \\ kat\_Geor & 41.4 & 33.5 & **51.4** & 48.1 \\ gle\_Latin & 53.0 & 47.5 & **60.1** & 58.0 \\ kmr\_Latin & 34.3 & 27.4 & **40.0** & 39.3 \\ war\_Latin & 54.0 & 49.5 & - & 57.4 \\ aip\_Arab & 48.4 & 47.5 & - & 51.3 \\ lini\_Latin & 45.1 & 42.7 & - & 47.9 \\ ukr\_Cyrl & 56.3 & 55.4 & **58.6** & 56.3 \\ fra\_Latin & 71.7 & 71.3 & **72.7** & 69.7 \\ lvs\_Latin & 57.3 & 55.2 & - & 54.8 \\ ron\_Latin & **65.3** & 64.2 & 65.0 & 61.3 \\ ltp\_Latin & **49.5** & 39.2 & - & 41.6 \\ acm\_Arab & **46.5** & 46.1 & - & 31.9 \\ \hline \end{tabular} \end{table} Table 4: chrF (\(\uparrow\)) scores across models for all languages we used to evaluate GPT-4. Best scores are **bold**. ChatGPT scores here are 5-shot, to compare with GPT-4. \begin{table} \begin{tabular}{l c c} **Language** & **Code** & **Family** & **Script** & **Wiki. \#** \\ \hline French & fra & Indo-European & Latin & 12.7M \\ Chinese & zno & Sino-Tibetan & Hans & 7.48M \\ Turkish & tur & Turkic & Latin & 2.48M \\ Finnish & fin & Uralic & Latin & 1.46M \\ Tamil & tam & Dravidian & Taml & 496K \\ Tagalog & tgl & Austruscancan & Latin & 239K \\ Kiswahili & ssh & Niger-Congo & Latin & 167K \\ Amharic & amh & Afrosaatic & Ethi & 46.2K \\ Santali & sat & Austrosaatic & 01ck & 20.0K \\ Lao & lao & Kra-Dai & Lao & 14.0K \\ Papiamento & pap & Creole & Latin & 6.84K \\ Luo & luo & Nilo-Saharan & Latin & 0 \\ \end{tabular} \end{table} Table 2: Diverse subset of languages experiments with few-shot settings. **Wiki. #** is the number of Wikipedia pages in the language. 
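To make the metric setup concrete, the chrF++ and spBLEU scores reported throughout can be computed with sacreBLEU roughly as follows. This is a sketch with an invented hypothesis/reference pair; the `flores200` tokenizer name is an assumption and may be unavailable (or named differently, e.g. `spm`) in older sacreBLEU releases.

```python
# Sketch of the scoring step with sacreBLEU (illustrative hypothesis/reference pair).
import sacrebleu

hypotheses = ["Le chat était assis sur le tapis."]        # system outputs, one per segment
references = [["Le chat s'est assis sur le tapis."]]      # a single reference stream

# chrF++ = chrF with word n-grams up to order 2 (the main metric used here)
chrf = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)
print("chrF++:", round(chrf.score, 1))

# spBLEU = BLEU over SentencePiece (SPM-200) tokens; the tokenizer name below is an assumption.
spbleu = sacrebleu.corpus_bleu(hypotheses, references, tokenize="flores200")
print("spBLEU:", round(spbleu.score, 1))
```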
\begin{table} \begin{tabular}{l c c c} **GIT20** & **GPT-4** & **ChatGPT** & **Google** & **NLLB** \\ \hline **Saw\_Latin** & 24.1 & 6.7 & - & 43.3 \\ sna\_Latin & 29.2 & 16.3 & **44.4** & 43.4 \\ ckb\_Arab & 33.1 & 24.8 & **47.7** & 47.2 \\ mag\_Deva & 44.6 & 39.9 & - & 58.5 \\ ibo\_Latin & 27.7 & 16.3 & **43.5** & 41.4 \\ hau\_Latin & 40.3 & 22.4 & **53.2** & 53.5 \\ pbt\_Arab & 26.7 & 21.1 & - & 39.4 \\ tam\_Taml & 42.7 & 34.5 & **55.8** & 53.7 \\ kat\_Geor & 41.4 & 33.5 & **51.4** & 48.1 \\ gle\_Latin & 53.0 & 47.5 & **60.1** & 58.0 \\ kmr\_Latin & 34.3 & 27.4 & **40.0** & 39.3 \\ war\_Latin & 54.0 & 49.5 & - & 57.4 \\ aip\_Arab & 48.4 & 47.5 & - & 51.3 \\ lini\_Latin & 45.1 & 42.7 & - & 47.9 \\ ukr\_Cyrl & 56.3 & 55.4 & **58.6** & 56.3 \\ fra\_Latin & 71.7 & 71.3 & **72.7** & 69.7 \\ lvs\_Latin & 57.3 & 55.2 & - & 54.8 \\ ron\_Latin & **65.3** & 64.2 & 65.0 & 61.3 \\ tpl\_Latin & **49.5** & 39.2 & - & 41.6 \\ acm\_Arab & **46.5** & 46.1 & - & 31.9 \\ \hline \end{tabular} \end{table} Table 3: Languages evaluated, average chrF, and average BLEU for each MT system. Best scores are **bold**. 4 improved over ChatGPT by 6.5 chrF on average. The standard deviation of performance difference with NLLB (\(chrF_{GPT}-chrF_{NLLB}\)) was 8.6 for GPT-4, compared with ChatGPT's 12.7 for the same languages, suggesting a more consistent advantage across language directions. GPT-4 offered larger improvements for LRLs, whereas HRL performance plateaued between the LLMs. Previous studies have found GPT-4 improving multilingual capabilities over ChatGPT on a range of tasks (Xu et al., 2023; Zhang et al., 2023; OpenAI, 2023). This may account for its superior MT performance. Google Translate outperformed all other systems in chrF on 100 of the 115 languages for which we evaluated it, with an average improvement of 2.0 chrF points over the next best system for each language. (See Appendix A for unabridged results.) Google's was the best performing MT system overall, though NLLB has broader language coverage. NLLB outperformed ChatGPT in chrF on 169 (84.1%) of the 201 languages for which we obtained scores for both, with NLLB scoring an average of 11.9 chrF points higher than the better \(n\)-shot ChatGPT setting for each language. This trend is corroborated by Zhu et al. (2023). Table 5 has both BLEU and chrF scores from both systems for the five languages with the most negative chrF deltas (\(chrF_{GPT}-chrF_{NLLB}\)) on top, followed by the five languages with the highest positive deltas on bottom. For many of the subsequent sections of this paper we focus on comparing ChatGPT and NLLB, since we evaluted them on the most languages. ### ChatGPT under-performs for LRL Using Team et al.'s (2022) resource categorization, we find that ChatGPT performs worse on LRLs than HRLs, corroborating findings of previous works (Jiao et al., 2023; Zhu et al., 2023). There is a strong positive correlation between ChatGPT and NLLB chrF scores, but the correlation is higher for HRLs (\(\rho\)=0.85) than LRLs (\(\rho\)=0.78), indicating that ChatGPT struggles to keep up with NLLB for LRLs. Figure 1 shows scatter plots where dots represent languages, with ChatGPT's (positive or negative) _relative improvement_ over NLLB chrF (\(\frac{chrF_{GPT}-chrF_{NLLB}}{chrF_{NLLB}}\)) on the y-axis. When languages are grouped by family or script, some trends are apparent (in part because we ordered groups by descending average scores). 
For example, ChatGPT fairs better with Uralic and Indo-European languages and clearly worse with Niger-Congo and Nilo-Saharan languages. However, the clearest natural correlation appears when languages are grouped by resource level, approximated by number of Wikipedia pages (Figure 1, bottom). Note the _relative improvement_ (y-axis) is typically negative since ChatGPT rarely outperformed NLLB. In the five-shot setting, ChatGPT outperformed NLLB on 47% of the HRLs designated by Team et al. (2022), but only on 6% of the LRLs. These findings contrast with what is commonly observed in multilingual MT models (Liu et al., 2020; Fan et al., 2020; Siddhant et al., 2022; Bapna et al., 2022; Team et al., 2022), where LRLs benefit the most. This highlights the need to investigate how decoder-only models may catch up with encoder-decoder models in low-resource applications. It underscores the importance of smaller specialized models when large multitask models cannot overcome low-resource challenges. ### Few-shot prompts offer marginal improvement Our main experiments suggested that \(n\)-shot setting had only a modest effect on MT performance. We conducted a more concentrated study of \(n\)-shot prompts using \(dev\) sets for the 12 languages in Table 2. Results in Table 6 show five-shot prompts performing best. For some LRLs, this was simply a result of ChatGPT's failure to model the language. In Santali's case, for example, zero-shot ChatGPT was unable to produce the Ol Chiki script at all. In the five-shot setting, it was able to imitate the script characters from the context, but without any coherence or accuracy. Excepting Santali as an outlier, five-shot settings offered generally marginal improvements over zero-shot (the most cost-effective \begin{table} \begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{ChatGPT} & \multicolumn{2}{c}{NLLB} \\ **Lang.** & **BLEU** & **chrF** & **BLEU** & **chrF** \\ \hline srp\_Cylr & 1.36 & 3.26 & **43.4** & **59.7** \\ kon\_Latin & 0.94 & 8.50 & **18.9** & **45.3** \\ tso\_Latin & 2.92 & 15.0 & **26.7** & **50.0** \\ kac\_Latin & 0.04 & 2.95 & **14.3** & **37.5** \\ nso\_Latin & 3.69 & 16.7 & **26.5** & **50.8** \\ \hline jpn\_Jpan & **28.4** & **32.9** & 20.1 & 27.9 \\ nno\_Latin & **37.1** & **58.7** & 33.4 & 53.6 \\ zho\_Hans & **36.3** & **31.0** & 26.6 & 22.8 \\ zho\_Hant & **26.0** & **24.4** & 12.4 & 14.0 \\ acm\_Arab & **28.2** & **44.7** & 11.8 & 31.9 \\ \end{tabular} \end{table} Table 5: Lowest (top) and highest (bottom) chrF differences between zero-shot ChatGPT and NLLB. Best scores for each metric in **bold** (with BLEU **blue**). of the settings), with an average improvement of only 1.41 chrF across all 12 languages (0.31 if we exclude Santali). Zero-shot prompts actually produced the best chrF score for six of the 12 languages. The one-shot setting performed worst. We noted this trend of few-shot contexts offering only meager and inconsistent improvements throughout our experiments, with five-shot MT improving on zero-shot by only 0.88 average chrF across all 203 language directions. (See Appendix A.) ### Importance of language features We were interested in which language features determined LLMs' effectiveness compared to traditional MT. Analyzing this may reveal trends helpful to end users deciding which MT system to use, especially if their language is not represented here but shares some of the features we consider. In this section we focus on comparing ChatGPT and NLLB, since we evaluated the most languages with them. 
We focus on zero-shot ChatGPT, as it is the most common and convenient setting for end users. We encoded each of the 203 languages in our set as a _feature vector_. In these language _feature vectors_ we included **four numerical features**: number of Wikipedia pages in the language (wiki_ct), size of the language's bi-text corpus in the Oscar MT database9 (oscar_ct) (Abadji et al., 2022), percentage of ASCII characters10 in the FLORES-200 _dev_ set for the language (ascii_percentage), and average number of tokens per _dev_ set sentence in FLORES-200 with ChatGPT's tokenizer \begin{table} \begin{tabular}{l|c c|c c|c c} & \multicolumn{2}{c|}{0-shot} & \multicolumn{2}{c|}{1-shot} & \multicolumn{2}{c}{5-shot} \\ & **BLEU** & **chrF** & **BLEU** & **chrF** & **BLEU** & **chrF** \\ \hline fra & 55.4 & **71.3** & 50.4 & 70.3 & 55.4 & 71.2 \\ zho & 30.0 & 29.9 & 28.2 & 30.8 & **30.7** & **31.1** \\ fin & 34.6 & 56.6 & 31.7 & 56.3 & 34.6 & **56.7** \\ tur & 38.2 & **58.6** & 34.8 & 57.6 & **38.3** & 58.6 \\ tgl & 35.9 & **60.2** & 35.2 & 59.6 & **36.1** & 60.1 \\ tam & **13.8** & **35.3** & 11.7 & 34.3 & 11.9 & 34.6 \\ swh & 39.7 & **60.6** & 36.0 & 59.5 & **40.0** & 60.5 \\ amh & 3.4 & 10.1 & 3.2 & 9.6 & **3.9** & **10.6** \\ pap & 26.6 & 51.5 & 29.3 & 54.1 & **34.8** & **56.1** \\ lao & 4.8 & 21.6 & 4.4 & 20.8 & **5.3** & **22.1** \\ luo & **0.8** & **7.6** & 0.2 & 4.6 & 0.2 & 5.2 \\ sat & 0.0 & 0.3 & 2.2 & 11.3 & **3.0** & **13.8** \\ \end{tabular} \end{table} Table 6: Three \(n\)-shot settings for 12 diverse languages Figure 1: ChatGPT _relative improvement_ over NLLB chrF, with languages organized by family, script, and number of Wikipedia pages. Red stars represent averages per group. In the bottom plot, languages are grouped into quartiles of equal size (with dotted lines at the Q1, median, and Q3). More expansive visualizations with language labels for each value can be found in Appendix C. (token_ct). We also included **two categorical features**: language family (family) and script the language was written in (script); and **one binary feature**: the FLORES resource designation of the language-with 1 for high-resource and 0 for low-resource (hi/lo). Before analysis, we one-hot encoded the two **categorical features** into 48 binary features like family_Niger-Congo and script_Latin. We selected token_ct as a feature because we observed languages in low-resource scripts having many tokens. For example, ChatGPT's tokenizer encodes multiple tokens for every character in Ol Chiki script. This tendency for GPT models with low-resource scripts has been noted in previous studies [1]. We fit a decision tree with these _feature vectors_ to regress on ChatGPT's _relative improvement_ over NLLB in chrf (\(\frac{chrf_{GPT}-chrf_{NLLB}}{chrf_{NLLB}}\)), for each of the 201 languages with NLLB scores. When we used max_depth 3, the tree in Figure 2 was learned. Languages are delimited first by wiki_ct; then LRLs are separated into Niger-Congo languages and others, while HRLs are delimited by token_ct. The only group where ChatGPT beat NLLB is of languages with more than 58,344 Wikipedia pages, fewer than 86 tokens per average sentence, and less than 15.5% ASCII characters. This group contains some East Asian HRLs. The group where ChatGPT was least advantaged contains Niger-Congo languages with fewer than 3,707 Wikipedia pages. We also fit a random forest regressor with the same features and labels to find feature importance values. Only ten features had importance \(\geq 0.01\), shown in Table 7. 
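The feature analysis just described can be reproduced in outline with scikit-learn. The snippet below is a sketch over a randomly generated stand-in for the real 201-language feature table (column names mirror the features above, values are random); only the settings mentioned in the text, a depth-3 decision tree and a default random forest regressor, are taken from the description.

```python
# Sketch of the decision-tree / random-forest analysis on synthetic stand-in data.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "wiki_ct": rng.integers(0, 2_000_000, 201),
    "oscar_ct": rng.integers(0, 5_000_000, 201),
    "ascii_percentage": rng.random(201) * 100,
    "token_ct": rng.integers(20, 300, 201),
    "hi/lo": rng.integers(0, 2, 201),
    "family_Niger-Congo": rng.integers(0, 2, 201),
    "script_Latin": rng.integers(0, 2, 201),
    # label: relative improvement (chrF_GPT - chrF_NLLB) / chrF_NLLB
    "rel_improvement": rng.normal(-0.2, 0.2, 201),
})

feature_cols = [c for c in df.columns if c != "rel_improvement"]
X, y = df[feature_cols], df["rel_improvement"]

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_cols))      # splits analogous to Figure 2

forest = RandomForestRegressor(random_state=0).fit(X, y)
importances = pd.Series(forest.feature_importances_, index=feature_cols)
print(importances.sort_values(ascending=False))           # cf. Table 7
```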
The most important feature by far was wiki_ct. (This feature correlates strongly with ChatGPT's _relative improvement_, \(\rho=0.68\).) family_Niger-Congo was much more important than any other family feature. No script feature had an importance exceeding \(0.01\). In general, features for resource level and tokenization were more important than family or script. ChatGPT has a blind spot not only for Niger-Congo languages, but for African languages in general. Figure 1 shows ChatGPT is least advantaged for the two exclusively African families, Niger-Congo and Nilo-Saharan; and the two exclusively African scripts, Tifinagh (Tfng) and Ge'ez (Ethi). ### Impact of script Prior research suggests that ChatGPT output quality is sensitive to language script [1]. Our own analysis in SS3.4 actually suggests that script is the least important language feature in predicting ChatGPT's MT effectiveness. However, differences in performance are clear when comparing scripts used for the same language. Table 8 shows one script typically outperforming the other, by an average of 14.3 chrF points for zero-shot. Five-shot contexts narrowed the gap slightly to 12.0. Although transliteration is a deterministic process for many languages, these performance gaps suggest that ChatGPT has not implicitly learned it as part of a translation task. We hypothesize that ChatGPT's observed sensitivity to script in earlier studies may be particular to the languages and tasks evaluated. ### LLMs often get the language wrong LLMs' performing worse than NLLB may be due in large part to their translating into the wrong language. Using FLORES-200's _dev_ data, we trained \begin{table} \begin{tabular}{l c} \hline **feature** & **importance** \\ \hline wiki\_ct & 0.514 \\ token\_ct & 0.157 \\ ascii\_percentage & 0.104 \\ family\_Niger–Congo & 0.054 \\ oscar\_ct & 0.040 \\ family\_Afroasiatic & 0.025 \\ family\_Indo-European & 0.025 \\ family\_Sino-Tibetan & 0.022 \\ family\_Creole & 0.012 \\ family\_Nilo-Saharan & 0.011 \\ \hline \end{tabular} \end{table} Table 7: Ten most important language features to predict ChatGPT’s effectiveness relative to NLLB \begin{table} \begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{**BLEU**} & \multicolumn{2}{c}{**chrF**} \\ **Lang.** & 0-shot & 5-shot & 0-shot & 5-shot \\ \hline ace\_Arab & 1.27 & 2.26 & 8.41 & 9.75 \\ **ace\_Latin** & **4.98** & 4.35 & **19.82** & 17.96 \\ \hline **arb\_Arab** & 37.60 & **37.85** & 53.79 & **53.81** \\ arb\_Latin & 5.33 & 8.38 & 22.79 & 26.92 \\ \hline **bi\_Arab** & 1.96 & 3.05 & 10.43 & 13.24 \\ **bjn\_Latin** & 10.96 & **12.29** & 35.92 & **37.98** \\ \hline **kas\_Arab** & **3.99** & 3.30 & **15.51** & 14.33 \\ kas\_Deva & 2.31 & 2.68 & 12.91 & 13.91 \\ \hline knc\_Arab & 0.51 & 1.06 & 5.26 & 4.67 \\ **knc\_Latin** & **2.61** & 0.91 & **13.38** & 8.11 \\ \hline min\_Arab & 1.56 & 3.49 & 10.06 & 14.88 \\ **min\_Latin** & 11.51 & **13.07** & 36.99 & **38.43** \\ \hline **taq\_Latin** & 0.82 & 0.28 & 8.18 & 6.24 \\ **taq\_Tng** & 0.62 & 1.37 & 5.23 & **8.31** \\ \hline **zho\_Hans** & 36.33 & **36.51** & 31.03 & **31.89** \\ zho\_Hant & 29.30 & 30.38 & 24.82 & 26.02 \\ \hline \end{tabular} \end{table} Table 8: ChatGPT performance on languages with multiple scripts. Each better scoring script is **bold**. a logistic regression language identifier for 100 epochs. Language identification accuracies for four of the models we evaluated are in Table 9. Zero-shot ChatGPT only translated on target 72% of the time. 
This expectedly improved with five-shot prompts, and GPT-4 performed even better, still just shy of NLLB. LLMs' tendency to translate off target is corroborated by Zhu et al. (2023). ### Cost comparison Our results suggest that GPT-4 is a better translator than ChatGPT. However in considering the needs of MT end users, it would be remiss not to consider the respective costs of the systems evaluated. GPT-4's high cost (roughly 2000% that of ChatGPT's) prohibited us from evaluating it on all FLORES-200 languages. In general, using few-shot prompts for LLMs is more costly than zero-shot prompts, since users are charged for both input and output tokens. And for this same reason, some languages are more costly than others in LLM MT. Previous work has found that Google Translate has associated costs comparable to those of five-shot ChatGPT (Neubig and He, 2023). NLLB is the least expensive system we evaluated. We estimated cost values for each MT system and language: the expense, in USD, of translating the full FLORES-200 _devtest_ English set into the language. We estimated costs of GPT models using the prompts employed in our experiments, the tiktoken tokenizer11 used by both models, and inference prices from OpenAI's website.12 Conveniently, Google Translate costs nothing for the first 500K input characters. But since frequent MT users may have already expended this allowance, we calculated costs from their rates beyond the first 500K.13 As the full NLLB-MOE model (54.5B parameters) is difficult to run on standard computing devices, Team et al. (2022) also provided a version with only 3.3B parameters that achieves similar performance. Since users commonly opt for the smaller model, and since the performance difference does not impact our estimates significantly, we estimated the costs to run the 3.3B-parameter NLLB model using a single GPU on Google Colab. Details of our estimation method are in Appendix B.1. Table 10 contains the average cost for each system across the languages we evaluated with it. Footnote 11: [https://github.com/openai/tiktoken](https://github.com/openai/tiktoken) Footnote 12: [https://openai.com/pricing](https://openai.com/pricing) Footnote 13: [https://cloud.google.com/translate/pricing](https://cloud.google.com/translate/pricing) Figure 3 displays chrF scores for the 11 languages on which we evaluated all four MT systems (top), and the same scores divided by the approximate cost of each model (bottom). Bars for GPT-4 drop significantly in the bottom chart because of its high cost. Note from the top chart that \begin{table} \begin{tabular}{l c} **model** & **cost** \\ \hline NLLB & \$0.09 \\ ChatGPT (0-shot) & \$0.35 \\ ChatGPT (5-shot) & \$1.32 \\ Google & \$2.66 \\ GPT-4 (5-shot) & \$25.93 \\ \end{tabular} \end{table} Table 10: Estimated cost in USD to translate FLORES-200 _devtest_ ENG\(\rightarrow\)X with each system, averaged across all languages we evaluated with each \begin{table} \begin{tabular}{l c} **model** & **lang. ID acc.** \\ \hline ChatGPT (0-shot) & 72\% \\ ChatGPT (5-shot) & 83\% \\ GPT-4 (5-shot) & 90\% \\ NLLB & 91\% \\ \end{tabular} \end{table} Table 9: Proportion of the time each model translated into the correct target language Figure 2: Decision tree predicting ChatGPT _relative improvement_ over NLLB chrF, from language features. Google Translate scores the best, but the bottom chart shows that NLLB has the best scores for its price. 
Zero-shot ChatGPT also tops five-shot in the bottom chart, suggesting that while few-shot prompts provide modest score improvements, they may not be worth the extra cost. See Appendix B for full visualizations with all 203 languages. ## 4 Related Work We are not the first researchers to explore LLM MT. However, most existing studies do not provide benchmarks for a large number of languages. Wang et al. (2023) studied GPT model discourse MT, but only for four languages. Gao et al. (2023) studied prompt engineering for GPT model MT, a helpful precursor to our work, but only for three languages. Moslem et al. (2023) probed the abilities of GPT models for adaptive and domain-appropriate MT and term extraction, only including six languages in five directions. Jiao et al. (2023) produced MT benchmarks for ChatGPT and GPT-4, but only for five languages, none of them LRLs.14 They corroborated our findings that GPT models lag behind traditional MT models, but that GPT-4 outperforms ChatGPT. Hendy et al. (2023) explored 18 language pairs in a similar study, including four LRLs, but they focused more on MT performance across text domains, in-context learning, and reasoning than on multilingual benchmarks. Footnote 14: In this section, we define LRLs as languages having fewer than 1M Wikipedia pages. In all the heretofore mentioned works combined, researchers explored only 18 languages, including five LRLs. This few-language approach does not address the needs of LLM users seeking to translate any languages other than the small few represented. In a work most comparable to our own, Zhu et al. (2023) attempted to address this issue. They provided benchmarks comparing LLMs and traditional MT models across 102 languages, including 68 LRLs. Their results corroborate our own conclusions that LLMs lag behind traditional MT models, especially for LRLs. However, their analysis focuses primarily on few-shot learning and prompt engineering, including some topics somewhat removed from end user needs (such as the viability of nonsensical prompts in few-shot settings). Our work differs from existing studies in our focus on end users. We include more languages than any existing work (**204** languages, including **168** LRLs), to address the needs of various LRL communities. Our analysis suggests which language features predict LLM effectiveness, to help end users make hypotheses even about languages not represented in our study. We evaluate monetary costs, since they are a concern for LLM users. ## 5 Conclusion We provide benchmarks for LLM ENG\(\rightarrow\)X MT performance across 203 languages, with comparisons to state-of-the-art commercial and open-source MT models. For many HRLs, LLMs like ChatGPT perform competitively with these traditional models. But for LRLs, traditional MT remains dominant, despite LLMs' increased parameter size. Our decision-tree analysis reveals language features that predict ChatGPT's translation effectiveness relative to NLLB, finding that ChatGPT is especially disadvantaged for LRLs and African languages, and that the number of Wikipedia pages a language has is a strong predictor of ChatGPT's effectiveness in it. We present evidence that few-shot learning offers generally marginal improvements for ENG\(\rightarrow\)X MT, which may not justify its additional cost. We provide MT users with scores and cost estimates for four LLM and traditional MT systems, to help them determine which to use for their languages. Future work in this vein may include more translation directions (e.g.
X\(\rightarrow\)ENG and non-English-centric), and human evaluation of LLM MT outputs to reveal trends along dimensions like fluency and accuracy. We open-source software and outputs of the models we evaluated on our repository. Figure 3: chrF scores for the 11 languages on which we evaluated all MT systems (top), followed by the same scores divided by the estimated cost of each system for each language (bottom). ### Limitations We acknowledge limitations of using ChatGPT models for research. Since they are closed-source models, there is much we do not know about their architectural and training details, which can impact our understanding of their capabilities and biases. For instance, OpenAI's implementation of mechanisms to prevent the generation of harmful or toxic content may inadvertently impact the quality of the model's output. This can be a concern when evaluating the reliability and accuracy of the results. OpenAI continuously updates and deprecates models behind the ChatGPT API, so our assessment may not hold for future versions. While FLORES-200 is large and diverse, it is likely not representative of the vast array of languages worldwide. Some low-resource sets within FLORES-200 may contain noisy or corrupted data, potentially affecting the validity of the automatic metrics we employ in our reporting of scores. Additionally, FLORES-200 sets were translated from English Wikipedia. We avoided any X\(\rightarrow\)ENG translation directions, since it is likely that GPT models were trained on English Wikipedia. However, the semantic proximity of the other language sets to the original English source could potentially provide an advantage to these models in generating them. We also acknowledge the absence of non-English-centric translation directions from this study; we leave this for future work. Lastly, the unavailability of semantic MT evaluation techniques like COMET (Rei et al., 2020) or BLEURT (Sellam et al., 2020) for LRLs hinders our ability to conduct comprehensive semantic evaluations and may leave some aspects of the translation quality unexplored. Human evaluation (which we leave for future work) may also reveal much in this area. These limitations surrounding model transparency, representative data, and evaluation should be taken into account when interpreting the findings of this work. Future studies may benefit from addressing these challenges to enhance the robustness and reliability of MT conclusions. ## Ethics Statement The new prominence of LLMs in language technologies has numerous ethical implications. This study makes it apparent that even powerful LLMs like ChatGPT have significant limitations, such as an inability to translate a large number of low-resource languages. It also suggests that although these LLMs are trained on large and diverse data sets, they still have implicit biases, such as a clear disadvantage in MT for African languages. We hope to stress the importance of acknowledging and publicizing the limits and biases of these LLMs. This is especially relevant because a majority of LLM users may not be familiar or experienced with artificial intelligence (AI) engineering practices, and the commercial entities providing LLMs often have a monetary incentive to deliberately downplay the models' limitations. This can lead to unethical exploitation of users, who may attempt to use LLMs in applications where their limitations and biases can cause harm. Part of our goal in this work is to bring these discussions to the forefront of AI research.
Ethical considerations like these should be a top concern for AI researchers, especially when many recent AI advancements are piloted by powerful commercial corporations. We hope also to acknowledge some of the ethical considerations involved in our own research. As we strive to develop improved open-source and accessible translation systems, it is essential to acknowledge that some language communities may have reservations about having their languages translated. Another crucial point is that utilizing the FLORES-200 test set in this research may inadvertently contribute to its incorporation into OpenAI's training data. OpenAI's current position is that API requests are not used for training (Schade, 2023), but if this position were altered or disregarded, it could compromise the reliability of this test set for future GPT iterations. (This is a consideration for many commercial LLMs, though we only used OpenAI's in the current work.) This scenario has a potential negative impact on the MT community, since many researchers depend on FLORES-200 and other MT benchmarks for large, diverse, high-quality data to conduct system comparisons. ## Acknowledgements We thank Simran Khanuja for her help in running our Google Translate baseline and her general support. We also thank Alex Cabrera for his help developing our Zeno browser. This material is based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. This work was also supported in part by the National Science Foundation under grant #2040926, a grant from the Singapore Defence Science and Technology Agency.
2309.16248
Spider4SPARQL: A Complex Benchmark for Evaluating Knowledge Graph Question Answering Systems
With the recent spike in the number and availability of Large Language Models (LLMs), it has become increasingly important to provide large and realistic benchmarks for evaluating Knowledge Graph Question Answering (KGQA) systems. So far the majority of benchmarks rely on pattern-based SPARQL query generation approaches. The subsequent natural language (NL) question generation is conducted through crowdsourcing or other automated methods, such as rule-based paraphrasing or NL question templates. Although some of these datasets are of considerable size, their pitfall lies in their pattern-based generation approaches, which do not always generalize well to the vague and linguistically diverse questions asked by humans in real-world contexts. In this paper, we introduce Spider4SPARQL - a new SPARQL benchmark dataset featuring 9,693 previously existing manually generated NL questions and 4,721 unique, novel, and complex SPARQL queries of varying complexity. In addition to the NL/SPARQL pairs, we also provide their corresponding 166 knowledge graphs and ontologies, which cover 138 different domains. Our complex benchmark enables novel ways of evaluating the strengths and weaknesses of modern KGQA systems. We evaluate the system with state-of-the-art KGQA systems as well as LLMs, which achieve only up to 45\% execution accuracy, demonstrating that Spider4SPARQL is a challenging benchmark for future research.
Catherine Kosten, Philippe Cudré-Mauroux, Kurt Stockinger
2023-09-28T08:41:08Z
http://arxiv.org/abs/2309.16248v2
# Spider4SPARQL: A Complex Benchmark for Evaluating Knowledge Graph Question Answering Systems ###### Abstract With the recent spike in the number and availability of Large Language Models (LLMs), it has become increasingly important to provide large and realistic benchmarks for evaluating Knowledge Graph Question Answering (KBQA) systems. So far the majority of benchmarks rely on pattern-based SPARQL query generation approaches. The subsequent natural language (NL) question generation is conducted through crowdsourcing or other automated methods, such as rule-based paraphrasing or NL question templates. Although some of these datasets are of considerable size, their pitfall lies in their pattern-based generation approaches, which do not always generalize well to the vague and linguistically diverse questions asked by humans in real-world contexts. In this paper, we introduce Spider4SPARQL - a new SPARQL benchmark dataset featuring 9,693 previously existing manually generated NL questions and 4,721 unique, novel, and complex SPARQL queries of varying complexity. In addition to the NL/SPARQL pairs, we also provide their corresponding 166 knowledge graphs and ontologies, which cover 138 different domains. Our complex benchmark enables novel ways of evaluating the strengths and weaknesses of modern KGQA systems. We evaluate the system with state-of-the-art KGQA systems as well as LLMs, which achieve only up to 45% execution accuracy, demonstrating that Spider4SPARQL is a challenging benchmark for future research. Benchmark for Question Answering over Knowledge Graphs, Language Models, Performance Evaluation ## I Introduction Building systems for querying databases or knowledge graphs in natural language has been an important research topic over the last few decades [1, 2, 3, 4]. Typical examples of such Text-to-SQL (also known as NL-to-SQL) or Text-to-SPARQL (also known as Knowledge Graph Question Answering (KGQA)) systems either use rule-based or machine learning-based approaches [5, 6, 7]. The recent success of large language models has even further accelerated the race of building such systems as well as the need for datasets specifically designed for tasks like natural language (NL) to query language translation. One of the first benchmarks designed for translating natural language to a query language was WikiSQL [8]. Soon after, benchmarks for other query languages emerged, such as LC-QuAD 1.0 [9] comprised of 5,000 NL/SPARQL pairs. Although these benchmarks were a step in the right direction, they lack queries that match the complexity and technical difficulty of queries from today's real-world knowledge graph applications. For example, LC-QuAD 1.0 queries only support single projections in the SELECT statement and COUNT aggregations, while WikiSQL only supports queries on a single table. LC-QuAD 2.0 [10] was the answer to the simplicity and small-scale of LC-QuAD 1.0. Although it is significantly larger than the original LC-QuAD dataset containing 30,000 NL/SPARQL pairs, it still lacks the necessary complexity that would allow this dataset to be used to train a real-world natural language interface for knowledge graphs. Moreover, an evaluation of DBNQA [11], which has 894,499 NL/SPARQL pairs, shows that despite its impressive size, it still lacks both NL question and SPARQL query complexity and has a comparatively small vocabulary size. 
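To illustrate the kind of query complexity at stake, the sketch below runs a multi-hop SPARQL query with grouping and an aggregation filter over a toy graph using rdflib. The vocabulary and data are invented for illustration and are not taken from Spider4SPARQL; the point is only that such constructs go beyond the single-projection, COUNT-only queries supported by earlier benchmarks.

```python
# Illustrative only: a toy RDF graph and a multi-hop, aggregating SPARQL query (rdflib).
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
for singer, concert, year in [
    ("Alice", "SummerFest", 2014), ("Alice", "WinterGala", 2015),
    ("Bob", "SummerFest", 2014), ("Carol", "SpringJam", 2016),
]:
    s, c = EX["singer_" + singer], EX["concert_" + concert]
    g.add((s, RDF.type, EX.Singer))
    g.add((s, EX.name, Literal(singer)))
    g.add((s, EX.performsIn, c))
    g.add((c, EX.year, Literal(year)))

query = """
PREFIX ex: <http://example.org/>
SELECT ?name (COUNT(?concert) AS ?n)
WHERE {
  ?singer a ex:Singer ;
          ex:name ?name ;
          ex:performsIn ?concert .
  ?concert ex:year ?year .
  FILTER(?year >= 2014)
}
GROUP BY ?name
HAVING (COUNT(?concert) > 1)
"""
for name, n in g.query(query):
    print(name, n)   # only singers performing in more than one concert
```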
Real-world applications need to be able to execute queries that use different types of aggregations, different set operations and, most importantly, multi-hop queries. As of today, there is a clear lack of KGQA benchmark datasets that generalize well to many domains and contain queries with the necessary complexity for training systems that can be used in real-world situations. Hence, we introduce Spider4SPARQL - a _new SPARQL benchmark dataset featuring 10,181 previously existing manually generated NL questions and 5,693 unique, novel, and complex SPARQL queries of varying complexity_. Our benchmark is based on Spider [12], which has become the de-facto benchmark for evaluating Text-to-SQL systems and has accumulated over 100 submissions to its leaderboard1. The Spider dataset comprises 10,181 NL/SQL pairs created from 200 distinct databases and covers a range of topics from flights to restaurants. Footnote 1: See Spider 1.0: Yale Semantic Parsing and Text-to-SQL Challenge [https://yale-lily.github.io/spider](https://yale-lily.github.io/spider) We have built our novel benchmark dataset Spider4SPARQL based on the publicly available databases (train set, test set) and 9,693 NL/SQL pairs from the Spider dataset that can be executed on our 166 automatically generated knowledge
2309.17156
Age Group Discrimination via Free Handwriting Indicators
The growing global elderly population is expected to increase the prevalence of frailty, posing significant challenges to healthcare systems. Frailty, a syndrome associated with ageing, is characterised by progressive health decline, increased vulnerability to stressors and increased risk of mortality. It represents a significant burden on public health and reduces the quality of life of those affected. The lack of a universally accepted method to assess frailty and a standardised definition highlights a critical research gap. Given this lack and the importance of early prevention, this study presents an innovative approach using an instrumented ink pen to ecologically assess handwriting for age group classification. Content-free handwriting data from 80 healthy participants in different age groups (20-40, 41-60, 61-70 and 70+) were analysed. Fourteen gesture- and tremor-related indicators were computed from the raw data and used in five classification tasks. These tasks included discriminating between adjacent and non-adjacent age groups using Catboost and Logistic Regression classifiers. Results indicate exceptional classifier performance, with accuracy ranging from 82.5% to 97.5%, precision from 81.8% to 100%, recall from 75% to 100% and ROC-AUC from 92.2% to 100%. Model interpretability, facilitated by SHAP analysis, revealed age-dependent sensitivity of temporal and tremor-related handwriting features. Importantly, this classification method offers potential for early detection of abnormal signs of ageing in uncontrolled settings such as remote home monitoring, thereby addressing the critical issue of frailty detection and contributing to improved care for older adults.
Eugenio Lomurno, Simone Toffoli, Davide Di Febbo, Matteo Matteucci, Francesca Lunardini, Simona Ferrante
2023-09-29T11:44:18Z
http://arxiv.org/abs/2309.17156v1
# Age Group Discrimination via Free Handwriting Indicators ###### Abstract The growing global elderly population is expected to increase the prevalence of frailty, posing significant challenges to healthcare systems. Frailty, a syndrome associated with ageing, is characterised by progressive health decline, increased vulnerability to stressors and increased risk of mortality. It represents a significant burden on public health and reduces the quality of life of those affected. The lack of a universally accepted method to assess frailty and a standardised definition highlights a critical research gap. Given this lack and the importance of early prevention, this study presents an innovative approach using an instrumented ink pen to ecologically assess handwriting for age group classification. Content-free handwriting data from 80 healthy participants in different age groups (20-40, 41-60, 61-70 and 70+) were analysed. Fourteen gesture- and tremor-related indicators were computed from the raw data and used in five classification tasks. These tasks included discriminating between adjacent and non-adjacent age groups using Catboost and Logistic Regression classifiers. Results indicate exceptional classifier performance, with accuracy ranging from 82.5% to 97.5%, precision from 81.8% to 100%, recall from 75% to 100% and ROC-AUC from 92.2% to 100%. Model interpretability, facilitated by SHAP analysis, revealed age-dependent sensitivity of temporal and tremor-related handwriting features. Importantly, this classification method offers potential for early detection of abnormal signs of ageing in uncontrolled settings such as remote home monitoring, thereby addressing the critical issue of frailty detection and contributing to improved care for older adults. Ageing Handwriting Ecological Home Monitoring Smart Ink Pen Machine Learning ## 1 Introduction The worldwide increase in the elderly population is expected to grow the prevalence of frailty in older people [1]. Frailty is a clinical syndrome with a higher prevalence in older adults, defining a progressive decline, together with increased vulnerability to stressful factors and an increased risk of mortality [2]. Frailty leads to hospitalisation and admission to long-term care, with a significant impact on public care systems [3, 4] and a poor quality of life for those directly affected [5, 6]. At present, there is no universally accepted way of identifying frailty and a standardised definition of the conditions remains an open point [7]. A consistent concept in the literature is that frailty is a complex and dynamic process. Complex because it involves both physical and cognitive systems, and dynamic because individuals tend to progress towards states of increasing severity of frailty [8]. Early detection of frailty is therefore key to slowing and preventing the worsening of the syndrome until it reaches the severe, irreversible stage of pre-death [9]. However, detection of early symptoms is hampered by the similarity to normal ageing and the diversity of the phenotype [10]. According to Fried and colleagues [2], frailty can be recognised when at least three of the following five signs are present: weakness, slow gait, low physical activity, fatigue and weight loss. If only one or two symptoms are present, a person could be considered pre-trail [2]. The frailty index [11, 12] is another proposed method to diagnose frailty. However, thresholds for frailty or pre-frailty are not universally agreed among practitioners. 
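As an aside, the operational definition of Fried and colleagues quoted above amounts to a simple counting rule over the five signs. The sketch below states that rule as code purely for illustration; sign detection and thresholds in practice require clinical assessment, and the category name "robust" for the zero-sign case follows common usage rather than the text above.

```python
# Illustration of the Fried phenotype counting rule described above (not a clinical tool).
FRIED_SIGNS = ("weakness", "slow_gait", "low_physical_activity", "fatigue", "weight_loss")

def fried_category(signs: dict) -> str:
    count = sum(bool(signs.get(s, False)) for s in FRIED_SIGNS)
    if count >= 3:
        return "frail"
    return "pre-frail" if count >= 1 else "robust"

print(fried_category({"weakness": True, "fatigue": True}))  # -> pre-frail
```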
Early intervention is the most effective solution for preventing the worst consequences of frailty. Therefore, much attention should be paid to vulnerable people aged 65 years and older who are at risk of becoming pre-frail [13]. However, the scarcity of medical resources often leads to a delayed diagnosis of frailty. To avoid this risk, an emerging solution consists of remote monitoring technologies used to continuously track the health status of community-dwelling seniors [14]. To detect early signs of decline, particular attention has been paid to the monitoring of daily activities [15]. Indeed, in older adults, any variation in the performance of daily tasks may conceal meaningful information about decline [16]. Among daily tasks, handwriting may be an optimal candidate for remote monitoring because it is a high-level skill that involves several cerebral and motor districts [17]. Therefore, it undergoes significant variations with physiological or pathological age-related decline [18] and with specific aspects of the frailty phenotype [19]. Indeed, the quantitative analysis of handwriting has been observed to be sensitive to several neuro-motor disorders, including Parkinson's disease [20], dystonia [21], Huntington's disease [22] and essential tremor [23]. The limitations of home-based handwriting monitoring lie in the devices available for data collection. Most studies in the literature have used commercially available tablets and digitising surfaces to study writing activities; however, the diffuse technological illiteracy of older adults makes their everyday use rather intrusive [18; 19; 24]. Furthermore, most previous research has analysed handwriting in controlled settings, i.e. using a standard writing protocol or selecting predefined writing sequences [23]. Instead, the home environment represents an uncontrolled context in which the results of standard tests cannot be assumed to be valid without supervision [25]. In a recent work of our research group [26], we presented an instrumented ink pen for the automatic acquisition and quantitative analysis of handwriting to allow ecological home monitoring of writing activity [27]. The tool can be used for everyday paper writing tasks and the data collection is fully automated. It does not require any further intervention by the user and therefore meets the requirements of ecological validity. We have previously investigated the reliability of handwriting and tremor indicators in healthy subjects of different ages. We then demonstrated the ability of handwriting and tremor indicators to discriminate age groups in semi-uncontrolled (i.e, the acquisitions were supervised by an operator, while the content was left free to the subjects) conditions using paper-and-pen free writing tasks. Correctly assigning a subject to his or her age group through free writing analysis can be a powerful tool for detecting abnormalities associated with age-related decline [28]. Especially for pre-fail individuals, a potential affinity of their writing parameters with those generally observed in a category of older subjects could be a sign of an amplified consequence of normal ageing and be interpreted as a prompt for further investigation. In this work, we studied the handwriting indicators ability of [26] in the classification of four age groups of healthy subjects, performing two types of unconstrained writing tasks. 
The paper is structured as follows: Section 2 presents the instrument, experimental protocols, data processing and classification algorithm used in this work. Section 3 presents the results and Section 4 discusses them. Finally, Section 5 expresses the novelty and possible research improvements of the work. ## 2 Method ### The smart ink pen We used the smart ink pen, shown in Fig. 1, developed in the European project MoveCare [15; 29], to collect handwriting data. The device consists of an ink pen equipped with an inertial measurement unit (IMU) to record movement and a miniaturised load cell to record the normal force applied on the tip [29]. The main advantage of this device is that it tests handwriting in a condition as close as possible to the normal situation, giving the typical feel of writing on paper. The pen is designed to automatically start collecting signals when it is moved to write, and the stored data can be accessed via Bluetooth connection. All electronic components and the data storage mechanism are hidden from the user to ensure transparent use. This feature is particularly important when interacting with older adults who may be reluctant to use new technologies [24]. The pen captures eight time series during handwriting: time stamps, 3-axis linear acceleration signals, 3-axis angular rates and the force applied on the pen tip. All signals are sampled at 50 Hz. ### Participants and protocol We recruited 80 healthy participants aged between 20 and 90 years. Any diagnosis of neurological, vascular or musculoskeletal disorders of the upper limbs was an exclusion criterion. Subjects over 65 years of age were included after verification of a Mini-Mental State Examination (MMSE) [30] score greater than 25. All subjects wrote a free text (_Text_, up to 10 lines) and a shopping list (_List_, up to 8 words). The tasks had no specific constraints to make them very similar to everyday writing. The Ethics Committee of the Politecnico di Milano approved the study protocol (n. 10/2018). ### Calculation of handwriting and tremor indicators A set of 14 parameters related to handwriting kinematics and dynamics and to tremor were extracted from the raw data collected during each of the two writing tasks. The calculation was implemented in Matlab(r) R2020b (Mathworks(r), Natick, MA USA)1. The following indicators were calculated: Footnote 1: See Lunardini et al. [26] for a detailed description of the indicators * _Temporal handwriting measures_. Starting from the writing force signal, we divided handwriting into strokes, defined as the writing segments where the pen tip was in contact with the paper surface (non-zero force tracts). We then considered the averaged stroke duration within a writing task as the mean on-sheet time (\(OnSheet\)). Similarly, we kept the averaged duration of the non-writing segments (zero-force tracts) as the mean in-air time (\(InAir\)). The in-air time intervals longer than 2 seconds were excluded as we treated them as pauses. The ratio of the latter to the former was defined as the air-sheet time ratio (\(AirSheetR\)). These temporal parameters have been shown to grow with users' age [31]. * _Pen Tilt_. The tilt angle of the pen was calculated using the sensor fusion algorithm described in [26]. We retained the mean (\(Tilt_{Mean}\)), coefficient of variation (\(Tilt_{CV}\)) and variance (\(Tilt_{Var}\)) of the tilt angle signal during writing (pauses excluded). We considered an angle of 90\({}^{\circ}\) for the pen in vertical position. 
Previous studies have also included pen tilt to characterise handwriting in different conditions [32; 33]. * _Writing Force_. Mean writing force (\(Force\)) was calculated by averaging the force signal over all strokes recorded during the writing task. The mean number of force changes (\(NCF\)), calculated as the average number of local maxima and minima within a stroke, was also retained as a measure of force variability. Force and force variability have been shown to change with age in handwriting [34]. * _Writing Smoothness_. We calculated the number of acceleration changes (\(NCA\)) as the average number of local minima and maxima in the 3D acceleration signal over all strokes. This quantity was observed to decrease with age [20]. To extract tremor, we divided the linear acceleration recorded during a writing task into 500 sample segments [35]. We computed the power spectrum for each segment using the Hilbert-Huang transform (HHT) [36], which has been preferred in the literature for the study of voluntary tremor over the standard Fourier transform [37]. The following tremor indicators were then calculated: * _Tremor frequency_. We obtained the mean modal frequency (\(F_{modal}\)) by averaging the frequencies of the highest peak in the power spectrum over all the segments [38]. * _Tremor Amplitude_. We calculated the root mean square (RMS) of the tremor signal and retained the mean \(RMS\) by averaging the root mean square of the power spectrum over all segments. * _Tremor entropy_. We considered the approximate entropy measure (\(ApEn\)), as in our previous study [26]. The entropy value (between 0 and 2) measures the unpredictability of the acceleration signals, which can be influenced by the higher or lower regularity of the tremor components. Entropy has been measured to decrease with age and pathology [39]. * _Nonlinear characteristics of tremor_. We applied the recurrence quantification analysis (RQA) to the acceleration signals. As in [35], we retained the recurrence ratio (\(RR\)) to measure the tendency of the tremor dynamics to express repeated patterns in time and the percentage of determinism (\(DET\)) to estimate the predictability of the gestures during handwriting. Figure 1: A digital rendering of the pen with its internal components and the IMU reference frame orientation. ### Classification tasks Following the protocol described in Section 2.2, we defined \(D_{T}\) the Text dataset and \(D_{L}\) the List dataset, both consisting of 80 samples and 15 attributes (14 indicators and the group label). We also created the \(D_{TL}\) dataset, consisting of 80 samples per 29 attributes (28 indicators of both tasks and the group label), by merging the two. Given the set of four ordered age intervals \(A=\{YY\in[20,40),EY\in[40,60),EF\in[60,70),EE\in[70,95)\}\) and the set of writing task data \(W=\{T,L,TL\}\), we define \(D_{w}^{a}\) with \(a\in A\), \(w\in W\) as the data set composed of the 20 samples and computed from the group \(a\) over the task \(w\). We investigated the ability of the handwriting features to discriminate between subjects of different age groups. The indicators measure high-level phenomena underlying the complex handwriting process, which is strongly influenced by ageing. Therefore, we used machine learning classification techniques to account for the multivariate and non-linear nature of the problem. Two different machine learning algorithms were chosen to compare different classification logics. 
We used Logistic Regression as a baseline performance measure, as it is one of the simplest and most commonly used linear classifiers. The second was a more recent boosting algorithm called Catboost [40]. This algorithm is known to achieve remarkable performance while avoiding data overfitting, even with small datasets [41]. Since the goal of our analysis is the detection of age-related anomalies in handwriting data, we focus on binary classification tasks to discriminate between age groups. In detail, we computed two pools of classification tasks: the first one is between adjacent groups by age, i.e. \(YY\)vs\(EY\), \(EY\)vs\(EF\) and \(EF\)vs\(EE\). The second compares the \(YY\) and \(EY\) groups with the \(EE\) group. The first task pool was designed to evaluate the performance of the classifiers in discriminating between groups within adjacent age ranges. Models that perform well on these tasks are expected to be more sensitive to minimal changes in handwriting due to the age decline process. The second pool of tasks was designed to assess the greater ability of the models to detect more relevant changes in more distant age groups. For each task we applied a data normalisation in the range [0,1]. The samples were then labelled 1 for the oldest group and 0 for the youngest. In this way, the machine learning algorithms learned to predict the probability of the sample belonging to the correct class. For each experiment, we evaluated the models according to a wide range of classification metrics: Accuracy, Precision, Recall, F1 and Area Under the ROC Curve (ROC-AUC). For monitoring purposes, Precision is the most important metric as it measures how robust the classifier is in determining the true positives. To obtain a less biased performance estimate, we evaluated both Logistic Regression and Catboost with default parameters by Leave-One-Out (LOO) cross-validation with early stopping set to 20 epochs. The full pipeline is shown in Fig. 2: after collecting the subjects' data and extracting the indicators, they are preprocessed and prepared to be learned by the proposed models. This learning phase involves the use of the LOO cross-validation technique mentioned above, which provides an estimate of the performance on unlearned data and the best number of learning iterations for each model evaluated. Finally, a model is trained on the entire dataset for a number of epochs equal to the average of those just found. This is done for the sole purpose of interpreting and ranking the most important features through model explanation techniques. Figure 2: The data processing, classification and model explanation workflow. ### Model explanation techniques We used a model explanation technique to overcome the limitations of the black-box nature of the Catboost classification algorithm and to obtain precise information about the model's decisions, i.e. the importance and role of the handwriting indicators in predicting the subject's age group. We used SHAP [42, 43], a model explanation library based on game theory that computes the Shapley values [44] of the features according to their impact on its predictions. In a binary classification task, SHAP first computes the baseline prediction value, i.e. the mean value predicted by the model given the observed samples, and then assigns a real number to weight each feature according to its average contribution in feature coalitions, i.e. its Shapley value. It is then possible to explore the role of each feature in the classification of individual samples, independently of whether the model has learned them during the training step. The sample prediction represents the sum of the feature contributions starting from the baseline. If a feature has a positive influence, it pushes the prediction in favour of class 1 and vice versa. This step was useful to understand, for each sample and age group, how much each indicator leads the model to predict class 0 or 1 (a minimal sketch of this evaluation and explanation pipeline is given below).
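The following is a minimal sketch of the leave-one-out evaluation and of the SHAP-based feature ranking described above, assuming a task prepared as in the previous sketch and the scikit-learn, catboost and shap packages. It is illustrative rather than the authors' implementation: default hyperparameters are used, and the early-stopping/averaged-iterations step of the pipeline is omitted for brevity.

```python
import numpy as np
import shap
from catboost import CatBoostClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate_task(X, y):
    """Leave-one-out evaluation of a Catboost classifier on one binary task."""
    y_true, y_pred, y_prob = [], [], []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = CatBoostClassifier(verbose=False)            # default parameters
        model.fit(X.iloc[train_idx], y.iloc[train_idx])
        y_true.append(int(y.iloc[test_idx].iloc[0]))
        y_pred.append(int(model.predict(X.iloc[test_idx])[0]))
        y_prob.append(float(model.predict_proba(X.iloc[test_idx])[0, 1]))
    return {"Accuracy": accuracy_score(y_true, y_pred),
            "Precision": precision_score(y_true, y_pred),
            "Recall": recall_score(y_true, y_pred),
            "F1": f1_score(y_true, y_pred),
            "ROC-AUC": roc_auc_score(y_true, y_prob)}

def explain_final_model(X, y):
    """Fit a model on the full task dataset and rank features by mean |SHAP value|."""
    model = CatBoostClassifier(verbose=False).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)   # one value per sample and feature
    mean_abs = np.abs(shap_values).mean(axis=0)
    return sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])
```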
## 3 Results All participants were divided into four groups defined by age: group YY between 20 and 39 (12 males, 8 females, mean age 27.4\(\pm\)2.4), group EY between 40 and 59 (12 males, 8 females, mean age 57.7\(\pm\)6.28), group EF between 60 and 69 (10 males, 10 females, mean age 65.45\(\pm\)2.2), and subjects older than 70 (6 males, 14 females, mean age 80.2\(\pm\)7) were included in group EE. Each group contained 20 subjects. Tab. 1 and Tab. 2 report the performance metrics for each classification task and dataset for the Logistic Regression and the Catboost classifier, respectively. As expected, the Catboost algorithm performed best. The detailed results of 3 classification tasks are shown in Fig. 3: \(EY\)vs\(EF\) in the first column of the figure, \(EF\)vs\(EE\) in the second column and \(EY\)vs\(EE\) in the third column. The first and second tasks involved the most interesting class for monitoring purposes (the EF, with individuals in the 60-69 age range) and its two closest classes in terms of age ranges (40-59 and 70+ respectively). The third task was instead designed to assess how much the age gap left by excluding the 60-69 range improved the binary classification between the younger and older groups of individuals. Row (a) shows the ROC-AUC performance obtained with Catboost trained and evaluated on the Text, List and Text+List datasets. In all cases, the results of the Text dataset achieved the highest ROC-AUC. For these reasons, the plots in rows (b), (c) and (d) show the results of the Text dataset only. Row (b) shows the confusion matrices, and rows (c) and (d) show the SHAP feature rankings of the final models, trained on the full task datasets and tuned via LOO cross-validation. While row (c) shows the absolute influence of the features, row (d) shows the same ranking and explains how the learned samples were predicted according to their feature values. Each point in the figures in row (d) represents the Shapley value of the feature for a particular sample. The blue-red colour scale indicates the value of the indicator (low to high); the negative Shapley values pushed the prediction towards class 0 (the youngest group), while the positive values favoured the classification of the subject in class 1 (the oldest group). The results of these three tasks are detailed in the following subsections. ### EY vs EF In the \(EY\) vs \(EF\) task, the ROC curves show that the Catboost models trained on the Text data perform best with an AUC of 98.0%. The corresponding confusion matrix for the Text data shows that there are no false negatives, which translates into a recall of 100%. In total there are 4 false positives. According to the Shapley values of the final model, this task was strongly influenced by the \(Tilt_{Mean}\), \(ApEn\) and \(Force\) indicators. In particular, high values of \(Tilt_{Mean}\), \(ApEn\) and \(Force\) were associated with younger subjects belonging to age class \(EY\).
\begin{table} \begin{tabular}{|c|c c c|c c c|c c c|c c c|} \hline & \multicolumn{3}{c|}{Accuracy} & \multicolumn{3}{c|}{Precision} & \multicolumn{3}{c|}{Recall} & \multicolumn{3}{c|}{F1} & \multicolumn{3}{c|}{ROC-AUC} \\ & Text & List & Text+List & Text & List & Text+List & Text & List & Text+List & Text & List & Text+List \\ YY vs EF & 57.5 & 65.0 & 65.0 & 58.8 & 68.8 & 50.0 & **55.0** & **55.0** & 54.1 & **61.1** & **61.1** & 56.5 & **68.2** & 68.0 \\ EY vs EF & **67.5** & 65.0 & 62.5 & 68.4 & 62.5 & 61.9 & 65.0 & **75.0** & 65.0 & 66.7 & **68.2** & 63.4 & 72.5 & 65.8 & **73.5** \\ EF vs EE & 72.5 & **77.5** & 72.5 & 69.6 & **78.9** & 71.4 & **80.0** & 75.0 & 75.0 & 74.4 & **76.9** & 73.2 & 80.5 & 81.2 & **83.8** \\ YY vs EE & 90.0 & **92.5** & **92.5** & 94.4 & 94.7 & **100** & 85.0 & **90.0** & 85.0 & 89.5 & **92.3** & 91.9 & 93.0 & 94.2 & **98.2** \\ EY vs EE & 85.0 & **87.5** & 85.0 & 88.9 & **89.5** & 85.0 & 80.0 & **85.0** & **85.0** & 84.2 & **87.2** & 85.0 & 88.8 & **95.8** & 95.0 \\ \hline \end{tabular} \end{table} Table 1: Logistic regression scores evaluated by LOO cross-validation for age group binary classification. For each classification task, the best scores are highlighted in bold. \begin{table} \begin{tabular}{|c|c c c|c c c|c c c|c c c|} \hline & \multicolumn{3}{c|}{Accuracy} & \multicolumn{3}{c|}{Precision} & \multicolumn{3}{c|}{Recall} & \multicolumn{3}{c|}{F1} & \multicolumn{3}{c|}{ROC-AUC} \\ & Text & List & Text+List & Text & List & Text+List & Text & List & Text+List & Text & List & Text+List & Text & List & Text+List \\ YY vs EF & 82.5 & **85.0** & 80.0 & **84.2** & 81.8 & 83.3 & 80.0 & **90.0** & 75.0 & 82.1 & **85.7** & 78.9 & 93.0 & **96.8** & 95.5 \\ EY vs EF & **90.0** & 82.5 & 85.0 & 83.3 & **84.2** & 81.8 & **100.0** & 80.0 & 90.0 & **90.0** & 82.1 & 85.7 & **98.0** & 92.2 & 96.2 \\ EF vs EE & **90.0** & **90.0** & 85.0 & **94.4** & **94.4** & 85.0 & **85.0** & **85.0** & 85.0 & **89.5** & **89.5** & 85.0 & **98.1** & 97.0 & 95.5 \\ YY vs EE & **97.5** & **90.0** & **97.5** & 95.2 & 94.4 & **100.0** & **100.0** & 85.0 & 95.0 & **97.6** & 89.5 & 97.4 & 99.5 & 98.8 & **100.** \\ YY vs EE & **92.5** & **92.5** & 87.5 & **94.7** & **94.7** & 89.5 & **90.0** & **90.0** & 85.0 & **92.3** & **92.3** & 87.2 & **99.5** & 98.4 & 98.2 \\ \hline \end{tabular} \end{table} Table 2: Catboost scores evaluated by LOO cross-validation for age group binary classification. For each classification task, the best scores are highlighted in bold. ### EF vs EE In the \(EE\) vs \(EF\) task, the ROC curves show that the Catboost models trained on the Text data achieve the best results with an AUC of 98.12%, meaning that it is also possible to discriminate between the two oldest classes. The corresponding confusion matrix for the Text data shows the same behaviour as in the previous task, with only 1 false positive and 3 false negative predictions, resulting in an accuracy of 90% and a precision of 94.4%. According to the Shapley values of the final model, the \(InAir\) indicator proved to be the core feature for this binary classification. The model strongly related high values of the \(InAir\) indicator to the oldest class \(EE\). ### EY vs EE Finally, in the \(EY\) vs \(EE\) task, the ROC curves show that the Catboost model trained on the text data performs best with an AUC of 99.5%, meaning that the predictions of the unseen data are almost perfectly accurate. 
The corresponding confusion matrix shows only 1 false positive and 2 false negatives, resulting in a balanced F1 score of 92.3%. According to the Shapley scores of the final model, this task was heavily influenced by \(InAir\) and \(ApEn\), and less so by \(F_{modal}\) and \(DET\). The high values of \(InAir\) and \(DET\) pushed predictions in favour of the oldest class \(EE\), while high values of \(ApEn\) and \(F_{modal}\) were more associated with the youngest subjects belonging to class \(EY\). ## 4 Discussion In this paper we have demonstrated the utility of quantitative analysis of handwriting in discriminating between healthy subjects of different age groups. We used a novel smart ink pen to collect handwriting data during tasks that mimicked everyday writing. In fact, participants were asked to write a short free text and a shopping list without any constraints on content or writing modality. This particular setting was chosen to maximise ecological validity, with the ultimate goal of using our findings to develop, in the future, home-based solutions dedicated to the early detection of decline in seniors. Therefore, particular attention was paid to the correct classification of the individual's age, as the association of their handwriting characteristics with an older age group could be interpreted as a clinically relevant anomaly [45]. One of the most powerful and advanced machine learning classification algorithms, i.e. Catboost, and a more traditional one, i.e. Logistic Regression, were used to carry out four different binary classification tasks, based on the set of handwriting indicators we computed from the raw free text and shopping list data. Our results showed that the Catboost algorithm outperformed Logistic Regression in almost all the tasks and datasets we considered. The improvements were noticeably larger in the classifications between groups with close age ranges (the first pool of tasks), where the differences in individuals' handwriting were expected to be smaller. This confirmed Catboost's superior sensitivity to changes in the handwriting indicators with respect to a baseline estimator. In the first pool of tasks, we considered the inter-group classifications with individuals in close age ranges (i.e. \(YY\)vs\(EY\), \(EY\)vs\(EF\) and \(EF\)vs\(EE\)). The aim of these tasks was to test the sensitivity of the models to small variations in handwriting performance that might be expected between healthy individuals with small age differences [46]. Very good to excellent performances (accuracy between 82.5% and 90%, precision from 81.8% to 94.4%, recall from 75% to 100% and ROC-AUC from 92.2% to 99.5%) were obtained in the classification of the first pool, considering all three datasets composed of the indicators calculated from the Text, List and combined Text-List data. These results showed the good ability of the models to detect slight handwriting variations in healthy subjects. Therefore, we could expect a high sensitivity to the changes in handwriting data due to an abnormal or pathological ageing decline [46]. In the second pool of tasks, we looked instead at classifications between more distant age groups. As expected, scores were generally higher on these tasks, as the differences in handwriting should have been more pronounced. In the classification between \(YY\)-\(EE\), i.e. the more distant classes, the best accuracy (97.5%) was achieved using only the Text indicators and the combined set of Text and List indicators.
Perfect precision and ROC-AUC (100%) were obtained using the Text+List data and perfect recall using the Text data alone. The last setting, \(EY\)vs\(EE\), showed high evaluation metrics with both Text and List data, all over 92.3%. For the classification between the younger groups, \(YY\) and \(EY\), i.e. 20-39 and 40-59 years of age, the list turned out to be the data on which the model performed better in terms of accuracy, recall, F1 and ROC-AUC. For the other adjacent groups of tasks, \(EY\)-\(EF\) and \(EF\)-\(EE\) (i.e. 40-59 vs. 60-69 years and 60-69 vs. 70+ years), the best performance was obtained with the text in almost all cases. This slight task dependency of the classification results might be related to some differences between the Text and List tests. Also, the writing dynamics might have been partly influenced by the type of task, since each item in the List was written on a new line and as a single word, with articles and conjunctions less frequent if not absent. In addition, writing a free text generally required a greater cognitive effort, which might explain the higher classification performances in \(EY\)-\(EF\) and \(EF\)-\(EE\). Nevertheless, the differences in the results were small and, given the relatively small number of samples, chance factors could not be excluded. Further research is planned. However, the results suggest that both data collection methods are still valid and contain intrinsic age-related information. Figure 3: Classification performance and model explanation plots for the EY vs EF, EF vs EE and EY vs EE tasks: the ROC-AUC metrics achieved by the Text, List and Text+List indicators are in row (a); the confusion matrices are in row (b); rows (c) and (d) report the absolute average Shapley values and the Shapley values of the features for each sample, respectively. Group EY includes subjects aged between 40 and 59 years, group EF includes subjects aged between 60 and 69 years, and group EE includes subjects aged over 70 years. We further analysed our experiments using the SHAP model explanation technique. It was useful to understand the impact of each handwriting indicator in the different tasks and to see their behaviour. In this paper we detailed the results and analysis of three classifications: the first two belonged to the first pool and included the class of individuals in the range 60-69 years (\(EF\)), which is the most critical for the purpose of early detection of decline; the third consisted of the classification between individuals in the age ranges 40-59 and 70+, and it aimed to show the more marked difference in handwriting characteristics between the two classes. In the first classification, the groups aged 40 to 59 and 60 to 69 (\(EY\) and \(EF\)) were considered. These two groups represent, respectively, a population of healthy subjects in which the effects of age decline should be absent, and a population in which a decline in physical or cognitive functionality may be at an early stage [13, 47]. As shown in Fig. 3 row (b), the handwriting indicators were able to correctly classify all individuals in the 60-69 age range (with a recall score of 100%), while four subjects in the 40-59 age range were misclassified (precision score equal to 84.2%). Our results confirm previous findings in the literature, where it has been observed that handwriting varies significantly among younger, middle-aged and older adults [48]. According to Walton [20], handwriting characteristics can be stable in healthy subjects for at least 5 years.
In fact, the four false positives were all over the age of 52. Two of them were over 55. Therefore, it was likely that their handwriting characteristics were closer to those of the older group. The model explanation (Fig. 3, rows (c) and (d)) showed that both handwriting dynamics and tremor features were among the most influential. The tilt of the pen (\(Tilt_{Mean}\)) was the most important feature in the \(EY\) vs \(EF\) classification. According to Marzinotto et al. [48], a higher pen tilt (on the right) is typical in middle-aged adults (\(EF\)). The approximate entropy (\(ApEn\)) also played a significant role, indicating a lower predictability of the handwriting time series of the younger class. This result is in line with the findings of our previous work [26], where, using similar experimental settings, significant differences between age groups were found. The trend of decreasing entropy with age was consistent with previous literature studying resting and postural tremor in younger and older adults [49, 38, 39]. Although its variation was not statistically significant between different age groups in Lunardini et al. [26], in the current study writing force (\(Force\)) emerged as the third most predictive feature in the \(EY\)-\(EF\) classification. The predictions were shifted towards the older group (\(EF\)) when the force values were lower. This was in line with the study by Engel Yeger et al. [50] in 2012, Caligiuri [51] in 2014 and Marzinotto et al. [48] in 2016. Among the following four features, sorted by decreasing importance, we found two frequency domain and two temporal parameters. The modal frequency had no significance in the statistical group differences in our previous work [26], but it affected the \(EY\) vs \(EF\) classification, linking higher values to the older class. The same behaviour was found for \(RMS\). Previous studies in the literature show that some neurological conditions, such as Parkinson's disease, could affect the modal frequency [52], while no apparent age effect on this parameter has been shown. The effect of the temporal indicators (\(InAir\) and \(AirSheetR\)) was considerable, confirming the tendency of the older class to have more prolonged non-writing moments, found in our previous work [26], and others [31, 53]. The second classification, between the groups aged 60 to 69 and 70+ (\(EF\) and \(EE\)), was the most relevant for investigating the suitability of our approach in the scenario of early detection of decline. In a normal ageing process, physical or cognitive decline is expected to be more substantial in the older group of people aged 70+ [13, 47]. Therefore, whenever an individual in the younger group (aged 60-69) is associated with the older group, it could be interpreted as a sign of abnormal decline. In this task, the handwriting indicators were used to discriminate individuals in \(EF\) from those in \(EE\) with high performance scores. Our results showed that the \(EF\)-\(EE\) classifier may be suitable for use in the monitoring of decline due to its high precision of 94.4%. Only 1 subject out of 20 was wrongly classified as older, while there were 3 false negatives (Fig. 3, row (b)). The model explanation (Fig. 3, rows (c) and (d)) showed that the in-air time parameter (\(InAir\)) was much more influential in the classification than all the others. As for the other tasks, higher \(InAir\) values were associated with individuals of the older class. Modal frequency was the second most important indicator.
The other indicators had quite similar effects, with frequency and non-linear features in higher positions. The tilt of the pen (\(Tilt_{Mean}\)) ended up among the least important indicators, although its variation (\(Tilt_{CV}\)) had a more significant impact. Nevertheless, all the indicators retained the same behaviour as in the previous tasks, thus confirming the consistency of the variations in handwriting measures with age. The third classification was between the \(EY\) and the \(EE\) groups, with individuals in the ranges 40-59 and 70+ years of age. The level of decline was expected to be very different among the healthy subjects' populations included in this task. As a consequence, the ability of the model to discriminate between these classes of individuals using the handwriting indicators was indeed greater. The Accuracy score was 92.5%, and the Precision was notably higher (94.7%), at the expense of a slightly lower Recall (90%) with respect to the previous task. In fact, only one subject in \(EY\) and two in \(EE\) were wrongly classified. The model explanation (Fig. 3, rows (c) and (d)) showed almost the same indicators among the most relevant; however, some meaningful differences appeared. The \(Tilt_{Mean}\) dropped from the first position in the classification \(EY\)-\(EF\) to the sixth position in \(EY\)-\(EE\) in the impact ranking while still keeping the same behaviour. In this task, \(InAir\) once again emerged as having the highest impact, with the same trend showing higher values in the older groups. With respect to the first classification task, the writing force dropped from the third position to the penultimate position, while determinism (\(DET\)) rose to fourth place, with the same impact as \(F_{modal}\). Determinism was likely to increase with age as the influence of the predictable tremor components became more persistent. All the handwriting indicators showed the same behaviour, in terms of value distribution, in the classifications of \(EY\) with \(EF\) and \(EE\). Significant changes were found in the impact level of the tremor features, which were more determinant in the discrimination between the two more distant groups, \(EY\) and \(EE\). The model explanation revealed that the impact of the handwriting indicators was task dependent, i.e., it changed according to the age ranges we considered in the classifications. These differences in the feature importance highlighted the complexity of the age-driven decline in handwriting, as the sensitivity of some indicators showed age-dependency. However, the behaviour of the indicators in the different age intervals was consistent with previous findings in the literature on populations of healthy subjects. This result reinforced the interpretability of the models, making it possible to understand their decisions as they relied on known handwriting-related quantities. ## 5 Conclusion In conclusion, this work showed that quantitative analysis of handwriting can be used to classify individuals belonging to different age groups. Age classifiers with high precision scores may offer a novel and non-invasive instrument for the domestic monitoring of handwriting in elderly and frail individuals. The interest of our findings is enhanced by the innovative data acquisition modality we used to collect the subjects' writing data, which allows the ecological assessment of daily-life handwriting.
Age differences can be used to detect anomalies in handwriting, which may indicate an abnormal decline in individuals at risk of developing pathological conditions. Moreover, more precise information about the nature of the conditions could be obtained by investigating pathology-related handwriting changes, such as those associated with Parkinson's disease and dementia, and by developing illness-specific classifiers. ## Acknowledgment This work was supported by the European projects MOVECARE (Grant Agreement: 732158) and ESSENCE (Grant Agreement: 101016112).
2309.08076
Banach spaces of $\mathcal I$-convergent sequences
We study the space $c_{0,\mathcal{I}}$ of all bounded sequences $(x_n)$ that $\mathcal{I}$-converge to $0$, endowed with the sup norm, where $\mathcal{I}$ is an ideal of subsets of $\mathbb{N}$. We show that two such spaces, $c_{0,\mathcal{I}}$ and $c_{0,\mathcal{J}}$, are isometric exactly when the ideals $\mathcal{I}$ and $\mathcal{J}$ are isomorphic. Additionally, we analyze the connection of the well-known Kat\v{e}tov pre-order $\leq_K$ on ideals with some properties of the space $c_{0,\mathcal{I}}$. For instance, we show that $\mathcal{I}\leq_K\mathcal{J}$ exactly when there is a (not necessarily onto) Banach lattice isometry from $c_{0,\mathcal{I}}$ to $c_{0,\mathcal{J}}$, satisfying some additional conditions. We present some lattice-theoretic properties of $c_{0,\mathcal{I}}$, particularly demonstrating that every closed ideal of $\ell_\infty$ is equal to $c_{0,\mathcal{I}}$ for some ideal $\mathcal{I}$ on $\mathbb{N}$. We also show that certain classical Banach spaces are isometric to $c_{0,\mathcal{I}}$ for some ideal $\mathcal{I}$, such as the spaces $\ell_\infty(c_0)$ and $c_0(\ell_\infty)$. Finally, we provide several examples of ideals for which $c_{0,\mathcal{I}}$ is not a Grothendieck space.
Michael A. Rincón-Villamizar, Carlos Uzcátegui Aylwin
2023-09-15T00:16:47Z
http://arxiv.org/abs/2309.08076v1
# Banach spaces of \(\mathcal{I}\)-convergent sequences ###### Abstract. We study the space \(c_{0,\mathcal{I}}\) of all bounded sequences \((x_{n})\) that \(\mathcal{I}\)-converge to \(0\), endowed with the sup norm, where \(\mathcal{I}\) is an ideal of subsets of \(\mathbb{N}\). We show that two such spaces, \(c_{0,\mathcal{I}}\) and \(c_{0,\mathcal{J}}\), are isometric exactly when the ideals \(\mathcal{I}\) and \(\mathcal{J}\) are isomorphic. Additionally, we analyze the connection of the well-known Katetov pre-order \(\leq_{K}\) on ideals with some properties of the space \(c_{0,\mathcal{I}}\). For instance, we show that \(\mathcal{I}\leq_{K}\mathcal{J}\) exactly when there is a (not necessarily onto) Banach lattice isometry from \(c_{0,\mathcal{I}}\) to \(c_{0,\mathcal{J}}\), satisfying some additional conditions. We present some lattice-theoretic properties of \(c_{0,\mathcal{I}}\), particularly demonstrating that every closed ideal of \(\ell_{\infty}\) is equal to \(c_{0,\mathcal{I}}\) for some ideal \(\mathcal{I}\) on \(\mathbb{N}\). We also show that certain classical Banach spaces are isometric to \(c_{0,\mathcal{I}}\) for some ideal \(\mathcal{I}\), such as the spaces \(\ell_{\infty}(c_{0})\) and \(c_{0}(\ell_{\infty})\). Finally, we provide several examples of ideals for which \(c_{0,\mathcal{I}}\) is not a Grothendieck space. ## 1. Introduction An ideal on \(\mathbb{N}\) is a collection \(\mathcal{I}\) of subsets of \(\mathbb{N}\) closed under finite unions and taking subsets of its elements. A sequence \((x_{n})\) in \(\mathbb{R}\) is said to be \(\mathcal{I}\)-convergent to \(x\in\mathbb{R}\), denoted as \(\mathcal{I}\)-\(\lim x_{n}=x\), if for each \(\varepsilon>0\), the set \(\{n\in\mathbb{N}:\,|x_{n}-x|\geq\varepsilon\}\) belongs to \(\mathcal{I}\). When \(\mathcal{I}\) is \(\mathsf{Fin}\), the ideal of finite subsets of \(\mathbb{N}\), we have the classical convergence in \(\mathbb{R}\). The \(\mathcal{I}\)-convergence was introduced in [17], although many authors had already studied this concept in particular cases and in different contexts (see, for instance, [2, 8, 9, 18]). The main goal of this paper is to study the following space: \[c_{0,\mathcal{I}}=\{(x_{n})\in\ell_{\infty}:\,\mathcal{I}-\lim x_{n}=0\}.\] This space has recently received some attention (see, for instance, [16, 19]). It is known that \(c_{0,\mathcal{I}}\) is a closed subspace (also, an ideal) of \(\ell_{\infty}\) and it is isometric to \(C_{0}(U_{\mathcal{I}})\) for some open set \(U_{\mathcal{I}}\) of \(\beta\mathbb{N}\) (see [16] or Proposition 3.1). Two extreme examples are worth keeping in mind. On the one hand, as we mentioned before, \(c_{0,\mathsf{Fin}}\) is exactly \(c_{0}\). On the other hand, if \(\mathcal{I}\) is the trivial ideal \(\mathcal{P}(\mathbb{N})\), we obtain the whole space \(\ell_{\infty}\). A natural question is to determine when two such spaces are isomorphic (isometric). As a consequence of the Banach-Stone theorem, \(c_{0,\mathcal{I}}\) and \(c_{0,\mathcal{J}}\) are isometric exactly when \(\mathcal{I}\) and \(\mathcal{J}\) are isomorphic as ideals. However, the same does not hold for isomorphism. Indeed, it is known that if \(\mathcal{I}\) is a maximal ideal, then \(c_{0,\mathcal{I}}\) is complemented in \(\ell_{\infty}\) and thus isomorphic to \(\ell_{\infty}\) (see [19]); but \(\ell_{\infty}\) is equal to \(c_{0,\mathcal{P}(\mathbb{N})}\) and a maximal ideal is not isomorphic to \(\mathcal{P}(\mathbb{N})\). 
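A concrete intermediate example between these two extremes is given by the standard ideal \(\mathcal{Z}\) of sets of asymptotic density zero, that is, \(A\in\mathcal{Z}\) iff \(\lim_{n\to\infty}|A\cap\{1,\ldots,n\}|/n=0\). The characteristic sequence of the perfect squares, \(x_{n}=1\) if \(n\) is a square and \(x_{n}=0\) otherwise, satisfies \(\{n\in\mathbb{N}:\,|x_{n}|\geq\varepsilon\}=\{1,4,9,16,\ldots\}\in\mathcal{Z}\) for every \(\varepsilon\in(0,1]\), so \(\mathcal{Z}\)-\(\lim x_{n}=0\) and \((x_{n})\in c_{0,\mathcal{Z}}\), while \((x_{n})\notin c_{0}\) since \(x_{n}=1\) for infinitely many \(n\). On the other hand, the constant sequence \((1,1,1,\ldots)\) does not belong to \(c_{0,\mathcal{Z}}\) because \(\mathbb{N}\notin\mathcal{Z}\). Hence \(c_{0}\subsetneq c_{0,\mathcal{Z}}\subsetneq\ell_{\infty}\), so spaces of the form \(c_{0,\mathcal{I}}\) genuinely interpolate between the two extreme examples above.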
Ideals on countable sets have been studied for a long time, and several ways of comparing them have been investigated (see the surveys [14, 24]). We are particularly interested in the Katetov pre-order (see, for instance, [15]). Given two ideals \(\mathcal{I}\) and \(\mathcal{J}\) on two countable sets \(X\) and \(Y\), respectively, we say that \(\mathcal{I}\) is Katetov below \(\mathcal{J}\), denoted as \(\mathcal{I}\leq_{K}\mathcal{J}\), if there is \(f:Y\to X\) such that \(f^{-1}(A)\in\mathcal{J}\) for all \(A\in\mathcal{I}\). Following the ideas behind a proof of Holsztynki's theorem [22], we show that \(\mathcal{I}\leq_{K}\mathcal{J}\) exactly when there is a (non-necessarily onto) Banach lattice isometry from \(c_{0,\mathcal{I}}\) to \(c_{0,\mathcal{J}}\) satisfying some additional conditions (see Theorem 3.7). We study some lattice-theoretic properties of the spaces \(c_{0,\mathcal{I}}\) and, in particular, we show that any closed ideal of \(\ell_{\infty}\) is equal to \(c_{0,\mathcal{I}}\) for some ideal \(\mathcal{I}\) on \(\mathbb{N}\). We found an interesting connection between the ideal theoretic notion of orthogonal ideals [12, 23] and the notion of \(c_{0}\)-disjoint subspaces of \(\ell_{\infty}\) (see section 4). We show how to represent some classical Banach spaces as a space of the form \(c_{0,\mathcal{I}}\) for some ideal \(\mathcal{I}\) on \(\mathbb{N}\). For example, we found ideals providing an isometric representation of the spaces \(\ell_{\infty}(c_{0})\) and \(c_{0}(\ell_{\infty})\). These two spaces have been recently shown to be non-isomorphic (see [5]). We present a different argument to show that they are not isometric. We also found an uncountable collection of pairwise non-isometric spaces of the form \(c_{0,\mathcal{I}}\). In [11, Problem 9], it was asked about ideals \(\mathcal{I}\) such that \(c_{0,\mathcal{I}}\) is Grothendieck; we present several non-Grothendieck spaces. ## 2. Preliminaries ### Banach spaces and Banach lattices We will use standard terminology and notation for Banach lattices and Banach space theory. For unexplained definitions and notations, we refer to [1, 21]. All Banach lattices analyzed here are assumed to be real. \(B_{X}\) stands for the closed unit ball of \(X\). The positive cone of a Banach lattice \(X\) is denoted by \(X^{+}\). A sublattice \(Y\) of a Banach lattice \(X\) is an _ideal_ if \(x\in Y\) whenever \(|x|\leq|y|\) for some \(y\in Y\). The closed ideal generated by a subset \(A\) of \(X\) is denoted by \(\langle A\rangle\). If \(X\) and \(Y\) are isomorphic Banach spaces, we write \(X\sim Y\). If \(X\) and \(Y\) are Banach lattices, a linear operator \(T\colon X\to Y\) is called a _Banach lattice isomorphism_ if \(T\) is a Banach isomorphism such that \(T(x\wedge y)=Tx\wedge Ty\) for all \(x,y\in X\). Furthermore, if \(T\) is an isometry, we say that \(T\) is a _Banach lattice isometry_. Finally, when \(T\) is not necessarily onto we will called it an _into_ Banach lattice isomorphism (isometry, respectively). Now, we introduce some notation. Let \((X_{j})_{j\in\mathbb{N}}\) be a family of Banach spaces. 1. \(\ell_{\infty}((X_{j})_{j\in\mathbb{N}})\) denotes the \(\ell_{\infty}\)-sum of \((X_{j})_{j\in\mathbb{N}}\), that is, the Banach space of all bounded sequences \((x_{j})\in\prod_{j}X_{j}\) endowed with the norm \(\|\cdot\|\) given by \(\|(x_{j})\|=\sup_{j}\|x_{j}\|\). 2. 
\(c_{0}((X_{j})_{j\in\mathbb{N}})\) is the \(c_{0}\)-sum of \((X_{j})_{j\in\mathbb{N}}\), that is, the closed subspace of \(\ell_{\infty}((X_{j})_{j\in\mathbb{N}})\) consisting of all null sequences, i.e, \(\lim_{j}\|x_{j}\|=0\). When \(X=X_{j}\) for all \(j\in\mathbb{N}\), these spaces are denoted by \(\ell_{\infty}(X)\) and \(c_{0}(X)\), respectively. Finally, for \(X=\mathbb{R}\), \(\ell_{\infty}(X)\) and \(c_{0}(X)\) correspond to \(\ell_{\infty}\) and \(c_{0}\), respectively. **Remark 2.1**.: It is easy to see that if each \(X_{j}\) is isometric (isomorphic) to \(X\), then \(\ell_{\infty}((X_{j})_{j\in\mathbb{N}})\) (\(c_{0}((X_{j})_{j\in\mathbb{N}})\)) is isometric (isomorphic, respectively) to \(\ell_{\infty}(X)\) (\(c_{0}(X)\), respectively). ### Ideals Recall that an ideal \(\mathcal{I}\) on a set \(X\) is a collection of subsets of \(X\) satisfying: 1. \(\emptyset\in\mathcal{I}\); 2. If \(A\subset B\) and \(B\in\mathcal{I}\), then \(A\in\mathcal{I}\); 3. If \(A,B\in\mathcal{I}\), then \(A\cup B\in\mathcal{I}\). We always assume that every finite subset of \(X\) belongs to \(\mathcal{I}\). The dual filter of an ideal \(\mathcal{I}\) is denoted by \(\mathcal{I}^{*}\) and consists of all sets of the form \(X\setminus A\) for some \(A\in\mathcal{I}\). If \(\mathcal{A}\) is a family of subsets of \(X\), \(\mathcal{I}(\mathcal{A})\) denotes the ideal generated by \(\mathcal{A}\) which consists of all subsets of finite unions of sets from \(\mathcal{A}\). If \(A\subset X\) and \(\mathcal{I}\) is an ideal on \(X\), we denote the restriction of \(\mathcal{I}\) to \(A\) by \(\mathcal{I}\upharpoonright A=\{A\cap B:B\in\mathcal{I}\}\) which is an ideal on \(A\). If \(\mathcal{I}\) and \(\mathcal{J}\) are ideals on \(X\), the ideal \(\mathcal{I}\sqcup\mathcal{J}\) is defined as \(\mathcal{I}\sqcup\mathcal{J}:=\{A\cup B:\;A\in\mathcal{I},B\in\mathcal{J}\}\). Two ideals \(\mathcal{I}\) and \(\mathcal{J}\) on \(\mathbb{N}\) are _isomorphic_ if there is a bijection (called isomorphism) \(f\colon\mathbb{N}\to\mathbb{N}\) such that \(A\in\mathcal{J}\) iff \(f^{-1}(A)\in\mathcal{I}\). We recall the _Katetov pre-order_ on ideals. If \(\mathcal{I}\) and \(\mathcal{J}\) are ideals on \(X\) and \(Y\), respectively, we write \(\mathcal{I}\leq_{K}\mathcal{J}\) if there is a function \(f\colon Y\to X\) (called a _Katetov reduction_) such that \(f^{-1}(A)\in\mathcal{J}\) for all \(A\in\mathcal{I}\). In most of the cases, the function \(f\) can be assumed to be onto. **Proposition 2.2**.: [3] _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(X\) and \(Y\), respectively. Suppose \(\mathcal{J}\) contains an infinite set and \(\mathcal{I}\leq_{K}\mathcal{J}\), then there is an onto map \(f\colon Y\to X\) such that \(f^{-1}(A)\in\mathcal{J}\) for all \(A\in\mathcal{I}\)._ In general, Katetov reductions are not bijective but this naturally suggests a variant of \(\leq_{K}\) which was studied in [3]. They defined \(\mathcal{I}\sqsubseteq\mathcal{J}\) if there is a bijective Katetov reduction from \(\mathcal{I}\) to \(\mathcal{J}\). In other words, \(\mathcal{I}\sqsubseteq\mathcal{J}\) if there is an ideal \(\mathcal{J}^{\prime}\subseteq\mathcal{J}\) such that \(\mathcal{I}\) is isomorphic to \(\mathcal{J}^{\prime}\). We will use this variant in the sequel. The _Fubini product_ is defined as follows. 
For two ideals \(\mathcal{I}\) and \(\mathcal{J}\) on \(\mathbb{N}\), \(\mathcal{I}\times\mathcal{J}\) is the ideal on \(\mathbb{N}\times\mathbb{N}\) given by: \[A\in\mathcal{I}\times\mathcal{J}\Longleftrightarrow\{n\in\mathbb{N}:\;\{m \in\mathbb{N}:\;(n,m)\in A\}\not\in\mathcal{J}\}\in\mathcal{I}.\] Another concept that will be used in the sequel is the following. Given a collection \(\mathcal{A}\) of subsets of \(\mathbb{N}\), the _orthogonal_ of \(\mathcal{A}\) (see [12, 23]) is the following family of sets \[\mathcal{A}^{\perp}=\{B\subseteq\mathbb{N}:(\forall A\in\mathcal{A})(A\cap B \text{ is finite})\}.\] An ideal \(\mathcal{I}\) is called _Frechet_ if \(\mathcal{I}=\mathcal{I}^{\perp\perp}\). We include some examples in section 5.1. Let \(\{K_{n}:\;n\in F\}\) be a partition of a countable set \(X\), where \(F\subseteq\mathbb{N}\). For \(n\in F\), let \(\mathcal{I}_{n}\) be an ideal on \(K_{n}\). The direct sum, denoted by \(\bigoplus_{n\in F}\mathcal{I}_{n}\), is defined by \[A\in\bigoplus_{n\in F}\mathcal{I}_{n}\Leftrightarrow(\forall n\in F)(A\cap K _{n}\in\mathcal{I}_{n}).\] In general, given a sequence of ideals \(\mathcal{I}_{n}\) over a countable set \(X_{n}\), we define \(\bigoplus_{n}\mathcal{I}_{n}\) by taking a partition \(\{K_{n}:\;n\in\mathbb{N}\}\) of \(\mathbb{N}\) and an isomorphic copy \(\mathcal{I}^{\prime}_{n}\) of \(\mathcal{I}_{n}\) on \(K_{n}\) and let \(\bigoplus_{n}\mathcal{I}_{n}\) be \(\bigoplus_{n}\mathcal{I}^{\prime}_{n}\). It should be clear that \(\bigoplus_{n}\mathcal{I}_{n}\) is, up to isomorphism, independent of the partition and the copies used. If all \(\mathcal{I}_{n}\) are equal (isomorphic) to \(\mathcal{I}\) we will write \(\mathcal{I}^{\omega}\) instead of \(\bigoplus_{n}\mathcal{I}_{n}\). ### The space \(c_{0,\mathcal{I}}(X)\) We begin by introducing the notion of \(\mathcal{I}\)-convergence for an ideal \(\mathcal{I}\) on \(\mathbb{N}\). This concept is due to Kostyrko, Salat and Wilczynski [17]. **Definition 2.3**.: Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\) and \(X\) be a Banach space. A sequence \((x_{n})\) in \(X\) is \(\mathcal{I}\)-convergent to \(x\in X\), and we write \(\mathcal{I}-\lim x_{n}=x\), if for each \(\varepsilon>0\), \(\{n\in\mathbb{N}:\;\|x_{n}-x\|\geq\varepsilon\}\in\mathcal{I}\). The main objetive of this paper is to study the following subspace of \(\ell_{\infty}\). \[c_{0,\mathcal{I}}(X)=\{(x_{n})\in\ell_{\infty}(X):\;\mathcal{I}-\lim x_{n}= \mathbf{0}\}.\] The upcoming result gives basic properties of \(\mathcal{I}\)-convergence (see also [18, Theorem 2.3]). If \(\mathbf{x}\in\ell_{\infty}(X)\) and \(\varepsilon>0\), we will use through out the whole paper the following notation: \[A(\varepsilon,\mathbf{x})=\{n\in\mathbb{N}:\;\|x_{n}\|\geq\varepsilon\}.\] **Proposition 2.4**.: _Let \(X\) be a Banach space and \(\mathcal{I}\) be an ideal on \(\mathbb{N}\)._ 1. _If_ \(c\in\mathbb{R}\) _and_ \(\mathcal{I}-\lim x_{n}=x\)_, then_ \(\mathcal{I}-\lim cx_{n}=cx\)_._ 2. _If_ \(\mathcal{I}-\lim x_{n}=x\) _and_ \(\mathcal{I}-\lim y_{n}=y\)_, then_ \(\mathcal{I}-\lim x_{n}+y_{n}=x+y\)_._ 3. _If_ \(\mathcal{I}-\lim x_{n}=\mathbf{0}\) _and_ \(\|y_{n}\|\leq\|x_{n}\|\) _for all_ \(n\in\mathbb{N}\)_, then_ \(\mathcal{I}-\lim y_{n}=\mathbf{0}\)_._ 4. \(c_{0,\mathcal{I}}(X)\) _is closed subspace of_ \(\ell_{\infty}(X)\)_._ 5. _If_ \(X\) _is a Banach lattice, then_ \(c_{0,\mathcal{I}}(X)\) _is a closed ideal of_ \(\ell_{\infty}(X)\)_._ Proof.: (1), (2) and (3) follow immediately from definition. 
(4): If \(\mathbf{y}\in\overline{c_{0,\mathcal{I}}(X)}^{\|\cdot\|_{\infty}}\) and \(\varepsilon>0\) is given, there is \(\mathbf{x}\in c_{0,\mathcal{I}}(X)\) such that \(\|\mathbf{y}-\mathbf{x}\|<\varepsilon/2\). Since \(A(\varepsilon,\mathbf{y})\subset A(\varepsilon/2,\mathbf{x})\), we conclude that \(\mathbf{y}\in c_{0,\mathcal{I}}(X)\). Finally for (5), if \(\mathbf{y}\in c_{0,\mathcal{I}}(X)\) and \(|\mathbf{x}|\leq|\mathbf{y}|\), then \(\|x_{n}\|\leq\|y_{n}\|\) for each \(n\in\mathbb{N}\). By (3), \(\mathbf{x}\in c_{0,\mathcal{I}}(X)\). When \(X=\mathbb{R}\) we write \(c_{0,\mathcal{I}}\) instead of \(c_{0,\mathcal{I}}(X)\). The next result provides another description of \(c_{0,\mathcal{I}}\). Recall that if \(\mathbf{y}=(y_{n})\) is a sequence, \(\operatorname{supp}(\mathbf{y})\) denotes the support of \(\mathbf{y}\), that is, the set \(\{n\in\mathbb{N}:\;y_{n}\neq 0\}\). **Proposition 2.5**.: _Let \(X\) be a Banach space and \(\mathcal{I}\) a proper ideal on \(\mathbb{N}\)._ 1. \(c_{0,\mathcal{I}}(X)=\overline{\{\mathbf{y}\in\ell_{\infty}(X):\,\operatorname{ supp}(\mathbf{y})\in\mathcal{I}\}}^{\|\cdot\|_{\infty}}\)_._ 2. \(c_{0,\mathcal{I}}=\overline{\operatorname{span}\{\chi_{A}:\,A\in\mathcal{I}\} }\)_._ 3. \(A\in\mathcal{I}\) _iff_ \(\chi_{A}\in c_{0,\mathcal{I}}\)_._ Proof.: (1) We first show that if \(\mathbf{y}=(y_{n})\in\ell_{\infty}(X)\) and \(\operatorname{supp}(\mathbf{y})\in\mathcal{I}\), then \(\mathbf{y}\in c_{0,\mathcal{I}}(X)\). In fact, let \(\varepsilon>0\) be given. Clearly \(A(\varepsilon,\mathbf{y})\subset\operatorname{supp}(\mathbf{y})\), and therefore \(A(\varepsilon,y)\in\mathcal{I}\). Since \(c_{0,\mathcal{I}}(X)\) is closed in \(\ell_{\infty}(X)\), then \(\supseteq\) holds in the equation above. Conversely, let \(\mathbf{x}=(x_{n})\in c_{0,\mathcal{I}}(X)\). Fix \(\varepsilon>0\) and let \(A=A(\varepsilon,\mathbf{x})\in\mathcal{I}\). Pick a finite sequence of real numbers \(\varepsilon=\lambda_{1}<\ldots<\lambda_{k}=\|(x_{n})\|_{\infty}+1\) such that \(\lambda_{i+1}-\lambda_{i}\leq\varepsilon\). Let \(A_{i}=\{n\in A:\,\lambda_{i}\leq\|x_{n}\|<\lambda_{i+1}\}\) and \(\mathbf{y}^{i}=(y^{i}_{n})\), where \[y^{i}_{n}=\begin{cases}\lambda_{i}x_{n}/\|x_{n}\|,&n\in A_{i};\\ 0,&\text{otherwise},\end{cases}\] for \(1\leq i<k\). Notice that \(A_{i}=\operatorname{supp}(\mathbf{y}^{i})\in\mathcal{I}\) for each \(1\leq i<k\) and \(\|\mathbf{x}-(\mathbf{y}^{1}+\ldots+\mathbf{y}^{k-1})\|_{\infty}\leq\varepsilon\). This shows \(\subseteq\). (2) Suppose that \(X=\mathbb{R}\). Now we let \[A^{+}_{i}=\{n\in A_{i}:\,x_{n}>0\},\quad A^{-}_{i}=\{n\in A_{i}:\,x_{n}<0\},\] \(\mathbf{y}^{i}_{1}=\lambda_{i}\chi_{A^{+}_{i}}\) and \(\mathbf{y}^{i}_{2}=-\lambda_{i}\chi_{A^{-}_{i}}\) for \(1\leq i<k\). We have \(\|\mathbf{x}-(\mathbf{y}^{1}_{1}+\cdots+\mathbf{y}^{k-1}_{1}+\mathbf{y}^{1}_ {2}+\cdots+\mathbf{y}^{k-1}_{2})\|\leq\varepsilon\). (3) The _only if part_ is obvious. Suppose \(\chi_{A}\in c_{0,\mathcal{I}}\), then \(A=A(1/2,\mathbf{x})\in\mathcal{I}\) and we are done. We recall that an ideal \(\mathcal{I}\) over \(\mathbb{N}\) can be identified (via characteristic functions) with a subset of the Cantor space \(\{0,1\}^{\mathbb{N}}\), so that we can talk about Borel, meager ideals, etc. It is worth keeping in mind the following result when dealing with meager ideals. **Theorem 2.6**.: _[_19_]_ _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). 
If \(\mathcal{I}\) is meager, then \(c_{0,\mathcal{I}}\) is not complemented in \(\ell_{\infty}\), and, thus, not isomorphic to \(\ell_{\infty}\)._ ## 3. Isomorphisms between \(c_{0,\mathcal{I}}\) spaces and the Katetov order on ideals The classical Banach-Stone theorem states that given two locally compact Hausdorff spaces \(K\) and \(S\), \(C_{0}(K)\) and \(C_{0}(S)\) are isometric iff \(K\) and \(S\) are homeomorphic [1, Theorem 4.1.5]. Holsztynski [13] extends this result by showing that if \(C_{0}(K)\) embeds isometrically in \(C_{0}(S)\), then \(K\) is a continuous image of a subspace of \(S\). Our goal in this section is to prove versions of these results in the setting of \(c_{0,\mathcal{I}}\) spaces. We first look at isometry and later to Banach lattice isometries. We first recall some classical notions. Recall that \(\beta\mathbb{N}\) is the Stone-Cech compactification of \(\mathbb{N}\) which is usually identified with the collection of all ultrafilters on \(\mathbb{N}\). For a set \(A\subset\mathbb{N}\), we let \(A^{*}=\{p\in\beta\mathbb{N}:\,A\in p\}\). The family \(\{A^{*}:\,A\subset\mathbb{N}\}\) defines a basis for the topology of \(\beta\mathbb{N}\). As usual, we identify each \(n\in\mathbb{N}\) with the principal ultrafilter \(\{A\subseteq\mathbb{N}:n\in A\}\). Every principal ultrafilter is an isolated point of \(\beta\mathbb{N}\). We set \(U_{\mathcal{I}}=\{p\in\beta\mathbb{N}:\,\mathcal{I}\cap p\neq\emptyset\}\). Since \(U_{\mathcal{I}}=\bigcup\{A^{*}:\,A\in\mathcal{I}\}\), \(U_{\mathcal{I}}\) is open in \(\beta\mathbb{N}\). Under the previous identification, \(\mathbb{N}\subseteq U_{\mathcal{I}}\) for every ideal \(\mathcal{I}\). Given a bounded sequence \((x_{n})\) of reals numbers and an ultrafilter \(p\) on \(\mathbb{N}\), we denote by \(p-\lim x_{n}\) the only \(x\in\mathbb{R}\) such that \(\{n\in\mathbb{N}:|x_{n}-x|<\varepsilon\}\in p\) for all \(\varepsilon>0\). Notice that if \(p\) is the principal ultrafilter \(m\), then \(p-\lim_{n}x_{n}=x_{m}\). It is well known that \(c_{0,\mathcal{I}}\) and \(C_{0}(U_{\mathcal{I}})\) are isometric (see for instance [11, p. 255] and [16, p. 12]), nevertheless we include here a proof written in terms of \(p\)-limits which will be needed in the sequel. **Proposition 3.1**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). The map \(\Phi_{\mathcal{I}}\colon c_{0,\mathcal{I}}\to C_{0}(U_{\mathcal{I}})\) given by_ \[\Phi_{\mathcal{I}}(\mathbf{x})(p)=p-\lim x_{n},\] _where \(\mathbf{x}=(x_{n})\) and \(p\in U_{\mathcal{I}}\), is a Banach lattice isometry. Moreover, \(A\in\mathcal{I}\) iff \(\chi_{A^{*}}\in C_{0}(U_{\mathcal{I}})\) and \(\Phi_{\mathcal{I}}(\chi_{A})=\chi_{A^{*}}\) for each \(A\in\mathcal{I}\)._ Proof.: Let \(\mathbf{x}=(x_{n})\in c_{0,\mathcal{I}}\) be given. We prove first that \(\Phi_{\mathcal{I}}(\mathbf{x})\in C_{0}(U_{\mathcal{I}})\). Fix \(p\in U_{\mathcal{I}}\) and let \(r>0\). To show that \(\Phi_{\mathcal{I}}(\mathbf{x})\) is continuous at \(p\), suppose that \(|\Phi_{\mathcal{I}}(\mathbf{x})(p)-a|=|(p-\lim x_{n})-a|<r\). Let \(\delta>0\) be such that \(|(p-\lim x_{n})-a|<r-\delta\) and \(A=\{n\in\mathbb{N}:\;|x_{n}-a|<r-\delta\}\). Note that \(p\in A^{*}\) and for \(q\in A^{*}\) we have \(|(q-\lim x_{n})-a|\leq r-\delta<r\). Hence, \(p\in A^{*}\subset\Phi_{\mathcal{I}}^{-1}(\mathbf{x})((a-r,a+r))\). Thus \(\Phi_{\mathcal{I}}(\mathbf{x})\) is continuous at \(p\). 
To see that \(\Phi_{\mathcal{I}}(\mathbf{x})\) vanishes at infinity, let \(\varepsilon>0\) and consider \(A_{\varepsilon}=\{n\in\mathbb{N}:\;|x_{n}|>\varepsilon/2\}\). If \(p\in U_{\mathcal{I}}\) satisfies \(|\Phi_{\mathcal{I}}(\mathbf{x})(p)|=|p-\lim x_{n}|\geq\varepsilon\), then \(A_{\varepsilon}\in p\), that is \(p\in A_{\varepsilon}^{*}\). Hence \(\{p\in U_{\mathcal{I}}:\;|\Phi_{\mathcal{I}}(\mathbf{x})(p)|\geq\varepsilon\} \subset A_{\varepsilon}^{*}\). Thus \(\{p\in U_{\mathcal{I}}:\;|\Phi_{\mathcal{I}}(\mathbf{x})(p)|\geq\varepsilon\}\) is compact for each \(\varepsilon>0\). Clearly \(\Phi_{\mathcal{I}}\) is linear. Now we prove that \(\Phi_{\mathcal{I}}\) is an isometry. Let \(\mathbf{x}\in c_{0,\mathcal{I}}\) be given and \(a=\|\mathbf{x}\|_{\infty}\). Note that \(|\Phi_{\mathcal{I}}(\mathbf{x})(p)|=|p-\lim x_{n}|\leq\|\mathbf{x}\|_{\infty}\). On the other hand, for \(0<\delta<a\), we have that \(A_{\delta}=\{n\in\mathbb{N}:\;|x_{n}|>a-\delta\}\in\mathcal{I}\). Take \(p\in A_{\delta}^{*}\). Whence, \(p\in U_{\mathcal{I}}\) and \(\Phi_{\mathcal{I}}(\mathbf{x})(p)=p-\lim x_{n}>a-\delta/2\). Since \(\delta>0\) was arbitrary, \(\|\Phi_{\mathcal{I}}(\mathbf{x})\|\geq a\). To see that \(\Phi_{\mathcal{I}}\) is onto, fix \(f\in C_{0}(U_{\mathcal{I}})\). Consider the sequence \(\mathbf{x}=(x_{n})\) given by \(x_{n}=f(n)\) for \(n\in\mathbb{N}\). For every \(\varepsilon>0\) we have \[A(\varepsilon,\mathbf{x})\subset\{p\in U_{\mathcal{I}}:\;|f(p)|\geq\varepsilon \}\subset U_{\mathcal{I}}=\bigcup\{A^{*}:\;A\in\mathcal{I}\}.\] By compactness, there are \(A_{1},\ldots,A_{n}\in\mathcal{I}\) such that \(A(\varepsilon,\mathbf{x})\subset\bigcup_{j=1}^{n}A_{j}\). Whence \(A(\varepsilon,\mathbf{x})\in\mathcal{I}\) and thus \(\mathbf{x}\in c_{0,\mathcal{I}}\). Since \(f\) is continuous, we have \[\Phi_{\mathcal{I}}(\mathbf{x})(p)=p-\lim x_{n}=p-\lim f(n)=f(p),\quad\text{for all }p\in U_{\mathcal{I}}.\] Hence, \(\Phi_{\mathcal{I}}(\mathbf{x})=f\). The identity \(\Phi_{\mathcal{I}}(\chi_{A})=\chi_{A^{*}}\) for each \(A\in\mathcal{I}\) follows from the fact that \(\chi_{A^{*}}(p)=p-\lim\chi_{A}(n)\) for every \(p\in\beta\mathbb{N}\) and all \(A\subset\mathbb{N}\). Finally, it is easy to see that \(\Phi_{\mathcal{I}}\) is a Banach lattice isomorphism. The proof of the following result is straightforward. **Proposition 3.2**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\) and \(h:\mathbb{N}\to\mathbb{N}\) be an isomorphism between \(\mathcal{I}\) and \(\mathcal{J}\). Let \(T_{h}\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) be given by \(T_{h}(\mathbf{x})=\mathbf{x}\circ h\). Then \(T_{h}\) is a Banach lattice isometry._ **Theorem 3.3**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\). Then \(c_{0,\mathcal{I}}\) and \(c_{0,\mathcal{J}}\) are isometric if, and only if, \(\mathcal{I}\) and \(\mathcal{J}\) are isomorphic._ Proof.: One direction is Proposition 3.2. For the other, let \(T\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) be an isometric isomorphism. Then \(\Phi_{\mathcal{J}}\circ T\circ\Phi_{\mathcal{I}}^{-1}\colon C_{0}(U_{\mathcal{I} })\to C_{0}(U_{\mathcal{J}})\) is an isometry. By the classical Banach-Stone theorem, there are a homeomorphism \(f\colon U_{\mathcal{J}}\to U_{\mathcal{I}}\) and a continuous map \(\sigma\colon U_{\mathcal{J}}\to\{-1,1\}\) such that \(\Phi_{\mathcal{J}}\circ T\circ\Phi_{\mathcal{I}}^{-1}(F)=\sigma F\circ f\) for all \(F\in C_{0}(U_{\mathcal{I}})\). 
Thus \((\Phi_{\mathcal{J}}\circ T)(\mathbf{x})=\sigma\Phi_{\mathcal{I}}(\mathbf{x}) \circ f\) for all \(\mathbf{x}\in c_{0,\mathcal{I}}\). It follows from Proposition 3.1 that if \(A\in\mathcal{I}\) is given, then \[(\Phi_{\mathcal{J}}\circ T)(\chi_{A})(p)=\sigma(p)\Phi_{\mathcal{I}}(\chi_{A})(f( p))=\sigma(p)\chi_{A^{*}}(f(p))=\sigma(p)\chi_{f^{-1}(A^{*})}(p),\quad\text{for all }p\in U_{\mathcal{J}}. \tag{1}\] On the other hand, if \(p\in U_{\mathcal{J}}\), we have \[\chi_{f^{-1}(A^{*})}(p)=p-\lim\chi_{A^{*}}(f(n))=p-\lim\chi_{A}(f(n))=p-\lim \chi_{f^{-1}(A)}(n)=\chi_{(f^{-1}(A))^{*}}(p). \tag{2}\] By combining (1) and (2) we obtain that \((\Phi_{\mathcal{J}}\circ T)(\chi_{A})=\sigma\chi_{(f^{-1}(A))^{*}}\) for all \(A\in\mathcal{I}\). Hence, \[T\chi_{A}=s\chi_{f^{-1}(A)}\quad\text{for all }A\in\mathcal{I},\] where \(s=\sigma\upharpoonright\mathbb{N}\). So \(h=f\upharpoonright\mathbb{N}\) is a bijection from \(\mathbb{N}\) to \(\mathbb{N}\) such that \(A\in\mathcal{I}\) iff \(h^{-1}(A)\in\mathcal{J}\). Thus \(\mathcal{I}\) and \(\mathcal{J}\) are isomorphic. **Theorem 3.4**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\). Then, \(\mathcal{I}\sqsubseteq\mathcal{J}\) if, and only if, there is an into Banach isometry \(T\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) such that \(T[c_{0,\mathcal{I}}]\) is an ideal._ Proof.: Suppose \(\mathcal{I}\sqsubseteq\mathcal{J}\) and let \(h:\mathbb{N}\to\mathbb{N}\) be a bijective Katetov reduction. Let \(T_{h}\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) given by \(T_{h}(\mathbf{x})=\mathbf{x}\circ h\). By Proposition 3.2, \(T_{h}\) is the required into Banach isometry (which is moreover a lattice isometry). Conversely, suppose such \(T\) exists. By Proposition 4.1, there is an ideal \(\mathcal{J}^{\prime}\) such that \(T[c_{0,\mathcal{I}}]=c_{0,\mathcal{J}^{\prime}}\). By Theorem 3.3, \(\mathcal{I}\) is isomorphic to \(\mathcal{J}^{\prime}\). Clearly \(\mathcal{J}^{\prime}\subseteq\mathcal{J}\), thus \(\mathcal{I}\sqsubseteq\mathcal{J}\). Next we will look at a condition stronger than isometries, namely, we work with Banach lattice isometries. This will be related to the Katetov pre-order on ideals. Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\) such that \(\mathcal{I}\leq_{K}\mathcal{J}\), that is, there is \(h\colon\mathbb{N}\to\mathbb{N}\) such that \(h^{-1}(A)\in\mathcal{J}\) whenever \(A\in\mathcal{I}\). We can always assume \(h\) to be onto (see Proposition 2.2). Recall the map \(T_{h}\) as in Proposition 3.2. **Proposition 3.5**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\) with \(\mathcal{I}\leq_{K}\mathcal{J}\), \(h:\mathbb{N}\to\mathbb{N}\) be an onto Katetov reduction from \(\mathcal{I}\) to \(\mathcal{J}\) and \(T_{h}\) as above. Then,_ 1. \(T_{h}\) _is an into Banach lattice isometry. Moreover, it satisfies the following:_ 1. _For each_ \(n\in\mathbb{N}\)_, there is_ \(A\in\mathcal{I}\) _satisfying_ \(T\chi_{A}(n)=1\)_._ 2. _For all_ \(\emptyset\neq\mathcal{F}\subseteq\mathcal{I}\) _such that_ \(\bigcap_{A\in\mathcal{F}}A=\emptyset\)_, we have_ \(\bigwedge_{A\in\mathcal{F}}T(\chi_{A})=0\)_._ 2. \(h\colon\mathbb{N}\to\mathbb{N}\) _is bijective iff_ \(T_{h}[c_{0,\mathcal{I}}]\) _is an ideal of_ \(c_{0,\mathcal{J}}\)_._ Proof.: (1) is straightforward. (2) Suppose \(h\) is bijective and let \(\mathbf{x}=(x_{n})\in c_{0,\mathcal{I}}\) and \(\mathbf{y}=(y_{n})\in c_{0,\mathcal{J}}\) be such that \(|\mathbf{y}|\leq|T_{h}(\mathbf{x})|=T_{h}(|\mathbf{x}|)\). 
For each \(n\in\mathbb{N}\), let \(z_{n}=y_{h^{-1}(n)}\) and \(\mathbf{z}=(z_{n})\). Then \(|z_{n}|\leq|x_{n}|\) for all \(n\in\mathbb{N}\), thus \(\mathbf{z}\in c_{0,\mathcal{I}}\) and clearly \(T_{h}(\mathbf{z})=\mathbf{y}\). Conversely, suppose that \(T_{h}[c_{0,\mathcal{I}}]\) is an ideal and that there are \(n\neq m\) such that \(h(n)=h(m)=k\). Then \(T_{h}(\chi_{\{k\}})=\chi_{h^{-1}\{k\}}\geq\chi_{\{n\}}\). Thus, there is \(\mathbf{x}\in c_{0,\mathcal{I}}\) such that \(T_{h}(\mathbf{x})=\chi_{\{n\}}\). In particular \(0=T_{h}(\mathbf{x})(m)=x_{h(m)}=x_{h(n)}=T_{h}(\mathbf{x})(n)=1\), a contradiction. Now we proceed to prove the converse of Proposition 3.5. **Theorem 3.6**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\). Let \(T\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) be an into Banach lattice isometry such that \(T[c_{0,\mathcal{I}}]\) is an ideal containing \(c_{0}\). Then \(T=T_{h}\) for some bijection \(h:\mathbb{N}\to\mathbb{N}\)._ Proof.: First, we claim that for every \(A\in\mathcal{I}\) there is \(B\in\mathcal{J}\) such that \(T(\chi_{A})=\chi_{B}\). In fact, we show that for all \(m\in\mathbb{N}\) there is \(k\in\mathbb{N}\) such that \(T(\chi_{\{m\}})=\chi_{\{k\}}\). Let \(k\in\mathbb{N}\) and \(a>0\) be such that \(a\chi_{\{k\}}\leq T(\chi_{\{m\}})\). Since \(T[c_{0,\mathcal{I}}]\) is an ideal, there is \(\mathbf{y}\in c_{0,\mathcal{I}}\) such that \(T(\mathbf{y})=a\chi_{\{k\}}\). Clearly \(0<\mathbf{y}\leq\chi_{\{m\}}\) and \(\|\mathbf{y}\|=a\). Therefore \(\mathbf{y}=a\chi_{\{m\}}\). Thus \(T(\chi_{\{m\}})=\chi_{\{k\}}\). Now let \(A\in\mathcal{I}\) and put \(B=\{n\in\mathbb{N}:T(\chi_{A})(n)=1\}\). We claim that \(\chi_{B}=T(\chi_{A})\). Clearly \(\chi_{B}\leq T(\chi_{A})\). Since \(T[c_{0,\mathcal{I}}]\) is an ideal, there is \(\mathbf{x}\in c_{0,\mathcal{I}}\) such that \(T(\mathbf{x})=\chi_{B}\). For each \(k\in A\), let \(m_{k}\in\mathbb{N}\) be such that \(T(\chi_{\{k\}})=\chi_{\{m_{k}\}}\). Notice that \(m_{k}\in B\) since \(T(\chi_{\{k\}})\leq T(\chi_{A})\) for all \(k\in A\). Thus \(T(\chi_{\{k\}})=\chi_{\{m_{k}\}}\leq\chi_{B}=T(\mathbf{x})\). So, \(\chi_{\{k\}}\leq\mathbf{x}\) for all \(k\in A\). Therefore \(\chi_{A}\leq\mathbf{x}\) and hence \(T(\chi_{A})\leq T(\mathbf{x})=\chi_{B}\) and we are done. Next we show show that for all \(n\in\mathbb{N}\), there is \(m\in\mathbb{N}\) such that \(T(\chi_{\{m\}})=\chi_{\{n\}}\). In fact, let \(n\in\mathbb{N}\). Since \(\chi_{\{n\}}\in c_{0}\), there is \(\mathbf{x}\in c_{0,\mathcal{I}}\) with \(\mathbf{x}>\mathbf{0}\) and \(T(\mathbf{x})=\chi_{\{n\}}\). Let \(m\in\mathbb{N}\) be such that \(x_{m}>0\). Since \(x_{m}\chi_{\{m\}}\leq\mathbf{x}\), we have \(x_{m}T(\chi_{\{m\}})\leq\chi_{\{n\}}\). So, \(m=n\) and \(\mathbf{x}=\chi_{\{m\}}\). From the first claim, there is \(f\colon\mathbb{N}\to\mathbb{N}\) such that \(T(\chi_{\{m\}})=\chi_{\{f(m)\}}\). Since \(T\) is an isometry, \(f\) is injective. Since \(c_{0}\subseteq T[c_{0,\mathcal{I}}]\), by (2) \(f\) is onto. Let \(h=f^{-1}\). From the proof of (1) it follows that \(T(\chi_{A})=\chi_{f(A)}\) for every \(A\in\mathcal{I}\). This shows that \(T=T_{h}\). **Theorem 3.7**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\) and \(T\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) be an into Banach lattice isometry. Suppose \(T\) has the following properties:_ 1. _for each_ \(n\in\mathbb{N}\)_, there is_ \(A\in\mathcal{I}\) _satisfying_ \(T\chi_{A}(n)=1\)_._ 2. 
_For all_ \(\emptyset\neq\mathcal{F}\subseteq\mathcal{I}\) _such that_ \(\bigcap_{A\in\mathcal{F}}A=\emptyset\)_, we have_ \(\bigwedge_{A\in\mathcal{F}}T(\chi_{A})=0\)_._ _Then \(\mathcal{I}\leq_{K}\mathcal{J}\)._ Proof.: Let \(T\colon c_{0,\mathcal{I}}\to c_{0,\mathcal{J}}\) be an into Banach lattice isometry and define \(\widetilde{T}\colon C_{0}(U_{\mathcal{I}})\to C_{0}(U_{\mathcal{J}})\) by \(\widetilde{T}=\Phi_{\mathcal{J}}\circ T\circ\Phi_{\mathcal{I}}^{-1}\). Notice that \(\widetilde{T}\) is a Banach lattice isometry and for each \(n\in\mathbb{N}\), there is \(A\in\mathcal{I}\) such that \(\widetilde{T}\chi_{A^{*}}(n)=1\). Following a proof of Holsztynski's theorem (as presented in [22]) we define, for \(p\in U_{\mathcal{I}}\), \[\Delta(p) = \{q\in U_{\mathcal{J}}:\,\widetilde{T}^{*}\delta_{q}(\{p\})=1\},\quad\text{and}\] \[\Delta = \bigcup_{p\in U_{\mathcal{I}}}\Delta(p).\] The following is a crucial ingredient; its proof can be found in [22]. **Claim 3.8**.: _The following statements are valid:_ 1. \(\Delta(p)\neq\emptyset\) _for all_ \(p\in U_{\mathcal{I}}\)_._ 2. \(\Delta(p)\cap\Delta(p^{\prime})=\emptyset\) _if_ \(p\neq p^{\prime}\)_._ 3. _The map_ \(\phi\colon\Delta\to U_{\mathcal{I}}\) _defined by_ \(\phi(q)=p\) _iff_ \(q\in\Delta(p)\) _is continuous and onto._ 4. \(\widetilde{T}F(q)=\sigma(q)F(\phi(q))\) _for all_ \(F\in C_{0}(U_{\mathcal{I}})\) _and_ \(q\in\Delta\)_, where_ \(\sigma\colon\Delta\to\mathbb{R}\) _is continuous and_ \(|\sigma(q)|=1\) _for all_ \(q\in\Delta\)_._ Since \(\widetilde{T}\) is a Banach lattice isometry, we have \(\sigma(q)=1\) for each \(q\in\Delta\). **Claim 3.9**.: _For each \(p\in U_{\mathcal{I}}\) we have_ \[\Delta(p)=\{q\in U_{\mathcal{J}}:\,\widetilde{T}F(q)=1,\text{ whenever }F\geq\mathbf{0}\text{ and }F(p)=1=\|F\|\}.\] Proof of Claim 3.9: Suppose that \(\widetilde{T}^{*}\delta_{q}(\{p\})=1\). Let \(F\in C_{0}(U_{\mathcal{I}})\) be such that \(F\geq\mathbf{0}\) and \(F(p)=1=\|F\|\). Since \(\chi_{\{p\}}\leq F\), we have \(\widetilde{T}F(q)=\widetilde{T}^{*}\delta_{q}(F)\geq\widetilde{T}^{*}\delta_{q}(\chi_{\{p\}})=1\). For the other inclusion, let \(q\in U_{\mathcal{J}}\) be such that \(\widetilde{T}F(q)=1\) whenever \(F\geq\mathbf{0}\) and \(F(p)=1=\|F\|\). Also let \(\mathcal{V}_{p}\) be a fundamental system of open neighborhoods of \(p\). If \(\varepsilon>0\) is given, there is an open \(V\subset U_{\mathcal{I}}\) with \(p\in V\) and \(\widetilde{T}^{*}\delta_{q}(V\setminus\{p\})<\varepsilon\). For each \(W\in\mathcal{V}_{p}\), let \(F_{W}\in C_{0}(U_{\mathcal{I}})\) be such that \(F_{W}(p)=1=\|F_{W}\|\), \(F_{W}\geq\mathbf{0}\) and \(F_{W}(U_{\mathcal{I}}\setminus W)=\{0\}\). Take \(W_{0}\in\mathcal{V}_{p}\) with \(W_{0}\subset V\). So, \[1=\widetilde{T}F_{W_{0}}(q)=\int_{U_{\mathcal{I}}}F_{W_{0}}\,d\widetilde{T}^{*}\delta_{q}=\int_{W_{0}}F_{W_{0}}\,d\widetilde{T}^{*}\delta_{q}\leq\widetilde{T}^{*}\delta_{q}(\{p\})+\widetilde{T}^{*}\delta_{q}(V\setminus\{p\})\leq 1+\varepsilon.\] Since \(\varepsilon>0\) is arbitrary, we conclude that \(\widetilde{T}^{*}\delta_{q}(\{p\})=1\), that is, \(q\in\Delta(p)\). Thus we have proved Claim 3.9. **Claim 3.10**.: _For each \(p\in U_{\mathcal{I}}\) we have_ \[\Delta(p)=\{q\in U_{\mathcal{J}}:\,\widetilde{T}\chi_{A^{*}}(q)=1\text{ for all }A\in\mathcal{I}\text{ with }p\in A^{*}\}.\] Proof of Claim 3.10: The inclusion \(\subseteq\) is clear from Claim 3.9. For the other direction, let \(\widetilde{\Delta}(p)\) be the right hand side of the identity above and \(q\in\widetilde{\Delta}(p)\).
Let \(\varepsilon>0\) be given. If \(F\in C_{0}(U_{\mathcal{I}})\) satisfies \(F\geq\mathbf{0}\) and \(F(p)=1=\|F\|\), then \(B=\{n\in\mathbb{N}:\,|F(n)-1|<\varepsilon\}\in p\). Let \(A\in\mathcal{I}\cap p\). Since \(q\in\widetilde{\Delta}(p)\), \(\widetilde{T}\chi_{(A\cap B)^{*}}(q)=1\). As \((1-\varepsilon)\chi_{(A\cap B)^{*}}\leq F\) and \(T\) is lattice order preserving, we have that \(1-\varepsilon\leq TF(q)\). Since \(\varepsilon\) was arbitrary, \(TF(q)=1\) and \(q\in\Delta(p)\). Thus we have proved Claim 3.10. **Claim 3.11**.: \(\mathbb{N}\subset\Delta\) Proof of Claim 3.11: Fix \(n\in\mathbb{N}\) and let \(\mathcal{F}_{n}=\{A\in\mathcal{I}:\ T\chi_{A}(n)=1\}\). Thus, from the hypothesis (1), \(\mathcal{F}_{n}\) is non-empty. Now if \(A,B\in\mathcal{F}_{n}\), then \(T\chi_{A}(n)=1\) and \(T\chi_{B}(n)=1\). Since \(T\chi_{(A\cap B)}=T(\chi_{A}\wedge\chi_{B})=T(\chi_{A})\wedge T(\chi_{B})\), we conclude that \(T\chi_{(A\cap B)}(n)=1\). Hence \(\mathcal{F}_{n}\) has the finite intersection property. Let \(p\in\beta\mathbb{N}\) be such that \(\mathcal{F}_{n}\subset p\). We will show that \(n\in\Delta(p)\). Notice that \(\widetilde{T}\chi_{A^{*}}(q)=q-\lim_{m}T\chi_{A}(m)\), for every \(q\in U_{\mathcal{I}}\) and \(A\in\mathcal{I}\). In particular, when \(q\) is the principal ultrafilter \(n\), we get \(\widetilde{T}\chi_{A^{*}}(n)=T\chi_{A}(n)\). Thus \(\mathcal{F}_{n}=\{A\in\mathcal{I}:\ \widetilde{T}\chi_{A^{*}}(n)=1\}\). As \(\mathcal{F}_{n}\subseteq p\), we have that \(p\in\mathcal{U}_{\mathcal{I}}\). Let \(A\in\mathcal{F}_{n}\) be fixed. To show that \(n\in\Delta(p)\) it suffices to have, by Claim 3.10, that \(\widetilde{T}\chi_{B^{*}}(n)=1\) for all \(B\in\mathcal{I}\) with \(p\in B^{*}\). Fix such \(B\) and suppose, towards a contradiction, that \(\widetilde{T}\chi_{B^{*}}(n)\neq 1\). From Claim 3.8 (4), we have \(\widetilde{T}\chi_{B^{*}}(n)=\chi_{B^{*}}(\phi(n))\). Thus \(\widetilde{T}\chi_{B^{*}}(n)=0\). On the other hand, \[1=T\chi_{A}(n)=T\chi_{(A\cap B)}(n)+T\chi_{(A\setminus B)}(n).\] As \(0\leq T\chi_{A\cap B}(n)\leq T\chi_{B}(n)\), we conclude that \(T\chi_{(A\setminus B)}(n)=1\) and \(A\setminus B\in p\), which is a contradiction. Thus we have proved Claim 3.11. Now we show that \(\mathbb{N}\subseteq\phi^{-1}(\mathbb{N})\) where \(\phi\) is as in Claim 3.8. Let \(n\in\mathbb{N}\). Then \(\mathcal{F}_{n}=\{A\in\mathcal{I}:\ T\chi_{A}(n)=1\}\). We claim that \(\bigcap\{A:\ A\in\mathcal{F}_{n}\}\neq\emptyset\). In fact, we have \(\chi_{\{n\}}\leq T\chi_{A}\) for all \(A\in\mathcal{F}_{n}\). Thus \[0<\bigwedge_{A\in\mathcal{F}_{n}}T\chi_{A}.\] Thus, by the hypothesis (2), there is \(m\) such that \(m\in A\) for all \(A\in\mathcal{F}_{n}\). Therefore \(n\in\Delta(m)\), that is, \(\phi(n)=m\). To end the proof, we let \(h:=\phi\restriction\mathbb{N}\). By Claim 3.8(3), \(h\) is onto. We show that \(h:\mathbb{N}\to\mathbb{N}\) is a Katetov reduction. We have \[T\chi_{A}(n)=\widetilde{T}\chi_{A^{*}}(n)=\chi_{A^{*}}(\phi(n))=\chi_{A}(\phi( n))=\chi_{\phi^{-1}(A)}(n)=\chi_{h^{-1}(A)}(n) \tag{3}\] for all \(A\in\mathcal{I}\) and \(n\in\mathbb{N}\). Thus, if \(A\in\mathcal{I}\), then \(\chi_{h^{-1}(A)}\in c_{0,\mathcal{J}}\) which means that \(h^{-1}(A)\in\mathcal{J}\). This shows that \(\mathcal{I}\leq_{K}\mathcal{J}\) and we are done. We end this section commenting about the role of the condition (1) in Theorem 3.7. We show first a simple way to construct subspaces of \(c_{0,\mathcal{I}}\), a particular case was already observed in [19]. 
**Proposition 3.12**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\) and \(A\subseteq\mathbb{N}\). Then \(c_{0,\mathcal{I}|A}\) is Banach lattice isometric to a closed ideal of \(c_{0,\mathcal{I}}\)._ Proof.: Recall that \(\mathcal{I}\upharpoonright A\) is an ideal on \(A\) and thus \(c_{0,\mathcal{I}|A}\) consists of sequences on \(A\). For each \(\mathbf{x}=(x_{n})_{n\in A}\in c_{0,\mathcal{I}|A}\), let \(T(\mathbf{x}):=(\chi_{A}(n)x_{n})_{n\in\mathbb{N}}\). Clearly \(T(\mathbf{x})\in c_{0,\mathcal{I}}\). The map \(T\) is the required Banach lattice isometry. It is not difficult to check that \(T(c_{0,\mathcal{I}|A})\) is an ideal of \(c_{0,\mathcal{I}}\). In the previous result, suppose \(\mathcal{I}\) is not \(\mathcal{P}(\mathbb{N})\) and \(A\in\mathcal{I}\) is infinite; then \(\mathcal{I}\upharpoonright A=\mathcal{P}(A)\) and thus \(\mathcal{I}\upharpoonright A\not\leq_{K}\mathcal{I}\). In this case condition (1) in Theorem 3.7 fails because \(T(\chi_{B})(n)=0\) for all \(B\) and any \(n\not\in A\). ## 4. Some lattice properties of \(c_{0,\mathcal{I}}\) In this section we present some properties of \(c_{0,\mathcal{I}}\) as an ideal of the Banach lattice \(\ell_{\infty}\). First, we characterize the closed ideals of \(\ell_{\infty}\). Part (iii) below says that it is somewhat harmless to assume that every closed ideal of \(\ell_{\infty}\) contains \(c_{0}\). The particular case of (i) below for a maximal ideal was shown in [18, Theorem 4.1]. **Proposition 4.1**.: _Let \(Y\) be a closed sublattice of \(\ell_{\infty}\). Then_ 1. \(Y\) _is an ideal iff there is an ideal_ \(\mathcal{I}\) _(not necessarily containing_ \(\mathsf{Fin}\)_) on_ \(\mathbb{N}\) _such that_ \(Y=c_{0,\mathcal{I}}\)_._ 2. \(c_{0}\subseteq c_{0,\mathcal{I}}\) _iff_ \(\mathsf{Fin}\subseteq\mathcal{I}\)_._ 3. _For each_ \(A\subseteq\mathbb{N}\)_, let_ \(Z_{A}=\{{\bf x}\in\ell_{\infty}:x_{n}=0\text{ for all }n\in A\}\)_. For each closed ideal_ \(Y\) _of_ \(\ell_{\infty}\)_, let_ \(A=\{n\in\mathbb{N}:\chi_{\{n\}}\not\in Y\}\)_; then_ \(Y=\langle Y\cup c_{0}\rangle\cap Z_{A}\)_._ Proof.: (i) We have already seen that every \(c_{0,\mathcal{I}}\) is an ideal of \(\ell_{\infty}\). Conversely, if \(Y\) is an ideal of \(\ell_{\infty}\), let \(\mathcal{I}=\{A\subseteq\mathbb{N}:\;\chi_{A}\in Y\}\). It is easy to verify that \(\mathcal{I}\) is an ideal. From Proposition 2.5, it follows that \(c_{0,\mathcal{I}}\subseteq Y\). For the other inclusion, fix \({\bf x}\in Y\) and \(\varepsilon>0\). By an argument analogous to that used in the proof of Proposition 2.5(2), there is \({\bf y}\in\operatorname{span}\{\chi_{A}:\;A\in\mathcal{I}\}\) such that \(\|{\bf x}-{\bf y}\|<\varepsilon\). (ii) is straightforward. (iii) Firstly, we prove that \(Y\subseteq Z_{A}\). Assume that \(Y\setminus Z_{A}\neq\emptyset\) and let \({\bf z}=(z_{n})\in Y\setminus Z_{A}\). So, there is \(m\in A\) such that \(z_{m}\neq 0\). Since \(|z_{m}|\chi_{\{m\}}\leq|{\bf z}|\) and \(Y\) is an ideal, \(z_{m}\chi_{\{m\}}\in Y\). Hence, \(\chi_{\{m\}}\in Y\), which is impossible. On the other hand, it is clear that \(Y\subseteq\langle Y\cup c_{0}\rangle\). For the other inclusion, let \(\mathcal{I}\) be the ideal given by (i), i.e., such that \(c_{0,\mathcal{I}}=Y\).
By Theorem 5.1 below we have \[\langle Y\cup c_{0}\rangle=\langle Y+c_{0}\rangle=\langle c_{0,\mathcal{I}}+c_ {0,{\sf Fin}}\rangle=\langle c_{0,\mathcal{I}\sqcup{\sf Fin}}\rangle=c_{0, \mathcal{I}\sqcup{\sf Fin}}.\] Now, if \({\bf x}\in\langle Y\cup c_{0}\rangle\cap Z_{A}\), then \({\bf x}\in c_{0,\mathcal{I}\sqcup{\sf Fin}}\). Let \(\varepsilon>0\) be given. Observe that \(A({\bf x},\varepsilon)\cap A=\emptyset\). If \(C\in{\sf Fin}\) and \(B\in\mathcal{I}\) satisfy \(A({\bf x},\varepsilon)=B\cup C\), we have \(C\in\mathcal{I}\) since \(C\cap A=\emptyset\). So, \(A({\bf x},\varepsilon)\in\mathcal{I}\). Whence, \({\bf x}\in c_{0,\mathcal{I}}=Y\). **Definition 4.2**.: 1. Two elements \(x,y\in\ell_{\infty}\) are called \(c_{0}\)_-disjoint_, and we write \(x\perp_{c_{0}}y\), if \(|x|\wedge|y|\in c_{0}\). 2. Two subspaces \(S\) and \(T\) of \(\ell_{\infty}\) are called \(c_{0}\)_-disjoint_ if \(s\perp_{c_{0}}t\) for each \(s\in S\) and \(t\in T\). 3. The \(c_{0}\)_-disjoint complement_ of a subspace \(Y\) of \(\ell_{\infty}\), denoted by \(c_{0}[Y]\), is \[c_{0}[Y]=\{x\in\ell_{\infty}:\;x\perp_{c_{0}}y\text{ for all }y\in Y\}=\{x\in\ell_{\infty}:\;|x| \wedge|y|\in c_{0}\text{ for all }y\in Y\}.\] **Theorem 4.3**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be two ideals over \(\mathbb{N}\). Then_ 1. \(c_{0,\mathcal{I}}\) _and_ \(c_{0,\mathcal{J}}\) _are_ \(c_{0}\)_-disjoint iff_ \(\mathcal{I}\subseteq\mathcal{J}^{\perp}\)_. In particular,_ \(c_{0,\mathcal{I}}\) _and_ \(c_{0,\mathcal{I}^{\perp}}\) _are_ \(c_{0}\)_-disjoint for every ideal_ \(\mathcal{I}\)_._ 2. \(c_{0,\mathcal{I}^{\perp}}=c_{0}[c_{0,\mathcal{I}}]\)_._ 3. \(\mathcal{I}\) _is Frechet iff_ \(c_{0}[c_{0}[c_{0,\mathcal{I}}]]=c_{0,\mathcal{I}}\)_._ Proof.: 1. Suppose \(\mathcal{I}\subseteq\mathcal{J}^{\perp}\) and let \({\bf x}=(x_{n})\in c_{0,\mathcal{I}}\) and \({\bf y}=(y_{n})\in c_{0,\mathcal{J}}\). Let \(\varepsilon>0\). Then \(A(\varepsilon,{\bf x})=\{n\in\mathbb{N}:|x_{n}|\geq\varepsilon\}\in\mathcal{I}\). Thus \(A(\varepsilon,{\bf x})\in\mathcal{J}^{\perp}\). On the other hand, \(A(\varepsilon,{\bf y})=\{n\in\mathbb{N}:|y_{n}|\geq\varepsilon\}\in\mathcal{J}\). Thus \(A(\varepsilon,{\bf x})\cap A(\varepsilon,{\bf y})\) is finite. Thus there is \(n_{0}\) such that for all \(n>n_{0}\), \(|x_{n}|<\varepsilon\) or \(|y_{n}|<\varepsilon\). Thus \(\min\{|x_{n}|,|y_{n}|\}<\varepsilon\) for all \(n>n_{0}\). Hence \(|{\bf x}|\wedge|{\bf y}|\in c_{0}\). Conversely, suppose \(\mathcal{I}\not\subseteq\mathcal{J}^{\perp}\). Let \(A\in\mathcal{I}\setminus\mathcal{J}^{\perp}\). Then there is \(B\subseteq A\) infinite such that \(B\in\mathcal{J}\cap\mathcal{I}\). Hence \(\chi_{B}\in c_{0,\mathcal{I}}\cap c_{0,\mathcal{J}}\) and clearly \(\chi_{B}\not\in c_{0}\) as \(B\) is infinite. For the second claim, we recall that \(\mathcal{I}\subseteq\mathcal{I}^{\perp\perp}\) for every ideal \(\mathcal{I}\). 2. The inclusion \(\subseteq\) follows from (i). Conversely, suppose \({\bf x}=(x_{n})\not\in c_{0,\mathcal{I}^{\perp}}\) and let \(1>\varepsilon>0\) be such that \(A(\varepsilon,{\bf x})\not\in\mathcal{I}^{\perp}\). Thus there is \(B\in\mathcal{I}\) such that \(C:=A(\varepsilon,{\bf x})\cap B\) is infinite. Then \(\chi_{C}\in c_{0,\mathcal{I}}\) and \(\varepsilon\chi_{C}\leq|{\bf x}|\wedge\chi_{C}\). Thus \(|{\bf x}|\wedge\chi_{C}\not\in c_{0}\). 3. Suppose \(\mathcal{I}\) is Frechet. 
By (ii), we have \[c_{0}[c_{0}[c_{0,\mathcal{I}}]]=c_{0}[c_{0,\mathcal{I}^{\perp}}]=c_{0,\mathcal{I}^{\perp\perp}}=c_{0,\mathcal{I}}.\] Conversely, to see that \(\mathcal{I}\) is Frechet it suffices to show that \(\mathcal{I}^{\perp\perp}\subseteq\mathcal{I}\). Let \(B\in\mathcal{I}^{\perp\perp}\). By (ii) and our assumption, \(\chi_{B}\in c_{0,\mathcal{I}^{\perp\perp}}=c_{0}[c_{0,\mathcal{I}^{\perp}}]=c_{0}[c_{0}[c_{0,\mathcal{I}}]]=c_{0,\mathcal{I}}\). From Proposition 2.5, we conclude \(B\in\mathcal{I}\). Recall that an ideal \(\mathcal{I}\) on \(\mathbb{N}\) is _tall_ if every infinite subset of \(\mathbb{N}\) contains an infinite member of \(\mathcal{I}\). A subset \(A\) of a Banach lattice \(E\) is called _order dense in \(B\subseteq E\)_ if for each \({\bf 0}\neq x\in B\) there exists \(a\in A\) such that \({\bf 0}<|a|\leq|x|\). Observe that \(c_{0,\mathcal{I}}\) is order dense in \(\ell_{\infty}\) for any ideal \(\mathcal{I}\) (containing \({\sf Fin}\)). **Theorem 4.4**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). The following statements are equivalent:_ 1. \(\mathcal{I}\) _is tall;_ 2. \(c_{0,\mathcal{I}}\setminus c_{0}\) _is order dense in_ \(\ell_{\infty}\setminus c_{0}\)_;_ 3. \(c_{0,\mathcal{I}^{\perp}}=c_{0}\)_._ Proof.: (1) \(\Rightarrow\) (2): Suppose that \(\mathcal{I}\) is tall and let \(\mathbf{0}\neq\mathbf{x}=(x_{n})\in\ell_{\infty}\setminus c_{0}\) be given. For some \(\varepsilon_{0}>0\), the set \(A(\varepsilon_{0},\mathbf{x})\) is infinite. The tallness of \(\mathcal{I}\) implies that there exists an infinite \(B\in\mathcal{I}\) satisfying \(B\subset A(\varepsilon_{0},\mathbf{x})\). So, \(\mathbf{y}=\chi_{B}\cdot\mathbf{x}\in c_{0,\mathcal{I}}\setminus c_{0}\) and \(\mathbf{0}<|\mathbf{y}|\leq|\mathbf{x}|\). (2) \(\Rightarrow\) (3): Suppose there is \(\mathbf{x}\in c_{0,\mathcal{I}^{\perp}}\) such that \(\mathbf{x}\not\in c_{0}\). Since \(c_{0,\mathcal{I}}\setminus c_{0}\) is order dense in \(\ell_{\infty}\setminus c_{0}\), there exists \(\mathbf{y}\in c_{0,\mathcal{I}}\setminus c_{0}\) satisfying \(0<|\mathbf{y}|\leq|\mathbf{x}|\). Thus, \(A(\varepsilon,\mathbf{y})\subset A(\varepsilon,\mathbf{x})\) for each \(\varepsilon>0\). There is \(\varepsilon_{0}>0\) with \(A(\varepsilon_{0},\mathbf{x})\) infinite. Since \(A(\varepsilon_{0},\mathbf{x})\in\mathcal{I}^{\perp}\) and \(A(\varepsilon,\mathbf{y})\in\mathcal{I}\), it follows that \(A(\varepsilon,\mathbf{y})\) is finite for each \(0<\varepsilon<\varepsilon_{0}\), which contradicts that \(\mathbf{y}\not\in c_{0}\). So, \(c_{0,\mathcal{I}^{\perp}}=c_{0}\). (3) \(\Rightarrow\) (1): The equality \(c_{0,\mathcal{I}^{\perp}}=c_{0}\) means that \(\mathcal{I}^{\perp}=\mathsf{Fin}\). Thus, \(\mathcal{I}\) is tall. ## 5. Banach spaces isomorphic to \(c_{0,\mathcal{I}}(X)\) In this section we show that, for some ideals, \(c_{0,\mathcal{I}}\) is isomorphic to a known classical Banach space. We do it for ideals of the form \(\mathcal{I}\sqcup\mathcal{J}\), \(\bigoplus\limits_{n\in\mathbb{N}}\mathcal{I}_{n}\), \(\mathcal{I}^{\omega\perp}\) and the Fubini product \(\mathcal{I}\times\mathcal{J}\). **Theorem 5.1**.: _Let \(X\) be a Banach space.
If \(\mathcal{I}\) and \(\mathcal{J}\) are ideals on \(\mathbb{N}\), then \(c_{0,\mathcal{I}\sqcup\mathcal{J}}(X)=c_{0,\mathcal{I}}(X)+c_{0,\mathcal{J}}(X)\)._ Proof.: If \(\mathbf{x}=(x_{n})\in c_{0,\mathcal{I}}(X)\) and \(\mathbf{y}=(y_{n})\in c_{0,\mathcal{J}}(X)\), then \(A(\varepsilon,\mathbf{x}+\mathbf{y})\subset A(\varepsilon/2,\mathbf{x})\cup A (\varepsilon/2,\mathbf{y})\) for all \(\varepsilon>0\). So, \(c_{0,\mathcal{I}}(X)+c_{0,\mathcal{J}}(X)\subset c_{0,\mathcal{I}\sqcup \mathcal{J}}(X)\). On the other hand, if \(\mathbf{z}=(z_{n})\in\ell_{\infty}(X)\) and \(\operatorname{supp}(\mathbf{z})\in\mathcal{I}\sqcup\mathcal{J}\), take \(A\in\mathcal{I}\) and \(B\in\mathcal{J}\) with \(\operatorname{supp}(\mathbf{z})=A\cup B\) and \(A\cap B=\emptyset\). By setting \(\mathbf{x}=(\chi_{A}(n)z_{n})\) and \(\mathbf{y}=(\chi_{B}(n)z_{n})\), we have \(\mathbf{z}=\mathbf{x}+\mathbf{y}\), \(\mathbf{x}\in c_{0,\mathcal{I}}\) and \(\mathbf{y}\in c_{0,\mathcal{J}}\). By Proposition 2.5(1) we conclude that \(c_{0,\mathcal{I}\sqcup\mathcal{J}}(X)\subset c_{0,\mathcal{I}}(X)+c_{0, \mathcal{J}}(X)\). **Theorem 5.2**.: _Let \(X\) be a Banach space, \(\{K_{n}:\;n\in\mathbb{N}\}\) be a partition of \(\mathbb{N}\) and \(\mathcal{I}_{n}\) be an ideal on \(K_{n}\) for each \(n\in\mathbb{N}\). If \(\mathcal{I}=\bigoplus\limits_{n\in\mathbb{N}}\mathcal{I}_{n}\), then \(c_{0,\mathcal{I}}(X)\) is isometric to \(\ell_{\infty}((c_{0,\mathcal{I}_{n}}(X))_{n\in\mathbb{N}})\). In particular, \(c_{0,\mathcal{J}^{\omega}}(X)\) is isometric to \(\ell_{\infty}(c_{0,\mathcal{J}}(X))\) for any ideal \(\mathcal{J}\)._ Proof.: Let \[\Psi\colon c_{0,\mathcal{I}}(X) \longrightarrow\ell_{\infty}((c_{0,\mathcal{I}_{n}}(X))_{n\in \mathbb{N}})\] \[(x_{n}) \longmapsto((x_{n})_{n\in K_{m}})_{m\in\mathbb{N}}.\] 1. To see that \(\Psi\) is a well defined linear isometry, let \(\mathbf{x}=(x_{n})\in c_{0,\mathcal{I}}(X)\) and \(\varepsilon>0\). Then \[A(\varepsilon,\mathbf{x})\in\mathcal{I}\quad\Longleftrightarrow\{n\in K_{m}:\; \|x_{n}\|\geq\varepsilon\}\in\mathcal{I}_{m}\text{ for each }m\in\mathbb{N}.\] Thus, \((x_{n})_{n\in K_{m}}\in c_{0,\mathcal{I}_{m}}(X)\) for all \(m\in\mathbb{N}\). On the other hand, \[\|\Psi(\mathbf{x})\|=\sup\limits_{m\in\mathbb{N}}\|(x_{n})_{n\in K_{m}}\|_{ \infty}=\sup\limits_{m\in\mathbb{N}}\sup\limits_{n\in K_{m}}\|x_{n}\|=\sup \limits_{n\in\mathbb{N}}\|x_{n}\|=\|\mathbf{x}\|.\] 2. To see that \(\Psi\) is onto, let \(\mathbf{y}=((y_{n}^{m}))_{m\in\mathbb{N}}\in\ell_{\infty}((c_{0,\mathcal{I}_{m} }(X))_{m\in\mathbb{N}})\) be given. Define \(\mathbf{x}=(x_{n})\in\ell_{\infty}(X)\) by \(x_{n}=y_{n}^{m}\) iff \(n\in K_{m}\). If \(\varepsilon>0\) and \(m\in\mathbb{N}\) are given, then \[\{n\in\mathbb{N}:\;\|x_{n}\|\geq\varepsilon\}\cap K_{m}=\{n\in K_{m}:\;\|y_{n}^ {m}\|\geq\varepsilon\}\in\mathcal{I}_{m}.\] So \(A(\varepsilon,\mathbf{x})\in\mathcal{I}\) for each \(\varepsilon>0\) i.e. \(\mathbf{x}\in c_{0,\mathcal{I}}(X)\). Clearly \(\Psi(\mathbf{x})=\mathbf{y}\). **Corollary 5.3**.: _Let \(X\) be a Banach space. Let \(\{A,B\}\) be a partition of \(\mathbb{N}\) and \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(A\) and \(B\), respectively. 
Then \(c_{0,\mathcal{I}\oplus\mathcal{J}}(X)\) is isometric to \(c_{0,\mathcal{I}}(X)\oplus_{\infty}c_{0,\mathcal{J}}(X)\)._ **Theorem 5.4**.: _Let \(X\) be a Banach space, \(\{K_{n}:\;n\in\mathbb{N}\}\) be a partition of \(\mathbb{N}\), \(\mathcal{I}_{n}\) be an ideal on \(K_{n}\) for each \(n\in\mathbb{N}\) and \(\mathcal{I}=\bigoplus\limits_{n\in\mathbb{N}}\mathcal{I}_{n}\). Then \(c_{0,\mathcal{I}^{\perp}}(X)\) is isometric to \(c_{0}((c_{0,\mathcal{I}^{\perp}_{j}}(X))_{j\in\mathbb{N}})\). In particular, \(c_{0,\mathcal{J}^{\omega\perp}}(X)\) is isometric to \(c_{0}(c_{0,\mathcal{J}^{\perp}}(X))\) for any ideal \(\mathcal{J}\)._ Proof.: Recall that \[A\in\mathcal{I}^{\perp}\Longleftrightarrow(\exists N\in\mathbb{N})(A\subseteq \bigcup_{i\leq N}K_{i}\text{ and }(\forall i\leq N)\;(A\cap K_{i}\in\mathcal{I}^{\perp}_{i})).\] For any \(\mathbf{x}\in c_{0,\mathcal{I}^{\perp}}(X)\), we set \(\mathbf{y}^{j}=(x_{n})_{n\in K_{j}}\) for each \(j\in\mathbb{N}\). We have the following: 1. \(\mathbf{y}^{j}\in c_{0,\mathcal{I}^{\perp}_{j}}(X)\) for all \(j\in\mathbb{N}\). In fact, let \(j\in\mathbb{N}\) and \(\delta>0\) be given. There is \(N\in\mathbb{N}\) such that \(A(\delta,\mathbf{x})\subset K_{1}\cup\dots\cup K_{N}\) and \(A(\delta,\mathbf{x})\cap K_{i}\in\mathcal{I}^{\perp}_{i}\) for all \(i\in\{1,\dots,N\}\). Observe that \(A(\delta,\mathbf{x})\cap K_{j}=A(\delta,\mathbf{y}^{j})\). Thus, if \(j\leq N\), we have that \(A(\delta,\mathbf{y}^{j})\in\mathcal{I}^{\perp}_{i}\). Otherwise, \(A(\delta,\mathbf{y}^{j})=A(\delta,\mathbf{x})\cap K_{j}=\emptyset\). In either case, we conclude that \(\mathbf{y}^{j}\in c_{0,\mathcal{I}^{\perp}_{j}}(X)\). 2. The sequence \((\mathbf{y}^{j})_{j\in\mathbb{N}}\) converges to \(0\). If \(\varepsilon>0\) is given, there is \(N\in\mathbb{N}\) such that \(A(\varepsilon,\mathbf{x})\subset K_{1}\cup\dots\cup K_{N}\). If \(j>N\) and \(n\in K_{j}\), we have \(\|x_{n}\|<\varepsilon\). So \(\|\mathbf{y}^{j}\|=\sup\limits_{n\in K_{j}}\|x_{n}\|\leq\varepsilon\) if \(j>N\). The two previous claims show that \[\Psi :c_{0,\mathcal{I}^{\perp}}(X) \longrightarrow c_{0}((c_{0,\mathcal{I}^{\perp}_{j}}(X))_{j\in \mathbb{N}})\] \[\mathbf{x}=(x_{n}) \longmapsto(\mathbf{y}^{j})_{j\in\mathbb{N}}\] is well defined and is clearly a linear isometry. To see that \(\Psi\) is onto, let \((\mathbf{y}^{j})\in c_{0}((c_{0,\mathcal{I}^{\perp}_{j}}(X))_{j\in\mathbb{N}})\) be given and consider the sequence \(\mathbf{x}=(x_{n})_{n}\) defined by \(x_{n}=y^{j}_{n}\) if \(n\in K_{j}\). Let \(\varepsilon>0\), there is \(N\in\mathbb{N}\) such that \(\|\mathbf{y}^{j}\|=\sup\limits_{n\in K_{j}}\|y^{j}_{n}\|<\varepsilon\) for all \(j>N\). Thus, \(A(\varepsilon,\mathbf{x})\subset K_{1}\cup\dots\cup K_{N}\). Also note that \(A(\varepsilon,\mathbf{x})\cap K_{j}=\{n\in K_{j}:\;\|y^{j}_{n}\|\geq \varepsilon\}\in\mathcal{I}^{\perp}_{j}\) for all \(j\leq N\), which yields \((x_{n})\in c_{0,\mathcal{I}^{\perp}}(X)\). Clearly, \(\Psi(\mathbf{x})=(\mathbf{y}^{j})\). To study the Banach space \(c_{0,\mathcal{I}\times\mathcal{J}}(X)\) we need to compute the norm of the quotient \(\ell_{\infty}/c_{0,\mathcal{I}}\). For that end we recall the definition of \(\mathcal{I}-\limsup\) of a sequence. **Definition 5.5**.: [6] Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). 
For each sequence \(\mathbf{x}=(x_{n})\) in \(\mathbb{R}\), we set \[B_{\mathbf{x}}=\{b\in\mathbb{R}:\;\{k\in\mathbb{N}:\;x_{k}>b\}\not\in \mathcal{I}\}\] and \[\mathcal{I}-\limsup x_{n}=\begin{cases}\sup B_{\mathbf{x}},&B_{ \mathbf{x}}\neq\emptyset,\\ -\infty,&B_{\mathbf{x}}=\emptyset.\end{cases}\] Note that a sequence \((x_{n})\) is \(\mathcal{I}\)-convergent to \(0\) iff \(\mathcal{I}-\limsup|x_{n}|=0\) (see [6, Theorem 4]). **Lemma 5.6**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\)._ 1. _If_ \(x_{n}\leq y_{n}\) _for each_ \(n\in\mathbb{N}\)_, then_ \(\mathcal{I}-\limsup x_{n}\leq\mathcal{I}-\limsup y_{n}\)_._ 2. _If_ \(c\in\mathbb{R}\)_, then_ \(\mathcal{I}-\limsup c+x_{n}=c+\mathcal{I}-\limsup x_{n}\)_._ Proof.: (1) If \(\mathcal{I}-\limsup x_{n}=-\infty\), there is nothing to prove. Suppose that \(B_{\mathbf{x}}\neq\emptyset\) and let \(\varepsilon>0\) be given. If \(b\in B_{\mathbf{x}}\) satisfies \(\mathcal{I}-\limsup x_{n}-\varepsilon<b\), then \(\{k\in\mathbb{N}:\;y_{k}>b\}\not\in\mathcal{I}\). Hence \(b\in B_{\mathbf{y}}\) and \(\mathcal{I}-\limsup x_{n}-\varepsilon\leq\mathcal{I}-\limsup y_{n}\). Since \(\varepsilon>0\) was arbitrary, the result follows. (2) Just observe that \[\{b\in\mathbb{R}:\;\{k\in\mathbb{N}:\;x_{k}+c>b\}\not\in\mathcal{I}\}=c+\{b^{ \prime}\in\mathbb{R}:\;\{k\in\mathbb{N}:\;x_{k}>b^{\prime}\}\not\in\mathcal{I}\}.\qed\] **Lemma 5.7**.: _Let \(X\) be a Banach space and \(\mathbf{x}=(x_{n})\in\ell_{\infty}(X)\). Then_ \[\mathcal{I}-\limsup\|x_{n}\|=\|(x_{n})+c_{0,\mathcal{I}}(X)\|.\] Proof.: Let \(\mathbf{z}\in c_{0,\mathcal{I}}(X)\). Then \(\|x_{n}\|\leq\|\mathbf{x}-\mathbf{z}\|+\|z_{n}\|\) for all \(n\in\mathbb{N}\). By Lemma 5.6 we have \(\mathcal{I}-\limsup\|x_{n}\|\leq\|\mathbf{x}-\mathbf{z}\|\). Thus, \(\mathcal{I}-\limsup\|x_{n}\|\leq\inf\{\|\mathbf{x}-\mathbf{z}\|:\,\mathbf{z} \in c_{0,\mathcal{I}}(X)\}\). Now let \(r\in\mathbb{R}\) be such that \[\mathcal{I}-\limsup\|x_{n}\|<r<\inf\{\|\mathbf{x}-\mathbf{z}\|:\,\mathbf{z} \in c_{0,\mathcal{I}}(X)\}.\] Note that \(A:=A(r,\mathbf{x})\in\mathcal{I}\). If \(A=\emptyset\), we are done. If not, for \(n\in A\) we have \(\mathbf{z}=x_{n}\chi_{\{n\}}\in c_{0,\mathcal{I}}(X)\) and \(r<\|\mathbf{x}-\mathbf{z}\|=0\), which is impossible. Whence, \(\mathcal{I}-\limsup\|x_{n}\|=\inf\{\|\mathbf{x}-\mathbf{z}\|:\,\mathbf{z}\in c _{0,\mathcal{I}}(X)\}=\|(x_{n})+c_{0,\mathcal{I}}(X)\|\). **Theorem 5.8**.: _Let \(X\) be a Banach space and \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\mathbb{N}\). Then \(c_{0,\mathcal{I}\times\mathcal{J}}(X)/c_{0,\mathcal{J}^{\omega}}(X)\) is isomorphic to \(c_{0,\mathcal{I}}(\ell_{\infty}(X)/c_{0,\mathcal{J}}(X))\)._ Proof.: We start by proving: **Claim 5.9**.: _Let \(\mathbf{x}=(x_{n})\in\ell_{\infty}(X)\) and \(\delta>0\) be given._ 1. _If_ \(A(\delta,\mathbf{x})\in\mathcal{I}\)_, then_ \(\mathcal{I}-\limsup_{j}\|x_{j}\|\leq\delta\)_._ 2. _If_ \(\mathcal{I}-\limsup_{j}\|x_{j}\|\leq\delta\)_, then_ \(A(2\delta,\mathbf{x})\in\mathcal{I}\)_._ Proof of claim.: (1): Suppose that \(\mathcal{I}-\limsup_{j}\|x_{j}\|>\delta\). There is \(b\in\mathbb{R}\) such that \(\delta<b\) and \(\{j\in\mathbb{N}:\,\|x_{j}\|>b\}\not\in\mathcal{I}\). But we have \(\{j\in\mathbb{N}:\,\|x_{j}\|>b\}\subset A(\delta,\mathbf{x})\in\mathcal{I}\), which is absurd. So, \(\mathcal{I}-\limsup_{j}\|x_{j}\|\leq\delta\). 
(2): If \(A(2\delta,\mathbf{x})=\{j\in\mathbb{N}:\,\|x_{j}\|\geq 2\delta\}\not\in\mathcal{I}\), we have \(\{j\in\mathbb{N}:\,\|x_{j}\|>\delta^{\prime}\}\not\in\mathcal{I}\) for all \(\delta^{\prime}<2\delta\). Hence, \(\delta^{\prime}\leq\mathcal{I}-\limsup_{j}\|x_{j}\|\). Since \(\delta^{\prime}<2\delta\) were arbitrary, we conclude that \(2\delta\leq\mathcal{I}-\limsup_{j}\|x_{j}\|\), a contradiction. Now we continue with the proof of the theorem. If \(\mathbf{x}=(x_{(n,m)})\in\ell_{\infty}(\mathbb{N}\times\mathbb{N})(X)\), we set \(\mathbf{x}_{(n)}=(x_{(n,m)})_{m\in\mathbb{N}}\). Let \[\Psi \colon c_{0,\mathcal{I}\times\mathcal{J}}(X)\to c_{0,\mathcal{I }}(\ell_{\infty}(X)/c_{0,\mathcal{J}}(X))\] \[\mathbf{x}=(x_{(n,m)})\mapsto(\mathbf{x}_{(n)}+c_{0,\mathcal{J }}(X))_{n\in\mathbb{N}}.\] From Claim 5.9 and Proposition 5.7 we have \[\mathbf{x}=(x_{(n,m)})\in c_{0,\mathcal{I}\times\mathcal{J}}(X)\] \[\iff(\forall\,\varepsilon>0)\left(\{n\in\mathbb{N}:\,\{m\in \mathbb{N}:\,\|x_{(n,m)}\|\geq\varepsilon\}\not\in\mathcal{J}\}\in\mathcal{I}\right)\] \[\iff(\forall\,\varepsilon>0)\left(\{n\in\mathbb{N}:\,\mathcal{J}- \limsup\|\mathbf{x}_{(n)}\|\geq\varepsilon\}\in\mathcal{I}\right)\] \[\iff(\forall\,\varepsilon>0)\left(\{n\in\mathbb{N}:\,\|\mathbf{x }_{(n)}+c_{0,\mathcal{J}}(X)\|\geq\varepsilon\}\in\mathcal{I}\right)\] \[\iff(\mathbf{x}_{(n)}+c_{0,\mathcal{J}}(X))_{n\in\mathbb{N}}\in c _{0,\mathcal{I}}(\ell_{\infty}(X)/c_{0,\mathcal{J}}(X)).\] So, \(\Psi\) is well defined. Now, if \((\mathbf{y}^{n}+c_{0,\mathcal{J}}(X))_{n\in\mathbb{N}}\in c_{0,\mathcal{I}}( \ell_{\infty}(X)/c_{0,\mathcal{J}}(X))\) is given, we set \(x_{(n,m)}=y_{m}^{n}\) for \(n,m\in\mathbb{N}\). It is not difficult to check that \(\mathbf{x}=(x_{(n,m)})\in c_{0,\mathcal{I}\times\mathcal{J}}(X)\) and \(\Psi(\mathbf{x})=(\mathbf{y}^{n}+c_{0,\mathcal{J}}(X))_{n\in\mathbb{N}}\). Now, \[\|\Psi(\mathbf{x})\|=\sup_{n\in\mathbb{N}}\left(\mathcal{J}-\limsup\|\mathbf{x }_{(n)}\|\right)\leq\sup_{n\in\mathbb{N}}\sup_{m\in\mathbb{N}}\|x_{(n,m)}\|=\| \mathbf{x}\|_{\infty}.\] Also we have \[\Psi(\mathbf{x})=\mathbf{0} \Longleftrightarrow\left(\forall\,n\in\mathbb{N}\right)\left( \,\mathbf{x}_{(n)}\in c_{0,\mathcal{J}}(X)\,\right)\] \[\Longleftrightarrow\left(\forall\,\varepsilon>0\right)\left( \forall\,n\in\mathbb{N}\right)\left(\,A(\varepsilon,\mathbf{x}_{(n)})\in \mathcal{J}\,\right)\] \[\Longleftrightarrow\left(\forall\,\varepsilon>0\right)\left( \,A(\varepsilon,\mathbf{x})\in\mathcal{J}^{\omega}\,\right)\] \[\Longleftrightarrow\mathbf{x}\in c_{0,\mathcal{J}^{\omega}}(X).\qed\] To state the next result, we recall that if \(X\) and \(Y\) are Banach spaces, \(X\overset{\vee}{\otimes}Y\) denotes the _injective tensor product_ of \(X\) and \(Y\) (see [7, Chapter 1]). **Theorem 5.10**.: _Let \(X\) be a Banach space and \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). 
Then \(c_{0,\mathcal{I}}\overset{\vee}{\otimes}X\) is isometric to a subspace of \(c_{0,\mathcal{I}}(X)\)._ Proof.: If \(u=\sum_{j=1}^{m}(x_{n}^{j})_{n}\otimes y_{j}\in c_{0,\mathcal{I}}\otimes X\), then \[|u|_{\vee} =\sup_{x^{*}\in B_{c_{0,\mathcal{I}}^{*}},y^{*}\in B_{X^{*}}} \left|\sum_{j=1}^{m}x^{*}((x_{n}^{j})_{n})y^{*}(y_{j})\right|\] \[=\sup_{x^{*}\in B_{c_{0,\mathcal{I}}^{*}},y^{*}\in B_{X^{*}}} \left|x^{*}\left(\sum_{j=1}^{m}(x_{n}^{j}y^{*}(y_{j}))_{n}\right)\right|\] \[=\sup_{y^{*}\in B_{X^{*}}}\left\|\sum_{j=1}^{m}(x_{n}^{j}y^{*}(y_ {j}))_{n}\right\|\] \[=\sup_{y^{*}\in B_{X^{*}}}\left\|\left(y^{*}\left(\sum_{j=1}^{m} x_{n}^{j}y_{j}\right)\right)_{n}\right\|\] \[=\left\|\left(\sum_{j=1}^{m}x_{n}^{j}y_{j}\right)_{n}\right\|.\] So the map \[\sum_{j=1}^{m}(x_{n}^{j})_{n}\otimes y_{j}\in c_{0,\mathcal{I}}\otimes X \mapsto\left(\sum_{j=1}^{m}x_{n}^{j}y_{j}\right)_{n}\in c_{0,\mathcal{I}}(X)\] extends to an isometry from \(c_{0,\mathcal{I}}\overset{\vee}{\otimes}X\) to \(c_{0,\mathcal{I}}(X)\). **Remark 5.11**.: The above isometry is not onto in general. Indeed, if \(\mathcal{I}=\mathcal{P}(\mathbb{N})\), then \(c_{0,\mathcal{I}}\) is \(\ell_{\infty}\) and it is known that the existence of an isometry from \(\ell_{\infty}\overset{\vee}{\otimes}X\) onto \(\ell_{\infty}(X)\) implies that \(X\) contains a complemented copy of \(c_{0}\) (see [20]). However, when \(\mathcal{I}=\mathsf{Fin}\), the above isometry is onto as it is established in [7, Theorem 1.1.11]. **Question 5.12**.: For which ideals \(\mathcal{I}\), is \(c_{0,\mathcal{I}}\overset{\vee}{\otimes}X\) isometric to \(c_{0,\mathcal{I}}(X)\)? ### Some examples In [12] was studied the smallest collection \(\mathcal{B}\) of ideals on \(\mathbb{N}\) containing the ideal of finite sets and closed under countable direct sums and the operation of taking orthogonal. They defined by recursion a sequence of ideals \(P_{\alpha}\) and \(Q_{\alpha}\) for \(\alpha<\omega_{1}\). For every limit ordinal \(\alpha<\omega_{1}\) we fix an increasing sequence \((\upsilon_{n}^{\alpha})_{n}\) of ordinals such that \(\sup_{n}(\upsilon_{n}^{\alpha})=\alpha\). 1. \(P_{0}=\mathcal{P}(\mathbb{N})\) and \(Q_{0}=(P_{0})^{\perp}=\mathsf{Fin}\). 2. \(P_{\alpha+1}=(P_{\alpha}^{\perp})^{\omega}\). 3. \(P_{\alpha}=\oplus_{n}(P_{\upsilon^{\alpha}})^{\perp}\), for \(\alpha<\omega_{1}\) a limit ordinal. 4. \(Q_{\alpha}=(P_{\alpha})^{\perp}\) for every \(\alpha<\omega_{1}\). Then \(\mathcal{B}=\{P_{\alpha},Q_{\alpha}:\alpha<\omega_{1}\}\) has exactly \(\aleph_{1}\) non isomorphic ideals. For example, \(P_{1}=\mathsf{Fin}^{\omega}\) is the sum countably many times the ideal \(\mathsf{Fin}\) of all finite subsets of \(\mathbb{N}\), it is a well known ideal sometimes denoted by \(\{\emptyset\}\times\mathsf{Fin}\). Its orthogonal \(Q_{1}=\mathsf{Fin}^{\omega\perp}\) is also denoted \(\mathsf{Fin}\times\{\emptyset\}\). All these ideals are Frechet (that is, they satisfy that \(\mathcal{I}^{\perp\perp}=\mathcal{I}\)). In particular, \((Q_{\alpha})^{\perp}=P_{\alpha}\). 
**Theorem 5.13**.: _[_12_]_ _(i) Every ideal in \(\mathcal{B}\) is isomorphic to either \(P_{\alpha}\), \(Q_{\alpha}\) or \(P_{\alpha}\oplus Q_{\alpha}\) for some \(\alpha<\omega_{1}\)._ _(ii) For every \(\alpha<\beta<\omega_{1}\), there are subsets \(A,B\) of \(\mathbb{N}\) such that \(P_{\alpha}\approx Q_{\beta}\upharpoonright A\) and \(Q_{\alpha}\approx P_{\beta}\upharpoonright B\)._ _(iii) All ideals in \(\mathcal{B}\) are Borel subsets of \(2^{\mathbb{N}}\), and thus they are meager ideals._ Let \(\mathsf{WO}(\mathbb{Q})\) denote the ideal of all well founded subsets of \((\mathbb{Q},<)\). For simplicity, we will write \(\mathsf{WO}\) instead of \(\mathsf{WO}(\mathbb{Q})\). Observe that \(\mathsf{WO}^{\perp}\) is the ideal of well founded subsets of \((\mathbb{Q},<^{*})\), where \(<^{*}\) is the reversed order of \(\mathbb{Q}\). In fact, the map \(x\mapsto-x\) from \(\mathbb{Q}\) onto \(\mathbb{Q}\) is an isomorphism between \(\mathsf{WO}\) and \(\mathsf{WO}^{\perp}\). In particular, \(\mathsf{WO}\) is a Frechet ideal. We recall also that \(\mathsf{WO}\) is a meager ideal since it is a co-analytic subset of the Cantor cube \(\{0,1\}^{\mathbb{Q}}\). From the results in [12] we have **Theorem 5.14**.: _Every ideal \(\mathcal{I}\) in \(\mathcal{B}\) is isomorphic to a restriction \(\mathsf{WO}\upharpoonright A\) for some \(A\subseteq\mathbb{Q}\)._ **Theorem 5.15**.: _The following hold for every countable ordinal \(\alpha\)._ 1. \(c_{0,P_{\alpha+1}}\) _is isometric to_ \(\ell_{\infty}(c_{0,Q_{\alpha}})\)_._ 2. \(c_{0,Q_{\alpha+1}}\) _is isometric to_ \(c_{0}(c_{0,P_{\alpha}})\)_._ 3. \(c_{0,P_{\alpha}}\) _is isometric to_ \(\ell_{\infty}((c_{0,Q_{\upsilon_{n}^{\alpha}}})_{n})\) _for_ \(\alpha\) _a limit ordinal._ 4. \(c_{0,Q_{\alpha}}\) _is isometric to_ \(c_{0}((c_{0,P_{\upsilon_{n}^{\alpha}}})_{n})\) _for_ \(\alpha\) _a limit ordinal._ Proof.: (i) Since \(P_{\alpha+1}=(Q_{\alpha})^{\omega}\), the claim follows from Theorem 5.2. (ii) As \(Q_{\alpha+1}=(Q_{\alpha})^{\omega\perp}\) and \((Q_{\alpha})^{\perp}=P_{\alpha}\), the claim follows from Theorem 5.4. (iii) Let \(\alpha\) be a limit ordinal. The result follows from Theorem 5.2 as \(P_{\alpha}=\oplus_{n}(P_{\upsilon_{n}^{\alpha}})^{\perp}=\oplus_{n}Q_{\upsilon_{n}^{\alpha}}\). (iv) Since \(Q_{\alpha}=(P_{\alpha})^{\perp}=(\oplus_{n}(P_{\upsilon_{n}^{\alpha}})^{\perp})^{\perp}\), the result follows from Theorem 5.4. From Proposition 3.12 and Theorem 5.14 we immediately get the following. Observe that \(c_{0,\mathsf{WO}}\) is not isomorphic to \(\ell_{\infty}\) by Theorem 2.6. **Theorem 5.16**.: _For every \(\alpha<\omega_{1}\), \(c_{0,P_{\alpha}}\) and \(c_{0,Q_{\alpha}}\) are isometric to a closed subspace of \(c_{0,\mathsf{WO}}\)._ Some particular instances of Theorem 5.15 are the following: \(c_{0,\mathsf{Fin}}=c_{0}\); \(c_{0,\mathsf{Fin}^{\perp}}=\ell_{\infty}\); \(c_{0,\mathsf{Fin}^{\perp\omega}}\) is isometric to \(\ell_{\infty}(\ell_{\infty})=\ell_{\infty}\); \(c_{0,\mathsf{Fin}^{\omega}}\) is isometric to \(\ell_{\infty}(c_{0})\); \(c_{0,\mathsf{Fin}^{\omega\perp}}\) is isometric to \(c_{0}(\ell_{\infty})\); and \(c_{0,\mathsf{Fin}^{\omega\perp\omega}}\) is isometric to \(\ell_{\infty}(c_{0}(\ell_{\infty}))\). Cembranos and Mendoza [5] showed that \(\ell_{\infty}(c_{0})\) and \(c_{0}(\ell_{\infty})\) are not isomorphic. Since \(\mathsf{Fin}^{\omega}\) and \(\mathsf{Fin}^{\omega\perp}\) are not isomorphic 1, from Theorem 3.3, we obtain the following weak version of their result.
Footnote 1: A quick way to see this is noticing that \(\mathsf{Fin}^{\omega\perp}\) is \(F_{\sigma}\) and \(\mathsf{Fin}^{\omega}\) is not. **Theorem 5.17**.: _[_5_]_ \(\ell_{\infty}(c_{0})\) _is not isometric to_ \(c_{0}(\ell_{\infty})\)_._ Concerning the Banach space \(c_{0,\mathsf{Fin}\times\mathsf{Fin}}\), Theorem 5.8 implies that \(c_{0}(\ell_{\infty}/c_{0})\) is isomorphic to \(c_{0,\mathsf{Fin}\times\mathsf{Fin}}/c_{0,\mathsf{Fin}^{\omega}}\). On the other hand, the following result gives a quotient representation of \(c_{0,\mathsf{Fin}\times\mathsf{Fin}}\). **Theorem 5.18**.: \(c_{0,\mathsf{Fin}\times\mathsf{Fin}}\) _is isomorphic to_ \((\ell_{\infty}(c_{0})\times c_{0}(\ell_{\infty}))/K\)_, where_ \(K\) _is isomorphic to_ \(c_{0}\)_._ Proof.: Consider the maps \(\Psi_{1}\colon c_{0,\mathsf{Fin}^{\omega}}\to\ell_{\infty}((c_{0}(K_{n}))_{n\in\mathbb{N}})\) and \(\Psi_{2}\colon c_{0,\mathsf{Fin}^{\omega\perp}}\to c_{0}((\ell_{\infty}(K_{n}))_{n\in\mathbb{N}})\) defined in Theorems 5.2 and 5.4, respectively. Let \(S\colon c_{0,\mathsf{Fin}^{\omega}}\times c_{0,\mathsf{Fin}^{\omega\perp}}\to c_{0,\mathsf{Fin}^{\omega}}+c_{0,\mathsf{Fin}^{\omega\perp}}\) be defined by \(S(a,b)=a+b\), for \(a\in c_{0,\mathsf{Fin}^{\omega}}\) and \(b\in c_{0,\mathsf{Fin}^{\omega\perp}}\). If \(\Psi_{1}^{-1}\times\Psi_{2}^{-1}\) is the natural map from \(\ell_{\infty}((c_{0}(K_{n}))_{n\in\mathbb{N}})\times c_{0}((\ell_{\infty}(K_{n}))_{n\in\mathbb{N}})\) onto \(c_{0,\mathsf{Fin}^{\omega}}\times c_{0,\mathsf{Fin}^{\omega\perp}}\), then \[S\circ(\Psi_{1}^{-1}\times\Psi_{2}^{-1})\colon(x,y)\in\ell_{\infty}((c_{0}(K_{n}))_{n\in\mathbb{N}})\times c_{0}((\ell_{\infty}(K_{n}))_{n\in\mathbb{N}})\mapsto\Psi_{1}^{-1}(x)+\Psi_{2}^{-1}(y)\in c_{0,\mathsf{Fin}^{\omega}}+c_{0,\mathsf{Fin}^{\omega\perp}}\] is linear, continuous and onto. By definition of \(\mathsf{Fin}\times\mathsf{Fin}\) we have \[\mathsf{Fin}\times\mathsf{Fin}=\{A\cup B:\;A\in\mathsf{Fin}^{\omega},\;B\in\mathsf{Fin}^{\omega\perp}\}.\] So, \(c_{0,\mathsf{Fin}\times\mathsf{Fin}}=c_{0,\mathsf{Fin}^{\omega}}+c_{0,\mathsf{Fin}^{\omega\perp}}\) by Theorem 5.1. From the fact that \(\ell_{\infty}((c_{0}(K_{n}))_{n\in\mathbb{N}})\) and \(c_{0}((\ell_{\infty}(K_{n}))_{n\in\mathbb{N}})\) are isometric to \(\ell_{\infty}(c_{0})\) and \(c_{0}(\ell_{\infty})\), respectively, we get the first part of the result. Finally, note that \(K=\ker S\circ(\Psi_{1}^{-1}\times\Psi_{2}^{-1})\) is isomorphic to \(c_{0}(c_{0})\). Indeed, we have \[K =(\Psi_{1}^{-1}\times\Psi_{2}^{-1})^{-1}(\ker S)\] \[=(\Psi_{1}^{-1}\times\Psi_{2}^{-1})^{-1}(\{(a,-a):\,a\in c_{0,\mathsf{Fin}^{\omega}}\cap c_{0,\mathsf{Fin}^{\omega\perp}}\})\] \[=(\Psi_{1}^{-1}\times\Psi_{2}^{-1})^{-1}(\{(a,-a):\,a\in c_{0,\mathsf{Fin}}\})\] \[=\{(x,-x):\,x\in c_{0}((c_{0}(K_{n}))_{n\in\mathbb{N}})\}.\] So, the map \(x\in c_{0}((c_{0}(K_{n}))_{n\in\mathbb{N}})\mapsto(x,-x)\in K\) is an isomorphism. ### Grothendieck property Recall that a Banach space \(X\) is _Grothendieck_ (or has the _Grothendieck property_) if every operator from \(X\) to \(c_{0}\) is weakly compact. It is known that the Grothendieck property passes to quotients and complemented subspaces [11, Proposition 3.1.4]. We are now interested in the Grothendieck property for the Banach spaces \(c_{0,\mathcal{I}}\). In [11, Problem 9] it is asked for which ideals \(\mathcal{I}\) the space \(c_{0,\mathcal{I}}\) is Grothendieck. Concerning this question, we have the following results.
**Theorem 5.19**.: _For each ordinal \(1\leq\alpha<\omega_{1}\), \(c_{0,P_{\alpha}}\) and \(c_{0,Q_{\alpha}}\) are not Grothendieck spaces._ Proof.: We proceed by induction on \(\alpha\). If \(\alpha=1\), \(c_{0,P_{1}}\) and \(c_{0,Q_{1}}\) are isometric to \(\ell_{\infty}(c_{0})\) and \(c_{0}(\ell_{\infty})\), respectively. Note that \(c_{0,P_{1}}\) is not Grothendieck because \(c_{0}\) is a quotient of it. On the other hand, \(c_{0,Q_{1}}\) is not Grothendieck by [11, Proposition 5.3.8] since \(c_{0}(\ell_{\infty})\) is isometric to \(c_{0}\overset{\vee}{\otimes}\ell_{\infty}\) [7, Theorem 1.1.11]. Now, let \(\beta<\omega_{1}\) and suppose that \(c_{0,P_{\beta}}\) and \(c_{0,Q_{\beta}}\) are not Grothendieck. By Theorem 5.15, \(c_{0,P_{\beta+1}}\) and \(c_{0,Q_{\beta+1}}\) are isometric to \(\ell_{\infty}(c_{0,Q_{\beta}})\) and \(c_{0}(c_{0,P_{\beta}})\), respectively. Since \(c_{0,Q_{\beta}}\) and \(c_{0,P_{\beta}}\) are quotients of \(\ell_{\infty}(c_{0,Q_{\beta}})\) and \(c_{0}(c_{0,P_{\beta}})\), respectively, we conclude that \(c_{0,P_{\beta+1}}\) and \(c_{0,Q_{\beta+1}}\) do not have the Grothendieck property. Now, let \(\alpha\) be a limit ordinal and assume that \(c_{0,P_{\beta}}\) and \(c_{0,Q_{\beta}}\) do not have the Grothendieck property for all \(\beta<\alpha\). Fix an increasing sequence \((v_{n}^{\alpha})_{n}\) of ordinals such that \(\sup_{n}(v_{n}^{\alpha})=\alpha\). By Theorem 5.15, \(c_{0,P_{\alpha}}\) and \(c_{0,Q_{\alpha}}\) are isometric to \(\ell_{\infty}((c_{0,Q_{v_{n}^{\alpha}}})_{n})\) and \(c_{0}((c_{0,P_{v_{n}^{\alpha}}})_{n})\), respectively. Since \(c_{0,Q_{v_{n}^{\alpha}}}\) and \(c_{0,P_{v_{n}^{\alpha}}}\) do not have the Grothendieck property for any \(n\in\mathbb{N}\), we infer that \(\ell_{\infty}((c_{0,Q_{v_{n}^{\alpha}}})_{n})\) and \(c_{0}((c_{0,P_{v_{n}^{\alpha}}})_{n})\) are not Grothendieck spaces. **Theorem 5.20**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). If \(\mathcal{I}\) is meager, then \(c_{0,\mathsf{Fin}\times\mathcal{I}}\) is not a Grothendieck space._ Proof.: Suppose that \(c_{0,\mathsf{Fin}\times\mathcal{I}}\) is Grothendieck. By Theorem 5.8, \(c_{0}(\ell_{\infty}/c_{0,\mathcal{I}})\) is Grothendieck. By [11, Proposition 5.3.8], \(\ell_{\infty}/c_{0,\mathcal{I}}\) must be finite-dimensional. Consequently, \(c_{0,\mathcal{I}}\) is complemented in \(\ell_{\infty}\). By Theorem 2.6, \(\mathcal{I}\) is then a non-meager ideal, which contradicts our assumption. It is known that \(c_{0,\mathcal{I}}\) is complemented if \(\mathcal{I}\) is a maximal ideal ([19]). A more general fact is the following result, for which we include a proof for the sake of completeness. **Proposition 5.21**.: _[_16_, p. 2]_ _If \(\mathcal{I}\) is an intersection of finitely many maximal ideals on \(\mathbb{N}\), then \(c_{0,\mathcal{I}}\) is complemented in \(\ell_{\infty}\) and, in particular, it is Grothendieck._ Proof.: Let \(\mathcal{J}_{1},\ldots,\mathcal{J}_{n}\) be maximal ideals on \(\mathbb{N}\) such that \(\mathcal{I}=\bigcap_{k=1}^{n}\mathcal{J}_{k}\). Consider the map \(\Phi\colon\ell_{\infty}\to\mathbb{R}^{n}\) defined by \[\Phi(\mathbf{x})=(\mathcal{J}_{1}^{*}-\lim x_{n},\ldots,\mathcal{J}_{n}^{*}-\lim x_{n}),\quad\mathbf{x}=(x_{n}).\] It is not difficult to check that \(\Phi\) is an onto bounded linear operator. Note that \(\ker\Phi=c_{0,\mathcal{I}}\). So, \(\ell_{\infty}/c_{0,\mathcal{I}}\) is finite dimensional. Hence, \(c_{0,\mathcal{I}}\) is complemented in \(\ell_{\infty}\).
**Theorem 5.22**.: _If \(\mathcal{I}\) is an intersection of finitely many maximal ideals on \(\mathbb{N}\), then \(c_{0,\mathcal{I}^{\omega}}\) is complemented in \(\ell_{\infty}\) and, in particular, is Grothendieck._ Proof.: From the hypothesis, we know that \(c_{0,\mathcal{I}}\) is complemented in \(\ell_{\infty}\) (Proposition 5.21). Let \(Y\subset\ell_{\infty}\) be a subspace such that \(\ell_{\infty}=c_{0,\mathcal{I}}\oplus Y\). So, \[\ell_{\infty}\sim\ell_{\infty}(\ell_{\infty})\sim\ell_{\infty}(c_{0,\mathcal{ I}})\oplus_{\infty}\ell_{\infty}(Y)\sim c_{0,\mathcal{I}^{\omega}}\oplus_{ \infty}\ell_{\infty}(Y),\] by Theorem 5.2. Whence \(c_{0,\mathcal{I}^{\omega}}\) is isomorphic to a complemented subspace of \(\ell_{\infty}\). **Question 5.23**.: Is \(c_{0,\mathsf{WO}}\) a Grothendieck space? ## Acknowledgments Part of the research of this paper was developed during a postdoctoral stay of the first author supported by Fundacao de Apoio a Pesquisa do Estado de Sao Paulo, FAPESP, Processo 2021/01144-2, and UIS.
2301.13512
OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization and Model Predictive Control
This paper presents OpTaS, a task specification Python library for Trajectory Optimization (TO) and Model Predictive Control (MPC) in robotics. Both TO and MPC are increasingly receiving interest in optimal control and in particular handling dynamic environments. While a flurry of software libraries exists to handle such problems, they either provide interfaces that are limited to a specific problem formulation (e.g. TracIK, CHOMP), or are large and statically specify the problem in configuration files (e.g. EXOTica, eTaSL). OpTaS, on the other hand, allows a user to specify custom nonlinear constrained problem formulations in a single Python script allowing the controller parameters to be modified during execution. The library provides interface to several open source and commercial solvers (e.g. IPOPT, SNOPT, KNITRO, SciPy) to facilitate integration with established workflows in robotics. Further benefits of OpTaS are highlighted through a thorough comparison with common libraries. An additional key advantage of OpTaS is the ability to define optimal control tasks in the joint space, task space, or indeed simultaneously. The code for OpTaS is easily installed via pip, and the source code with examples can be found at https://github.com/cmower/optas.
Christopher E. Mower, João Moura, Nazanin Zamani Behabadi, Sethu Vijayakumar, Tom Vercauteren, Christos Bergeles
2023-01-31T10:00:39Z
http://arxiv.org/abs/2301.13512v1
OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization and Model Predictive Control ###### Abstract This paper presents OpTaS, a task specification Python library for Trajectory Optimization (TO) and Model Predictive Control (MPC) in robotics. Both TO and MPC are increasingly receiving interest in optimal control and in particular handling dynamic environments. While a flurry of software libraries exists to handle such problems, they either provide interfaces that are limited to a specific problem formulation (e.g. TracItK, CHOMP), or are large and statically specify the problem in configuration files (e.g. EXOTica, eTaSL). OpTaS, on the other hand, allows a user to specify custom nonlinear constrained problem formulations in a single Python script allowing the controller parameters to be modified during execution. The library provides interface to several open source and commercial solvers (e.g. IPOPT, SNOPT, KNITRO, SciPy) to facilitate integration with established workflows in robotics. Further benefits of OpTaS are highlighted through a thorough comparison with common libraries. An additional key advantage of OpTaS is the ability to define optimal control tasks in the joint space, task space, or indeed simultaneously. The code for OpTaS is easily installed via pip, and the source code with examples can be found at github.com/cmower/optas. ## I Introduction High-dimensional motion planners and controllers are integrated in many of the approaches for solving complex manipulation tasks. Consider, for example, a robot operating in an unstructured and dynamic environment that, e.g. places an object onto a shelf, or drilling during pedicle screw fixation in surgery (see Fig. 1). In such cases, a planner and controller must account for objectives/constraints like bi-manual coordination, contact constraints between robot-object and object-environment, and be robust to disturbances. Efficient motion planning and fast controllers are an effective way of enabling robots to perform these tasks subject to motion constraints, system dynamics, and changing task objectives. Sampling-based planners [1] are effective, however, they typically require considerable post-processing (e.g. trajectory smoothing). Optimal planners (i.e. that are provably asymptotically optimal, e.g. RRT\({}^{*}\)) are promising but inefficient (in terms of computation duration) for solving high-dimensional problems [2]. Gradient-based trajectory optimization (TO) is a key approach in optimal control, and has also been utilized for motion planning. This approach underpins many recent works in robotics for planning and control, e.g. [3, 4, 5, 6, 7, 8, 9, 10]. Given an initialization, optimization finds a locally optimal trajectory, comprised of a stream of state and control commands subject to motion constraints and system dynamics (i.e. equations of motion). Several reliable open-source and commercial optimization solvers exist for solving TO problems, e.g. IPOPT [11], KNITRO [12], and SNOPT [13]. However, despite the success of the optimization approaches proposed in the literature and motion planning frameworks such as MoveIt [14], there is a lack of libraries enabling fast development/prototyping of optimization-based approaches for multi-robot setups that easily interfaces with these efficient solvers. 
To fill this gap, this paper proposes OpTaS, a user-friendly task-specification library for rapid development and deployment of nonlinear optimization-based planning and control approaches such as Model Predictive Control (MPC). The library leverages the symbolic framework of CasADi [15], enabling function derivatives to arbitrary order via automatic differentiation. This is important since some solvers (e.g. SNOPT) utilize the Jacobian and Hessian. Fig. 1: Examples of contact-rich manipulation showing (a) a robot placing an item on a shelf, (b) a human interacting with a robot performing a drilling task during pedicle screw fixation. Image credit: University Hospital Balgrist, Daniel Hager Photography & Film GmbH. ### _Related work_ In this section, we review popular optimization solvers and their interfaces. Next, we describe works similar (in formulation) to our proposed library. Finally, we summarize the key differences and highlight our contributions. Table I summarizes alternatives and how they compare to OpTaS. There are several capable open-source and commercial optimization solvers. First considering quadratic programming, the OSQP method provides a general purpose solver based on the alternating direction method of multipliers [17]. Alternatively, CVXOPT implements a custom interior-point solver [18]. IPOPT implements an interior-point solver for constrained nonlinear optimization. SNOPT provides an interface to an SQP algorithm [13]. KNITRO also solves general mixed-integer programs [12]. Please note that SNOPT and KNITRO are proprietary. These solvers are often implemented in low-level programming languages such as C, C++, or FORTRAN. However, there are also many interfaces to these methods via higher level languages, such as Python, to make implementation and adoption easier. The SciPy library contains the optimize module [19] to interface with low-level routines, e.g. conjugate gradient and BFGS algorithm [20], the Simplex method [21], COBYLA [22], and SLSQP [23]. A requirement when using optimization-based methods is the need for function gradients. Several popular software packages implement automatic differentiation [24, 15, 25]. We leverage the CasADi framework [15] for deriving gradients. Our choice for CasADI is based on the fact that it comes readily integrated with common solvers for optimal control. To the best of our knowledge, JAX and PyTorch are not currently integrated with constrained nonlinear optimization solvers. Similar to our proposed library are the following packages. The MoveIt package provides the user with specific IK/planning formulations and provides interfaces to solvers for the particular problem [14]. The eTaSL library [26] allows the user to specify custom tasks specifications, but only supports problems formulated as quadratic programs. The CASCLIK library uses CasADi and provides support for constraint-based inverse kinematic controllers [27], to the best of our knowledge they allow optimization in the joint space. We provide joint space, task space optimization and also the ability to simultaneously optimize in the joint/task space. Furthermore, our framework supports optimization of several robots in a single formulation. The EXOTica library allows the user to specify a problem formulation from an XML file [28]. The package, however, requires the user to supply analytic gradients for additional sub-task models. 
### _Contributions_ This paper makes the following contributions: * A task-specification library, in Python, for rapid development/deployment of TO approaches for multi-robot setups. * Modeling of the robot kinematics (forward kinematics, geometric Jacobian, etc.), to arbitrary derivative order, given a URDF specification. * An interface that allows a user to easily reformulate an optimal control problem, and define parameterized constraints for online modification of the optimization problem. * Analysis comparing the performance of the library (i.e. solver convergence, solution quality) versus existing software packages. Further demonstrations highlight the ease with which nonlinear constrained optimization problems can be set up and deployed in realistic settings. \begin{table} \begin{tabular}{l|l c c c c c c} & Languages & End-pose & Traj. & MPC & Solver & AutoDiff & ROS & Re-form \\ \hline **OpTaS** & Python & ✓ & ✓ & ✓ & QP/NLP & ✓ & ✓ & ✓ \\ EXOTica & Python/C++ & ✓ & ✓ & ✗ & QP/NLP & ✗ & ✓ & ✓ \\ MoveIt & Python/C++ & ✓ & ✓ & ✗ & QP & ✗ & ✓ & ✗ \\ TracIK & Python/C++ & ✓ & ✗ & QP & ✗ & ✗ & ✗ \\ RBDL & Python/C++ & ✓ & ✗ & QP & ✗ & ✗ & ✗ \\ eTaSL & C++ & ✓ & ✗ & QP & ✓ & ✗ & ✓ \\ OpenRAVE & Python & ✗ & ✓ & ✗ & QP & ✗ & ✓ & ✗ \\ \end{tabular} \end{table} TABLE I: Comparison between OpTaS and common alternatives in the literature. Fig. 2: System overview for the proposed OpTaS library. **Red** highlights the main features of the proposed library. **Green** shows configuration parameter input. **Grey** shows third-party frameworks/libraries. Finally, the image in the top-right corner shows integration with the ROS-PyBullet Interface [16]. ## II Problem Formulation We can write an optimal control formulation of a TO or planning problem as \[\min_{x,u}\text{cost}(x,u;T)\quad\text{subject to}\quad\begin{cases}\dot{x}=f(x,u)\\ x\in\mathbb{X}\\ u\in\mathbb{U}\end{cases}\tag{1}\] where \(t\) denotes time, and \(x=x(t)\in\mathbb{R}^{n_{x}}\) and \(u=u(t)\in\mathbb{R}^{n_{u}}\) denote the states and controls, with \(T\) being the time-horizon for the planned trajectory. The scalar function \(\text{cost}:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\to\mathbb{R}\) represents the cost function (typically a weighted sum of terms, each modeling a certain sub-task), the dot notation denotes a derivative with respect to time (i.e. \(\dot{x}\equiv\frac{dx}{dt}\)), \(f\) represents the system dynamics (equations of motion), and \(\mathbb{X}\subseteq\mathbb{R}^{n_{x}}\) and \(\mathbb{U}\subseteq\mathbb{R}^{n_{u}}\) are feasible regions for the states and controls respectively (modeled by a set of equality and inequality constraints). Direct optimal control optimizes the controls \(u\) at a discrete set of time instances, using numerical methods (e.g. Euler or Runge-Kutta) to integrate the system dynamics over the time horizon \(T\) [29]. Given an initialization \(x^{\text{init}},u^{\text{init}}\), a locally optimal trajectory \(x^{*},u^{*}\) is found by solving (1). As discussed in Sec. I, many works propose optimization-based approaches for planning and control. These can all be formulated under the same framework, i.e. a TO problem as in (1). The goal of our work is to deliver a library that allows a user to quickly develop and prototype constrained nonlinear TO for multi-robot problems, and deploy them for motion generation. The library includes two types of problems, IK and task-space TO, and indeed both simultaneously.
Common steps, such as transcription that transforms the problem's task-level description into a form accepted by numerical optimization solver routines, should be automated and thus not burden the user. Furthermore, many works in practice require the ability to adapt constraints dynamically to handle changes in the environment (e.g. MPC). This motivates a constraint parameterization feature. ## III Proposed Framework In this section, we describe the main features of the proposed library shown in Fig. 2. The library is completely implemented in the Python programming language. We chose Python because it is simple for beginners but also versatile with many well-developed libraries, and it easily facilitates fast prototyping. ### _Robot model_ The robot model (RobotModel) provides the kinematic modeling and specifies the time derivative orders required for the optimization problem. The only requirement is a URDF to instantiate the object2. A key feature is that we can include several robots in the TO, which is useful for dual-arm and whole-body optimization. Additional base frames and end-effector links can be added programmatically (for example, when several robots are included in the optimization, their base frames should be registered within a global coordinate frame). Footnote 2: [http://wiki.ros.org/urdf](http://wiki.ros.org/urdf) The RobotModel class allows access to data such as: the number of degrees of freedom, the names of the actuated joints, the upper and lower actuated joint limits, and the kinematics model. Furthermore, we provide methods to compute the forward kinematics and geometric Jacobian in any given reference frame. Several methods modeling the kinematics are supplied, given a specification from the user for the base frame and end-effector frame. These methods include: the \(4\times 4\) homogeneous transformation matrix, the translation position, rotational representations (e.g. Euler angles, quaternions), and the geometric and analytical Jacobian. Each of these methods depends on a joint state (supplied as either a Python list, NumPy array, or CasADi symbolic array). ### _Task model_ Several works optimize robot motion in the task space and then compute the IK as a secondary step, e.g. [8, 9]. The task model (TaskModel) provides a representation for any arbitrary trajectory, for example, the three-dimensional position trajectory of an end-effector. In the same way as for the robot model, the time derivatives can be specified in the interface to an arbitrary order. ### _Optimization builder_ This section introduces and describes the optimization builder class (OptimizationBuilder). The purpose of this class is to aid the user to easily set up a TO problem, and then automatically build an optimization problem model (Sec. III-D) that interfaces with a solver interface (Sec. III-E). The development cycle consists of specifying the task (i.e. decision variables, parameters, cost function, and constraints) using intuitive syntax and symbolic variables. Then, the builder creates an optimization problem class, which interfaces with several solvers. ### _Optimization problem model_ The standard TO is stated in (1). This task/problem is specified by the optimization builder class in intuitive syntax for the user. Transcribing the problem to a form that can be solved by off-the-shelf solvers is non-trivial. The output of the optimization builder method build is an optimization problem model that allows us to interface with several solvers.
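To make this workflow concrete, the following minimal sketch loads a robot model from a URDF, instantiates the optimization builder, and transcribes the (still empty) task specification into an optimization problem model with build. It is only a sketch: the class and method names (RobotModel, OptimizationBuilder, build) are those introduced in this section, but the exact constructor arguments (e.g. urdf_filename, time_derivs, robots) are assumptions based on typical usage and may differ between versions of the library.

```python
import optas

# Kinematic model from a URDF (robot model); time_derivs selects the
# time-derivative orders (0 = position, 1 = velocity) used in the problem.
robot = optas.RobotModel(urdf_filename="robot.urdf", time_derivs=[0, 1])
robot_name = robot.get_name()

# Task-specification object over a horizon of T time steps (optimization builder).
T = 20
builder = optas.OptimizationBuilder(T, robots=[robot])

# ... cost terms and constraints are added here (see the following subsections) ...

# Transcribe the specification into an optimization problem model; this object
# is what the solver interfaces described later consume.
optimization = builder.build()
```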
The most general optimization problem that is modeled by OpTaS is given by \[X^{*}=\operatorname*{arg\,min}_{X}\ f(X;P) \tag{2a}\] \[\text{subject to}\] \[k(X;P) =M(P)X+c(P)\geq 0\] (2b) \[a(X;P) =A(P)X+b(P)=0\] (2c) \[g(X;P) \geq 0\] (2d) \[h(X;P) =0 \tag{2e}\] where \(X=[vec(x)^{T},vec(u)^{T}]^{T}\in\mathbb{R}^{n_{X}}\) is the decision variable array such that \(x,u\) are as defined in (1) and \(vec(\cdot)\) is a function that returns its input as a 1-dimensional vector, \(P\in\mathbb{R}^{n_{P}}\) is the vectorized parameters, \(f:\mathbb{R}^{n_{X}}\rightarrow\mathbb{R}\) denotes the objective function, \(k:\mathbb{R}^{n_{X}}\rightarrow\mathbb{R}^{n_{x}}\) denotes the linear inequality constraints, \(a:\mathbb{R}^{n_{X}}\rightarrow\mathbb{R}^{n_{x}}\) denotes the linear equality constraints, \(g:\mathbb{R}^{n_{X}}\rightarrow\mathbb{R}^{n_{g}}\) denotes the nonlinear inequality constraints, and \(h:\mathbb{R}^{n_{X}}\rightarrow\mathbb{R}^{n_{h}}\) denotes the nonlinear equality constraints. The decision variables \(X\) are all the joint states and other variables specified by the user stacked into a single vector. Similarly for the parameters, cost terms, and constraints. Vectorization is made possible by the SXContainer data structure implemented in the sx_container module. This data structure enables automatic transcription of the TO problem specified in (1) into the form (2). Of course, not all task specifications will require definitions for each of the functions in (2). Depending on the structure of the objective function and constraints, the required time budget, and accuracy, some solvers will be more appropriate for solving (2). For example, a quadratic programming solver that only handles linear constraints (e.g. OSQP [17]) is unsuitable for solving a problem with nonlinear objective function and nonlinear constraints. The build process automatically identifies the optimization problem type, exposing only the relevant solvers. Several problem types are available to the user: unconstrained quadratic cost, linearly constrained with quadratic cost, nonlinear constrained with quadratic cost, unconstrained with nonlinear cost, linearly constrained with nonlinear cost, nonlinear cost and constraints. #### Iii-B1 Initialization Upon initialization of the optimization builder class we can specify **(i)** the number of time steps in the trajectory, **(ii)** several robot and task models (given a unique name for each), **(iii)** the joint states (positions and required time-derivatives) that integrate the decision variable array, **(iv)** task space labels, dimensions, and derivatives to also integrate the decision variable array, **(v)** a Boolean describing the alignment of the derivatives (Fig. 3), and **(vi)** a Boolean indicating whether to optimize time steps. The alignment of time-derivatives can be specified in two ways. Each derivative is aligned with its corresponding state (alignement), or otherwise. This is specified by the derivvs_align flag in the optimization builder interface and shown diagramatically in Fig. 3. In addition, the user can also optimize the time-steps between each state. The time derivatives can be integrated over time, e.g. \(q_{t+1}=q_{t}+\delta\tau_{t}\dot{q}_{t}\), where \(\delta\tau_{t}\) is an increment in time. When optimize_time=True, then each \(\delta\tau_{t}\) is included as decision variables in the optimal control problem. 
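A minimal sketch of this initialization step is shown below; the keyword names follow the description above (the alignment flag appears as derivs_align in the released library), and the horizon and URDF path are illustrative assumptions.

```python
import optas

T = 20  # number of time steps in the planned trajectory (illustrative)

# Robot model whose joint positions (derivative order 0) and velocities (order 1)
# are added to the decision variable array.
robot = optas.RobotModel(urdf_filename="kuka_lwr.urdf", time_derivs=[0, 1])

# Builder configured with aligned derivatives (Fig. 3) and fixed, non-optimized time steps.
builder = optas.OptimizationBuilder(
    T=T,
    robots=[robot],
    derivs_align=True,
    optimize_time=False,
)
```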
#### Iii-B2 Decision variables and parameters Decision variables are specified in the optimization builder class interface for the joint space, task space, and time steps. Each group of variables is given a unique label and can be retrieved using the get_model_state method. States are retrieved by specifying a robot name or task name, the required time index, and the required time derivative order. Additional decision variables can be included in the problem by using the add_decision_variables method, given a unique name and dimension. Parameters for the problem (e.g. safe distances) can be specified using the add_parameter method. To specify a new parameter, a unique name and dimension are required. #### Iii-B3 Cost and constraint functions The cost function in (1) is assumed to be made up of several cost terms, i.e. \[\text{cost}(x,u;T)=\sum_{i}\ c_{i}(x,u;T) \tag{3}\] where \(c_{i}:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}\) is an individual cost term modeling a specific sub-task. For example, let us define the cost terms \(c_{0}=\|\psi(x_{T})-\psi^{*}\|^{2}\) and \(c_{1}=\lambda\int_{0}^{T}\ \|u\|^{2}\ dt\) (note, discretization is implicit in this formulation), where \(\psi:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{3}\) is a function for the forward kinematics position (note, this can be provided by the robot model class as described in Sec. III-A), \(\psi^{*}\in\mathbb{R}^{3}\) is a goal task space position, and \(0<\lambda\in\mathbb{R}\) is a scaling term used to weight the relative importance of one cost term against the other. Thus, \(c_{0}\) describes an ideal final state, and \(c_{1}\) encourages trajectories with minimal control signals (e.g. minimal joint velocities). Each cost term is added to the problem using the add_cost_term method; the build sequence ensures each term is added to the objective function. Several constraints can be added to the optimization problem by using the add_equality_constraint and add_leq_inequality_constraint methods that add equality and inequality constraints respectively. When the constraints are added to the problem, they are first checked to see if they are linear with respect to the decision variables. This functionality allows the library to differentiate between linear and nonlinear constraints. Additionally, OpTaS offers several methods that provide an implementation for common constraints, as, for example, joint position/velocity limits and time-integration of the system dynamics \(f\) (e.g. joint velocities can be integrated to positions). Fig. 3: Joint state alignment with time. User supplies derivvs_align that specifies how joint state time derivatives should be aligned. ### _Solver interface_ OpTaS provides interfaces to solvers (open-source and commercial) that interface with CasADi [15] (such as IPOPT [11], SNOPT [13], KNITRO [12], and Gurobi [30]), the Scipy minimize method [19], OSQP [17], and CVX-OPT [18]. #### Iii-E1 Initialization of solver When the solver is initialized, several variables are set up and the optimization problem object is set as a class attribute. The user must then call the setup method, which is itself an interface to the solver initialization that the user has chosen. The requirement of this method is to set up the interface for the specific solver; relevant solver parameters are passed to the interface at this stage. #### Iii-E2 Resetting the interface When using the solver as a controller, it is expected that the solver will be called more than once.
In the case for feedback controllers or controllers with parameterized constraints (e.g. obstacles), this requires a way to reset the problem parameters. Furthermore, the initial seed for the optimizer is often required to be reset at each control loop cycle. To reset the initial seed and problem parameters the user calls reset_initial_seed, and reset_parameters, respectively. Both the initial seed and parameters are initialized by giving the name of the variables. The required vectorization is internally performed by the solver utilizing features of the SXContainer data structure. Note, if any decision variables or parameters are not specified in the reset methods then they automatically default to zero. This enables warm-starting the optimization routine, e.g. with the solution of the previous time-step problem. #### Iii-E3 Solving an optimization problem The optimization problem is solved by calling the solve method. This method passes the optimization problem to the desired solver. The resulting data from the solver is collected and transformed back into the state trajectory for each robot. A method is provided, named interpolate, is used to interpolate the computed trajectories across time. Additionally, the method stats retrieves available optimization statistics (e.g. number of iterations). #### Iii-E4 Extensible solver interface The solver interface has been implemented to allow for extensibility, i.e. additional optimization solvers can be easily integrated into the framework. When a user would like to include a new solver interface, they must create a new class that inherits from the Solver class. In their sub-class definition they must implement three methods: (i) setup which (as described above) initializes the solver interface, (ii) _solve that calls the solver and returns the optimized variable \(X^{*}\), and (iii) stats that returns any statistics from the solver. ### _Additional features_ Support for integration with ROS [31] is provided out-of-the-box. The ROS node provided is integrated with the ROS-PyBullet Interface [16] so the publishers/subscribers can connect a robot in the optimization problem with a robot simulated in PyBullet. In addition, we provide a port of the spatialmath library by Corke [32] that supports CasADi variables. This library defines methods for manipulating homogeneous transformation matrices, quaternions, Euler angles, etc. using CasADi symbolic variables. ## IV Code Example In this section, we describe a common TO problem and give the code that models the problem. We aim to highlight how straightforward it is to setup a problem. Consider a serial link manipulator, and goal to find a collision-free plan over time horizon \(T\) to a goal end-effector position \(p_{g}\) given a starting configuration \(q_{c}\). A single spherical collision is represented by a position \(o\) and radius \(r\). The robot configuration \(q_{t}\) represent states, and the velocities \(\dot{q}_{t}\) are controls. The cost function is given by \(\|p(q_{T})-p_{g}\|^{2}\) where \(p\) is the position of the end-effector given by the forward kinematics. We solve the problem by minimizing the cost function subject to the constraints: (i) initial configuration, \(q_{0}=q_{c}\), (ii) joint limits \(q^{-}\leq q_{t}\leq q^{+}\), and (iii) obstacle avoidance, \(\|p(q_{t})-o\|^{2}\geq r^{2}\). The system dynamics is represented by several equality constraints \(q_{t+1}=q_{t}+\delta t\dot{q}_{t}\) that can be specified by methods already in-built into OpTaS. 
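Since the listing referred to as Fig. 4 is an image, the following is a hedged sketch of how the problem described above could be written with the interface from Sec. III; the URDF path, link name, numeric values, the argument order of the constraint methods, and the limit/integration helpers (enforce_model_limits, integrate_model_states) are assumptions, so the exact OpTaS signatures may differ.

```python
import casadi as cs
import numpy as np
import optas

T, dt = 20, 0.1                                   # horizon and time step (illustrative)
robot = optas.RobotModel(urdf_filename="kuka_lwr.urdf", time_derivs=[0, 1])
name = robot.get_name()
ee = "end_effector_link"                          # end-effector link name (assumed)

builder = optas.OptimizationBuilder(T=T, robots=[robot])

# Parameters set at solve time: start configuration, goal position, obstacle position/radius.
qc = builder.add_parameter("qc", robot.ndof)
pg = builder.add_parameter("pg", 3)
o = builder.add_parameter("o", 3)
r = builder.add_parameter("r", 1)

# Cost: squared distance between the final end-effector position p(q_T) and the goal p_g.
qT = builder.get_model_state(name, T - 1)
builder.add_cost_term("goal", cs.sumsqr(robot.get_global_link_position(ee, qT) - pg))

# Constraints: initial configuration, joint limits, system dynamics, obstacle avoidance.
builder.add_equality_constraint("init", builder.get_model_state(name, 0), qc)
builder.enforce_model_limits(name)                          # q^- <= q_t <= q^+ (in-built helper)
builder.integrate_model_states(name, time_deriv=1, dt=dt)   # q_{t+1} = q_t + dt * dq_t (in-built helper)
for t in range(T):
    p_t = robot.get_global_link_position(ee, builder.get_model_state(name, t))
    # ||p(q_t) - o||^2 >= r^2, written as r^2 <= ||p(q_t) - o||^2 (argument order assumed).
    builder.add_leq_inequality_constraint(f"obs_{t}", r**2, cs.sumsqr(p_t - o))

# Build the problem, set up a solver, reset parameters, and solve.
solver = optas.CasADiSolver(builder.build()).setup("ipopt")
solver.reset_parameters({
    "qc": np.zeros(robot.ndof),
    "pg": [0.5, 0.2, 0.4],
    "o": [0.4, 0.0, 0.3],
    "r": 0.1,
})
solution = solver.solve()
# solver.interpolate(...) can then resample the planned joint trajectory in time (Sec. III-E3).
```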
The code for the TO problem above is shown in Fig. 4. Fig. 4: Example code for TO described in Section IV. ## V Experiments ### _Optimization along custom dimensions_ Popular solvers, such as TracIK [33], require the user to provide a 6D pose as the task space goal. Whilst this is applicable to several robotics problems (e.g. pick-and-place), it may not be necessary to optimize each task space dimension (e.g. spraying applications do not require optimization in the roll angular direction). Furthermore, optimizing in more dimensions than necessary may be disadvantageous. OpTaS can optimize or neglect any desired task space dimension. This can have certain advantages, for example increasing the robot workspace. Consider a non-prehensile pushing task along a plane: optimizing the full 6D pose may not be ideal since the task is two dimensional. Optimizing in the two dimensional plane and specifying boundary constraints on the third linear spatial dimension increases the robot's workspace. We set up a tracking experiment in OpTaS using a simulated Kuka LWR robot arm to compare the two cases: (i) optimize the full 6D pose, and (ii) optimize the 2D linear position. The robot is given an initial configuration (Fig. 5(a) left) and the task is to move the end-effector with a velocity of constant magnitude and direction in the 2D plane. The end configuration for each approach is shown in Fig. 5(a) right and the end-effector trajectories are shown in Fig. 5(b). We see that the 2D optimization problem is able to reach a greater distance, highlighting that the robot workspace is increased. ### _Performance comparison_ In this section, we demonstrate that OpTaS can formulate similar problems and compare its performance to alternatives. First, we model, with OpTaS, the same problem as used in TracIK [33] and in addition we also model the problem using EXOTica [28]. The Scipy SLSQP solver [23] was used for OpTaS and EXOTica. With the same Kuka LWR robot arm as in the previous experiment, we set up a task where the robot must track a figure-of-eight motion in task space (Fig. 6) and record the CPU time for the solver duration at each control loop cycle. The results are shown in Fig. 7(a). TracIK is the fastest (\(0.049\pm 0.035\)ms), which is expected since it is optimized for a specific problem formulation. We see that OpTaS (\(2.608\pm 0.239\)ms) is faster than EXOTica (\(3.694\pm 0.300\)ms). A second experiment, using the same setup as before, was performed comparing the performance of OpTaS against EXOTica with an additional cost term to maximize manipulability [34]. The results are shown in Fig. 7(b). Despite using the same formulation and solver, OpTaS (\(2.650\pm 0.270\)ms) achieved better performance than EXOTica (\(7.640\pm 1.404\)ms). Without extensive profiling it is difficult to precisely explain this difference. However, EXOTica requires the user to supply analytical gradients for sub-tasks (called _task maps_ in the EXOTica documentation). EXOTica does not provide the gradients for the manipulability task, and thus falls back to using the finite difference method to estimate the gradient - this can be slow to compute. ## VI Conclusions In this paper, we have proposed OpTaS: an optimization-based task specification Python library for TO and MPC. OpTaS allows a user to set up constrained nonlinear programs for custom problem formulations and has been shown to perform well against alternatives.
Parameterization enables programs to act as feedback controllers and motion planners, and to benchmark problem formulations and solvers. We hope OpTaS will be used by researchers, students, and industry to facilitate the development of control and motion planning algorithms. The code base is easily installed via pip and has been made open-source under the Apache 2 license: [https://github.com/cmower/optas](https://github.com/cmower/optas). Fig. 5: Comparison of end-effector task space trajectories computed using two different formulations. (a) Shows the start (left) and final (right) configurations for the robot under each approach. (b) Plots the end-effector position trajectory in two dimensions. Fig. 6: Figure-of-eight trajectory tracked by the Kuka LWR. Fig. 7: Solver duration comparisons for the figure-of-eight motion. (a) Compares the IK tracking approach described in Section V; (b) is a similar comparison that includes a maximization term for manipulability. Green is OpTaS, red is TracIK, and blue is EXOTica.
2309.13330
Predicting Temperature of Major Cities Using Machine Learning and Deep Learning
Currently, the issue that concerns the world leaders most is climate change for its effect on agriculture, environment and economies of daily life. So, to combat this, temperature prediction with strong accuracy is vital. So far, the most effective widely used measure for such forecasting is Numerical weather prediction (NWP) which is a mathematical model that needs broad data from different applications to make predictions. This expensive, time and labor consuming work can be minimized through making such predictions using Machine learning algorithms. Using the database made by University of Dayton which consists the change of temperature in major cities we used the Time Series Analysis method where we use LSTM for the purpose of turning existing data into a tool for future prediction. LSTM takes the long-term data as well as any short-term exceptions or anomalies that may have occurred and calculates trend, seasonality and the stationarity of a data. By using models such as ARIMA, SARIMA, Prophet with the concept of RNN and LSTM we can, filter out any abnormalities, preprocess the data compare it with previous trends and make a prediction of future trends. Also, seasonality and stationarity help us analyze the reoccurrence or repeat over one year variable and removes the constrain of time in which the data was dependent so see the general changes that are predicted. By doing so we managed to make prediction of the temperature of different cities during any time in future based on available data and built a method of accurate prediction. This document contains our methodology for being able to make such predictions.
Wasiou Jaharabi, MD Ibrahim Al Hossain, Rownak Tahmid, Md. Zuhayer Islam, T. M. Saad Rayhan
2023-09-23T10:23:00Z
http://arxiv.org/abs/2309.13330v1
# Predicting Temperature of Major Cities Using Machine Learning and Deep Learning ###### Abstract Currently, the issue that concerns the world leaders most is climate change for its effect on agriculture, environment and economies of daily life. So, to combat this, temperature prediction with strong accuracy is vital. So far, the most effective widely used measure for such forecasting is Numerical weather prediction (NWP) which is a mathematical model that needs broad data from different applications to make predictions. This expensive, time and labor consuming work can be minimized through making such predictions using Machine learning algorithms. Using the database made by University of Dayton which consists the change of temperature in major cities we used the Time Series Analysis method where we use LSTM for the purpose of turning existing data into a tool for future prediction. LSTM takes the long-term data as well as any short-term exceptions or anomalies that may have occurred and calculates trend, seasonality and the stationarity of a data. By using models such as ARIMA, SARIMA, Prophet with the concept of RNN and LSTM we can, filter out any abnormalities, preprocess the data compare it with previous trends and make a prediction of future trends. Also, seasonality and stationarity help us analyze the recourrence or repeat over one year variable and removes the constrain of time in which the data was dependent so see the general changes that are predicted. By doing so we managed to make prediction of the temperature of different cities during any time in future based on available data and built a method of accurate prediction. This document contains our methodology for being able to make such predictions. Predicting temperature, Time Series Analysis, Recurrent Neural Networks, Long Short Term Memory Networks. ## I Introduction This research was done with an aim to predict future temperatures using machine learning and deep learning. Rising temperature has been one of the largest problems in recent times so the research we have done below was done with the aim of developing a highly accurate temperature prediction method which will enable us to predict future temperature of major cities of the world which can be used to study the rise of temperature better. Moreover, forecasting temperature is dependent on a lot of data. These datasets consist data based on certain regions and the trend of the change of temperature of those regions. It will provide data based on the geography of the region, regional activities, regional timeline etc. Such large database is difficult to collect and can only predict up to a near future. But, in order to build a system capable of not just short-term but also long-term predictions based on the previous temperature data available we need the help of machine learning and deep learning approach. These work with time series. Sometimes we can see that in the prediction there are some errors in the graph. Sometimes the errors are too high. So, what is really done in this case is some error calculation is used to get the exact forecast temperature. Errors can be corrected by viewing them as amplitude errors. It is a very nonlinear optimization problem. In order to do so we used machine learning and deep learning to develop an accurate prediction method using a database of monthly temperatures of major cities. The temperature graph is a non linear graph. 
In most cases, ML is used but in time series analysis Artificial Neural Networks(ANN) and Support Vector Machines (SVM) is used. MultiLayer Perceptron Neural Networks (MLPNN) and Radial Basis Function Neural Networks (RBFNN) developed the temperature prediction. These are the parts of ANN. Levenberg-Marquardt and Gradient Descent Is mostly used to optimize the algorithms. Deep learning is used to make predictions accurately. To forecast hourly air temperature people use Long Short Term Memory (LSTM) Recurrent Neural Networks (RNN). By running time series analysis using LSTM and other methods we constructed our prediction models. ### _Motivation_ Looking at the impact of the climate change and the future consequences we will have to face regarding this matter we were motivated based on how we can participate in research done on climate change and it's effect on temperature and make future predictions on which works can be done to tackle climate change. The threat of climate change upon mankind is now greater than ever. One of if not the most devastating consequences of climate change is the constant rise of global temperature. This trend of rise of temperature has been present and constant since the beginning of industrial revolution up to the recent decades. It has been attributed to many natural calamities such as rising sea level, drought, change in weather, harvest failure, extension of many species and so on. In order to take counteractive measurements against this problem regarding rising temperature it is important we obtain a better understanding of this situation. One of the better ways of doing so is to study through the available data we have on temperature and work through them. Also, to understand the severity of our current situation and to paint a picture of the future based on current trends we should find a viable way of predicting temperature. Such predictions will paint a clear picture of what other problems we will face in the future and the time we have left to make significant changes. It will also give us information that will come handy when measuring the effectiveness and viability of the actions taken against rising temperature. ### _Research problem_ Working with data on temperature has some challenges to face. Anything that is related to the environment is highly unpredictable. Many natural causes or otherwise can result in drastic ups and downs in data. Moreover, it is also difficult to ensure the accuracy of the data. Different methods of data collections can lead to different results sometimes so it needs to be ensured the accuracy is maintained. But using highly accurate dataset sometimes limits the data as the modern approaches of data collection only can give data of very recent times. Which means, going further back in time the data are most of the time collected through orthodox methods where the accuracy can be questioned. Also another challenge we have to face is in case of temperature the changes are often slow and it takes a while before any significant change becomes visible. The problem it creates is if the change between two base points is too far negligible and the graph becomes somewhat constant it is difficult to imply time series analysis and get results. Because, in time series the values need to be stationary for this method to work. Because in time series if something has a particular behavior pattern it follows over time it will assume that it will happen the same in the future and mitigate other possibilities. 
So, to ensure our data is stationary we need to take our bases from the data in a way that they maintain a constant mean, a constant variance and autocovariance that does not depend on time. At the same time the data should also maintain an accurate average to preserve overall accuracy. Finally, we need to ensure that the trend and seasonality of the data is well preserved. All over the world. There are cities with short summers and long winters and vice versa. So, the trend and seasonality should come differently based on latitude, longitude and the location on the axis etc. Which means we can't predict all the locations the same way and will need to take other significant variables into measure as well. ### _Research objective_ The objective of this research is to create a model for predicting temperature changes which will help us to obtain better understanding in regards to climate change. Rise of temperature has been a matter of great concern all over the world. It has been identified as the root cause of many natural disasters and other problems. Major cities all over the world are concerned about bringing significant changes and controlling this trend of temperature rise. But without an insight into the future it's difficult to assess our current condition and set any benchmark. Our goal is to assist in making those benchmarks easier with predictions. Also with vast information that can be gathered from our model such as trends in changing or the seasonality of temperature rise and fall will give environmental researchers a lot of insight on the matter and drastically boost their progress. ## II Background ### _Time Series_ For this project the most prominent algorithm is going to be Time Series Analysis. In this model a set of data or observations are taken at a specified time in our case which will be the monthly temperature of major cities. The purpose of time series analysis is to extract data from a specified time from the past or a prediction from the future. Which means, with the help of time series analysis it is possible to forecast the future situation, explain past behavior, evaluate the current situation or progress and to plan for the future compiling the results and prediction which in our case will provide the major cities with their temperature changes, paint a picture of the future condition based on the ongoing trend and to sustain a plan on controlling the temperature rise. Time series analysis works with four major components which are trend, seasonality, irregularity and cycle. Trend basically is the common trend observed in a data. In our case, which can be the common trend of temperature growth in certain cities due to global warming, industrialization, pollution etc. Seasonality is how the data changes based on a certain time frame in a visible basis such as during December certain cities can have cooler climate and a much hotter climate in April due to seasonal changes. Irregularities are unexpected changes in the data or graph which can be drastic changes in temperature in certain months due to natural disasters. And lastly the cycle is the general cycle the graph follows [6]. What makes this model so viable for forecast and prediction is that it can work with a single variable. In time series analysis we can work our predictions based on a single "time" variable. It's simpler because in this case the irregularity or any missing data of any other variable or variables don't hamper the equations. 
To make it clear, in a simple regression model the equation may look like \(y=mx+c\) where the value of "y" is always dependent on the value of "x". But in time series we can simply work with "y". If we look at a simple time series equation there, we can put the equation for the value of "y" as in \(y_{t}=y_{t-1},y_{t-2},y_{t-3},\ldots,t\) where the variable y in time t can be traced back using variable y in t-1, t-2 to the furthest variable available with some error adjustment. This means any missing data or any irregularities in data can be adjusted with the help of previous data. The given equation falls under the AR model which is one of the simplest machine learning models. Like this other existing model in time series analysis exists such as AR, MA, ARIMA, Prophet, Nuroprophet etc. ### _Seasonality_ Time series characteristic in which data changes on a consistent and predictable basis throughout the year, is called seasonality. Any recurrent or repeated change or trend over the course of a year is said to be seasonal. A seasonal pattern occurs if a time series is impacted by cyclic factors, for example the time throughout the year or weekday. These are the pulsating forces at work, working in a consistent and predictable manner over the course of a year. They follow the same or roughly the same pattern throughout the period of a year. This deviation will be obvious in the time series if the data is collected hourly, daily, weekly, quarterly, or monthly. Seasonality has a set and predictable periodicity [1]. ### _Trend_ Data pattern that shows how a series progresses over time to comparably greater or lower values, is called trend. To be specific, as the time series' slope grows or falls, a trend is detected. A trend generally lasts for a short time before evaporating; it isn't the same every time. A trend is what happens when data reveals a long-term increase or decrease. It's not need to be in order. The term "changing direction" refers to when a trend flips from rising to dropping. The trend projects how likely the data is to increase or decrease with time. A trend is a long-term, averaged, smooth pattern. The increase or fall does not need to be in the similar path throughout a set length of time. The propensity may appear to increase, decrease, or remain constant over time. The overall trend, however, must be positive, negative, or stable [2]. ### _Stationarity_ Stationarity refers to an attribute of time series which can be observed independent of time. In the case of motionless time series in general, predictable patterns are not observed over time.The series will look horizontal (with some cyclic behavior) on time graphs, with constant variance. If the time series is not stationary, one of the following procedures can be used to make it stationary. Given the series, a new series can be created by differentiating the data[5]. One point less from the actual data will be held on the differenced data. One difference is enough although the difference can be done more times. Also, we can arrange in some curve to the data and then model from that arrangement in case a trend is present. A simple fit, for instance a straight line, is used most of the time since the goal is to lessen the trend. With the help of logarithm or square root of the series we can assist in stabilizing non-constant variance. We can imply a proper constant to make all the data positive before applying the transformation on negative data. 
Erasing this constant from the model can get us expected values and projections for future points. ### _Recurrent Neural Networks (RNN)_ When it comes to thinking or decision making, we don't think from scratch. For instance, during basic communication we don't trace back to the first word every time to figure out the next suitable word to construct a proper sentence, but go on based on the previous word we uttered. This persistency is much needed for effective predictions. In order to achieve that we need recurrent neural networks, or RNNs. An RNN is a modified version of a traditional neural network where the persistency is maintained because it has a loop. In this pattern it can recur over previous events constantly and work based on them [3]. To illustrate this with the help of figure 1, we can see that an RNN "A", which gets \(x_t\) data as input in order to return \(h_t\) as output, creates multiple copies of itself. Each time a new input is generated it is passed down to its successor, creating a loop. This chain-like nature reveals that recurrent neural networks are intimately related to sequences and lists. They're the natural architecture of a neural network to use for such data. Fig. 1: Dataflow in RNN. ## III Literature review Paper [16] works with 4 types of regressors, which are the linear regressor, isotonic regressor, support vector regressor and polynomial regressor. All these regressors are used on the monthly average temperature of Bangladesh along with rain data collected during the 1901-2015 time period. In preprocessing, the dataset has been split into 3 parts depending on the 3 seasons of Bangladesh: summer, rainy and cold. The 4 regressors are then put to work on the 3 separate data models. The result on the training dataset showed that the isotonic regressor performed better than the other regressors. Three types of errors, which are Mean Squared Error, Mean Absolute Error and Median Absolute Error, along with an R2 score, which represented the extent of fluctuation that excluded the autonomous factors in the model, helped verify the results on the training sets. But, when attempting to run the isotonic regressor on the testing datasets, the isotonic regressor failed by giving a constant value for the future temperature of Bangladesh for 2019 to 2040. The SVR and polynomial regressor of degree 3 performed quite well, as the findings were shown in the paper. Ashfaq et al. concluded the paper by showing that the SVR and polynomial regressor of degree 3 can be considered the best for predicting the future average temperature values. This study [9] was done by a few members from the research group "Kilimanjaro ecosystem under global change: Linking biodiversity, biotic interactions and biogeochemical ecosystem process" with the aim to gather high resolution climate information that is essential for various applications. 14 various types of machine learning algorithms were used here to forecast the monthly air temperature across the Kilimanjaro region. As a result, more accurate results were gathered than what the orthodox kriging approach can gather. Several linear and non-linear models were used, the linear models being GLM, GAM, PCR, PLS, svmLinear and the non-linear being avNNet, KNN, NNET, svmRadial alongside the cubist, ctree, gbm and rf regression trees. The linear models generally tried to minimize the sum of the squared errors with a focus on either bias or variance.
Non-linear models provide predictions based on the amplitude of various models used to quantify the distance between the predictor variables and the model's closest known group. The regression trees divided the training dataset into categories based on response values that were comparable. The prediction model is chosen based on rules that are appropriate for the predictors of the variables. The authors of the paper[7], invested their attention to a model based on EMD(Empirical mode decomposition) and LS-SVM(Least Squares Support Vector Machine). They used a dataset containing the monthly average temperature during 1951-2003. To start, EMD was applied to decompose the time series into a series of various scales of Intrinsic mode function. For these IMFs, the appropriate kernel function and model parameters are used to construct LS-SVM to predict. Based on the input and output objects here they have used linear kernel function and the RBF. The authors also used EMD-LSSVM model to predict the temperature values. And then, the Root mean Square Error(RMSE) and Relative Error (RE) model are used to verify the prediction accuracy. Their research result conveys that among the three models used the EMD-LSSVM models perform better than the separate LS-SVM and the RBF. Here, EMD-LSSVM model predicts with high accuracy and smaller volatility. According to the paper, The reason behind this is that a non-stationary time series can be made a series of stable single components with certain regularity. The authors in the paper [8], used time series analysis to forecast weather temperature. The paper successfully showed that it's possible to predict the evolution of temperature by means of the ARIMA (Auto-Regressive Integrated Moving Average) models. The research is based on collected data of the past 150 years where the reference period 1850-1899 was more pronounced for Europe and Belgium(MIRA). They used the 10 year moving average in the analysis. They stated that regression analysis would be an inappropriate approach to model the trend of a time series, since it assumes time as an independent variable, whereas, time series are characterized by the dependence of their data. Hence, to analyze dependent data they proposed Arima models. Moreover, they pointed out that ARIMA models have a weak point that the models require the time series to be stationary before starting the analysis. Therefore, to identify the appropriate ARIMA model for a time series, it's needed to remove the major seasonality in order to obtain a stationary series. To compare models they used the AIC (Akaike information criterion) as a measure of goodness of fit. Time Series Modeling v4.30 software was used as the main tool for computations. To find whether the series is stationary or not they used the Box-Pierce test. The focus of this paper [21], is to ensure better understanding of climate change based on different anthropogenic emission scenarios. They have focused on finding how short term emissions as such have a long term effect on climate change and approached machine learning to find this information. Their work is highly data driven and they have proposed building a surrogate climate model using sets of GCM simulations performed in recent years Hadley Centre Global Environment model 3. Taking in account the different seneicos the initial sudden response shown in the first few years are considered to be short term and then when the global mean temperature reaches a steady state are considered to be long term in their research. 
The task consists of taking short term response as "x" and long-term response as "y" and learning the function of x "f(x)". The mapping to be constructed using Ridge regression, Gaussian Process regression with a linear kernel. Both Ridge and the GPR increases the accuracy by a fair margin but the error is lessened using GPR. In this paper [15], various weather figure methods were considered and the results of applying different types of machine learning and ANN algorithms on weather forecasting were also compared. It also explains how meteorologists blend a few techniques like synoptic forecasting, persistence forecasting, computer forecasting and statistical forecasting to forecast weather. Outputs of different types of models like RBF-HPSOGA, RBF-GA, Gaussian SVM, Wavelet SVM and RBF-NN etc. were analyzed. Moreover, authors investigated various information digging approaches for forecasting climate. Finally, RBF NN utilizing Hybrid PSGOA and Wavelet based SVM, gave the maximum execution productivity. This paper [19] mainly talks about solving two problems and comes to a solution. The first problem is to synchronize the problem. In this paper they made a synchronization process to forecast the temperature. The second problem focuses on spatial-equilibration between sites that looks at the relative correlations of primary and proxy variables. But both of their jobs are to forecast the temperature. The forecasted time is not limited. This research is forecasting the future condition of temperature. They used "Oscillation Discovery and Prediction" which was important to forecast the longtime temperature and this is the main thing of this paper. This paper [18] talks about very useful methods. It also talks about regional temperature forecasting and long-term global temperature forecasting. In long-term global temperature forecasting they are shown how they can predict the gt by taking SI, SOD, CO2, Sulfate, ENSO as input. The diagram is given below. RNN method is very useful to predict temperature. And there are also some error calculations which are talked about in this paper. These methods are very useful and it will be very helpful. As there is also talking about those five errors it will make the prediction perfect. They have shown four tables to explain the inputs outputs and the configuration. They wanted to make a prediction which would become more accurate and they have succeeded. They also talked about the Stacked Denoising Auto-Encoders which gives 97.94% accuracy. On the other hand ANN gives 94.92% accuracy. In this study report[14], Researchers suggested a strategy of machine learning in order to forecast the concentration of PM2.5 in a city of Ecuador, Quito. In order to do that, they used the meteorological data of this highly elevated city. To split the data into multiple groups according to the concentrations of PM2.5, machine learning algorithms are utilized. In this specific classification problem, supervised learning approaches such as BTs and L-SVM are used to develop models. A regression is performed using BT, L-SVM, and NN. Using CGM to do regression outperforms other commonly used machine learning methods such as NN, LSVM, and BT. Also, using time series algorithms to discover trends over long periods of time should increase the accuracy of the prediction and allow forecasting of the concentrations of PM2.5 for a longer period of time. 
Moreover, Three fundamental meteorological elements which are direction of wind, wind speed and precipitation are the base of this model, all of which have a direct impact on pollution. In the paper [24], authors have done a survey on the latest studies of deep learning-based weather forecasting considering the aspects of the design of NN architectures. Here, the author focuses on deep learning techniques in weather forecasting by comparing the existing DLWP studies. Then they analyzed the pros and cons of DLWP by comparing it with the conventional NWP, and summarized the potential of DLWP. They have discussed three types of data and they're Multi-dimensional real-type data, Satellite image data, Long time sequence data. They have also mentioned two types of DNN models. One is basic DNN models such as Autoencoders CNN and LSTM and another one is typical hybrid DNN models. STConv52S, ConvLSTM, TrajGRU, PredRNN, MetNet models are used for weather state prediction and Hybrid CNN-LSTM, multi-channel convolutional encoder-decoder models are used for extreme weather detection. This work mainly is on extreme event detection on planetary-scale data and again they have inspected the outputs from the climate model's results to explain the climate changes by the year 2100. In this paper [12], authors aimed to outperform the traditional methods of weather forecasting by using robust machine learning techniques. Here, they predicted the highest and lowest temperature for seven days using weather data of the past two days. They used a variation of functional regression model and linear regression model. Among these two, the first one was capable of capturing the trends in the weather. Though both models got outperformed by the professional weather forecasting services, the discrepancy between the professional forecasting and their models reduced quickly in the prediction of the later days. They also think that their models might outperform the professional ones in predicting temperature for longer time periods. Moreover, they found out that the linear regression model outperformed the functional regression model. Nevertheless, they think that the latter would have performed better if they had based their forecasts on the weather data of four or five days. In this paper [17], the authors attempted to develop an efficient low cost weather forecasting system using machine learning which would work in remote areas. They used data analytics and machine learning algorithms like random forest classification to predict the condition of the weather. The main target was to predict whether it would rain or not on a particular day depending on the factors like humidity, temperature and pressure. They found out that the most important factor while predicting rain is humidity followed by temperature and pressure. They have used these machine learning algorithms in python on a Raspberry Pi 3 B board. Other hardwares like the BMP180 pressure sensor and DHT11 humidity and temperature sensor were also a necessary part of this work. In this study [20], the authors suggested a weather prediction method that leverages historical information from several weather stations to develop basic machine learning models that can provide reasonable predictions for certain atmospheric patterns in the near future in a short amount of time. Furthermore, they argue that using data from many adjacent weather stations rather than data from only the region for which weather forecasting is being done is preferable. 
Because the expected outcomes are continuous quantitative values, such as temperature, they employed regression techniques. Because it ensembles several decision trees when making decisions, Random Forest Regression (RFR) is demonstrated to be the best regressor. Ridge Regression (Ridge), Support Vector Regression (SVR), Multi-layer Perceptron Regression (MLPR), and Extra-Tree Regression (ETR) are just a few of the regression techniques employed. ## IV Methodology and Implementation The intention of this research is to build a model which can predict the temperature of future years in an accurate manner. The dataset which is chosen for the research holds a vast data of previous patterns of rising temperature of many major cities in the globe. We have built our model based on machine learning and deep learning algorithms. Our model is fed the pre-processed data to train the model and is tested with testing data. Time series analysis is a major model that deals with this type of problem. Models like ARIMA, SARIMA and PROPHET have been used to build our model. To utilize the full potential of time series analysis we used LSTM and CNN. To build an accurate model we have planned our work in several steps. Those steps are- 1. **Data preprocessing:** We have ensured all the preprocessing our datasets need in this step. It can prove useful for the accuracy of our model. 2. **Splitting Dataset:** The dataset has been split into a Training set and a testing set in this step. 3. **Research and Develop:** We reviewed previous works which has been done on this specific topic and try to learn from them to build a more accurate and robust model. 4. **Build and Training proposed model:** At this level, we have built our proposed models and trained it with the aforementioned Training dataset. 5. **Testing and measuring the model:** Testing data was feed to the model to test its accuracy and also compare the accuracy with training accuracy 6. **Visualizing the result:** The predicted values has been be presented in this final step. ### _Dataset_ In order to get appropriate temperature predictions, a very well detailed dataset is a must need. Therefore, during our data collection we used the dataset of Average Daily Temperature Archive provided by University of Dayton [22]. This is a dataset which has the daily temperatures of most of the world's major cities which includes 167 international cities across the various regions of the globe as well as 157 U.S cities. The current dataset includes daily temperature from January 1, 1995 to May, 2020. The dataset contains a total of 8 columns. Among these, 4 columns which include Region, Country, State and city are used to define the location of the data collection and 3 columns which include Month, Day and Year signifies the date of collecting the temperatures. And the 8th or the final column contains the average temperature of a city on a particular date. In table I, a filtered version of the dataset is shown. ### _Data preprocessing_ In order to enhance the generalizability of our model we need to perform various types of data preprocessing processes. These operations would remove the redundant data as well as it would reorganize some of the columns. The processes are: 1. **Imputing the null values:** Here the rows which have value -99 in the average temperature column actually contain null values. Therefore, we would replace the null values meaning the row containing value -99, with the average temperature value from its previous row. 2. 
**Feature Engineering:** we would extract a date column from the Month, Day and Year columns. This would be much more effective than the previous format. Moreover, we would drop the previously existing Month, Day and Year columns. 3. **Dropping Unnecessary Columns:** In the selected dataset, all the countries except the U.S. do not have any element in the State column. Hence, we would drop the State column as well as the Region column, since this information is redundant in regards to our model. 4. **Data Splitting:** The entire data frame was split into 80% and 20% for the training set and testing set respectively. Again, 20% of the training set was used in the validation. Fig. 2: Flowchart for achieving research objectives. For the individual 3 models we have performed data pre-processing accordingly. The batch size we created from the dataframe has a size of 32. Furthermore, we have created mini batches which have a size of 5 for a smoother training process. ### _Architecture_ In this section, we have explained our proposed model that we have used to predict the future temperatures. We have selected 3 algorithms for our model, which are given below: #### Iii-C1 Lstm One drawback of an RNN is that although it can predict from recent events, it lacks the concept of context. For instance, during the prediction of temperature there are some exceptions that might occur, such as a natural disaster. These events go against the normal state and we need to take the context into account. LSTM takes into account the long term dependencies [10]. What modifies an RNN model into an LSTM model is that the repeating module here has four layers. Here in the given figure (3) we can see the full process of how an LSTM operates. To start, we have data from a past event being sent to a successor state. Here the forget gate is receiving the memory of a past event. In another part, a sigmoid layer called the "forget gate layer" looks at \(h_{t\,-\,1}\) and \(x_{t}\) and outputs a number between 0 and 1, where 1 is to keep the information and 0 is to get rid of the information [4]. \[f_{t}\;=\;\sigma\left(W_{f}\;\cdot\left(h_{t\,-\,1},\;x_{t}\right)+b_{f}\right) \tag{1}\] Afterwards, another sigmoid layer called the "input gate layer" decides which values should be updated, and a tanh layer creates a vector of new candidate values that can be added. \[i_{t}\;=\;\sigma\left(W_{i}\;\cdot\left(h_{t\,-\,1},\;x_{t}\right)+b_{i}\right) \tag{2}\] \[\tilde{C}_{t}\;=\;\tanh\left(W_{C}\;\cdot\left(h_{t\,-\,1},\;x_{t}\right)+b_{C}\right) \tag{3}\] The next step is to update the old cell state from \(C_{t\,-\,1}\) to \(C_{t}\). If we multiply the old state by \(f_{t}\) and add \(i_{t}\cdot\tilde{C}_{t}\), we get the new cell state. Each term is scaled by how much we want to forget the old state values and how much we want to add the new candidate values. In our case, this is where we drop information from past events and add new values. \[C_{t}\;=\;f_{t}\;\cdot\;C_{t\,-\,1}\;+\;i_{t}\cdot\tilde{C}_{t} \tag{4}\] Finally, the output we get is a filtered version of the cell state. It is obtained by first running a sigmoid layer to decide which parts of the cell state will be given as outputs. Then the cell state is pushed through tanh and is multiplied by the output of the sigmoid gate to filter out the output needed. For the weather forecast, this is where it decides whether overcast clouds may lead to rain or not.
\[o_{t}\;=\;\sigma\left(W_{o}\;\cdot\left(h_{t\,-\,1},x_{t}\right)+b_{o}\right) \tag{5}\] \[h_{t}=o_{t}\cdot\tanh\left(C_{t}\right) \tag{6}\] Our proposed CNN-LSTM model is shown in Figure 4. We have used the Keras and TensorFlow libraries to build our CNN-LSTM model. The details of this implementation are shown below. 1. **Convolutional layer:** In our model, we have used the Conv1D layer in Keras. One Conv1D layer is used for the input layer, which has a total of 192 parameters. 2. **LSTM layer:** We have decided to use a total of 2 LSTM layers for the model. These 2 layers have tanh as their activation function. A total of 24382 and 33024 parameters are used in these 2 LSTM layers respectively. 3. **Dense layer:** After the LSTM layers, we have selected 3 dense layers from Keras, which are also called the hidden layers. These dense layers have ReLU as their activation function and have a total of 2080, 1056 and ... Fig. 3: Dataflow in LSTM. #### 4.2.2 Prophet We have chosen Prophet as our second model to perform the prediction on the selected data frame. Prophet is a time series data forecasting process based on an additive model that fits non-linear trends with yearly, weekly, and daily seasonality, as well as holiday impacts. Prophet was released as open source software by Facebook's Core Data Science team. It works well with time series with strong seasonality and several seasons of historical data. Prophet decomposes data into three main model components: trend, seasonality and holidays, and they are combined in the given equation: \[y_{t}\;=\;g\left(t\right)\;+\;s\left(t\right)\;+\;h\left(t\right)\;+\;e_{t} \tag{7}\] Here, g(t) describes a piecewise-linear trend, s(t) describes periodic changes like various seasonal patterns, h(t) represents holiday effects which take place on irregular schedules over a day or a period of days, and \(e_{t}\) is the error term which represents any individual changes which are not explained by the model [23]. #### 4.2.3 Arima We know that in time series analysis the data has to be stationary, which means it has to have a constant mean over time. But sometimes this requirement is not fulfilled. In that case the AR or MA model becomes obsolete and we have to resort to the ARIMA model. ARIMA means Auto Regressive Integrated Moving Average, where instead of predicting the time series itself the prediction is done based on one stamp of the series from its previous time stamp. So, for a series \(y_{t}\) we will take a portion of it, \(z_{t}\), which we can simply define as \(z_{t}=a_{t\;-\;1}\;-\;a_{t}\), which is the data of the current month subtracted from that of the previous month. Therefore, even if the graph itself is not stationary we can still divide it into small stationary parts. For our research this assists us greatly when it comes to the unpredictable changes in the value of temperature. The ARIMA model consists of three parameters p, d, q, where p is the parameter of the AR part, d of the integration part and q of the MA part. So, for a simple \(ARIMA_{(1,1,1)}\) model the equation is supposed to look like: \[Z_{t}\;=\;\phi_{1}\,Z_{t\;-\;1}\;+\;\theta_{1}\,e_{t\;-\;1}\;+e_{t} \tag{8}\] Now to extract values for the main function \(y_{t}\): \[y_{t}=z_{t-1}+a_{t-1}=\,z_{t-1}+z_{t-2}+a_{t-2}=\cdots=\sum_{i\,=\,1}^{t\,-1}z_{t-i}+\,a_{l} \tag{9}\] Here we have the last data value \(a_{l}\), and so that is our end value of the data [13]. Figure 4: Proposed LSTM model. Figure 5: LSTM model summary. #### Iii-C4 Sarima The SARIMA model is referred to as the Seasonal ARIMA model.
The ARIMA model faces a challenge when it comes to seasonality. And seasonality is something highly visible in temperature forecasting as rise and fall in temperature follows a seasonal pattern. And seasonality is not a stationary data which we need in time series analysis. So, the SARIMA model modifies the existing ARIMA model adding seasonal components. We already know in ARIMA model the parameters are p, d, q where p is the parameter of the AR part d for the integration part and q for the MA part and the model looks like \(ARIMA_{(p,q,r)}\). To take seasonality into counts the model is modified to \(ARIMA_{p,q,r}P,Q,Rs\) where the capital letters stand for the seasonal parameters of AR terms, differences and MA terms respectively while the value of "s" shows the length of the season. So, in order to forecast wt we should write \(wt=yt-yt-l\) where "l" is the length of the dataset. Since SARIMA incorporates both seasonal and non seasonal factors, it can be written as, \[ARIMA(p,d,q)*(P,D,Q)S \tag{10}\] Here, p = non-seasonal AR order, d = non-seasonal differencing, q = non-seasonal MA order, P = seasonal AR order, D = seasonal differencing, Q = seasonal MA order, and S = time span of repeating seasonal pattern. We have Used a Sarima model on the temperature data of Rio de Janeiro. The model is implemented in the traditional way. As a regression model, we have performed necessary Data preprocessing to make the training process as smooth as possible. To perform the Sarima model, we had to implement the Arima model on the dataframe as it is shown in the discussion above that the Sarima is actually the Arima with an addition of Seasonal factors[11]. ### _Implementation_ This section explains the implementation of the proposed model for predicting temperature of the major cities. Python was used for implementation and testing of the proposed model as Python is being used in the majority of machine learning and deep learning models. Tensorflow is used as it is the most popular library for Neural network models. Libraries like Matplotlib, numpy, pandas, fbprophet, seaborn along with sklearn are used to make the model as versatile as possible. #### Iii-D1 LSTM Implementation To start with, Our proposed model consists of three stages; Data preparation, Training the model and testing for each algorithm we have used. In the data preparation phase, we have selected Rio De Janeiro randomly from our dataset and tested our model corresponding to the city's temperature. All the data of Rio de Janeiro were separated into a data frame. Then, All the columns are dropped except the Date and AvgTemperature and the false values are removed. In the following figure(6), the data frame is shown. Here, The total data frame for Rio de Janeiro is plotted on the graph. From 1996 to 2020, Temperature of each day is shown in this graph after dropping all the null and error values. Again, we have split the dataframe into train and test set. 85 percent of data is split into train set and the rest of 15 percent into test set without shuffling. Then, MinMaxScaler is used to scale the train and test sets. As the most difficult part to separate the data in branches, we have used a keras API to make the branches. After generating the Time series of both the train and test set, we have converted the series into tensor slices. Then, these slices were made into tensor flow windows of size 5. Using the map option we have split the variables into X and Y variables. 
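A minimal sketch of this windowing pipeline with tf.data is given below (it also includes the batching step described next); the scaled series is assumed to be a 1-D NumPy array of average temperatures, and the shuffle buffer size is an illustrative assumption.

```python
import numpy as np
import tensorflow as tf

def windowed_dataset(series, window_size=5, batch_size=32, shuffle_buffer=1000):
    """Turn a scaled 1-D temperature series into (X, y) mini batches for training."""
    ds = tf.data.Dataset.from_tensor_slices(series)                 # tensor slices
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)   # windows of size 5 plus one target
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))                          # split into X (inputs) and y (target)
    return ds.batch(batch_size).prefetch(1)                         # mini batches of 32 for training

# Example usage with a MinMaxScaler-scaled training series (placeholder data).
train_series = np.random.rand(1000).astype("float32")
train_ds = windowed_dataset(train_series)
```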
We first trained our LSTM model for 100 epochs in order to find a suitable learning rate. By using the LearningRateScheduler callback we tracked the loss at each learning rate, and we selected the learning rate at which the loss curve is steepest, as shown in figure (7). The model was then trained for 500 epochs, after which we performed predictions on the test set.

Fig. 7: Learning Rate of LSTM layer

#### Prophet Implementation

For Prophet, we have prepared our data in the same manner as we had done for the LSTM model, and the process was quite straightforward. The data have been split into the train and test sets. First, we fit the model on the train set and predicted over the test set; we also tried to predict the past values. Then, we compared the actual and predicted values, as seen in figure (8). In figure (8), the black points are the actual values and the blue lines are the predicted values with the trend produced by the Prophet model. The temperature is on the y axis, plotted against the years on the x axis. In figure (9), the trends of the data are plotted using the plot_components method from the fbprophet library. The weekly and yearly trends are plotted on the y axis against the corresponding dates on the x axis.

Fig. 8: The past actual values and the predicted values compared

Fig. 9: Trends of the data in daily, weekly and yearly manner

#### ARIMA & SARIMA Implementation

Lastly, we have run the SARIMA model. At the start we imported the necessary libraries, which are numpy, matplotlib, pandas, scikit-learn and statsmodels. After importing the data and performing the necessary preprocessing, we used the seasonal decomposition method from statsmodels to plot the observed, trend, seasonal and residual components of the data frame, which are shown in figure (10). The adfuller method from statsmodels is used to check whether our data is stationary. As our data turned out to be quite stationary, with a p-value lower than 0.05, we computed the average temperature of every month of the selected timeframe, seen in figure (11).

Fig. 10: Trend and seasonality

Fig. 11: Plot of average temperature of months

Now, we have split the data into training and test sets, where 80 percent of the data goes into the training set and the remaining 20 percent into the test set. Then, we plotted the autocorrelation (figure 12) and partial autocorrelation (figure 13) of the data frame. The autocorrelation and partial autocorrelation are the factors used to find the p, d and q values for implementing the ARIMA model, and we used both functions to check stationarity as well. We have also implemented a custom way to find the correct values for p, d and q, which is a for loop that trains the model with all the different combinations of p, d and q ranging from 0 to 8. After finding the best combination of p, d and q, which is (7,0,2), by checking the lowest RMSE value given by our method, we trained our ARIMA model with these parameters and predicted using the test data.

Fig. 12: Autocorrelation performance

Fig. 13: Partial Autocorrelation performance

As we have determined the parameters of the AR, I and MA parts according to the behavior of the ACF and PACF plots in figures 12 and 13 and the custom method we used, we trained our SARIMA model with an S value of 12. In the end we plotted the predicted results from the SARIMA model.
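The order-selection loop and the final seasonal fit described above can be sketched as follows. The search ranges, train/test split and RMSE criterion follow the text; the seasonal order apart from S = 12 is an assumption, and the code is illustrative rather than the authors' own.

```python
import itertools
import numpy as np
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX

def grid_search_arima(train, test, max_order=8):
    """Try every (p, d, q) combination and keep the one with the lowest RMSE."""
    best_order, best_rmse = None, float("inf")
    for p, d, q in itertools.product(range(max_order), repeat=3):
        try:
            fit = ARIMA(train, order=(p, d, q)).fit()
            pred = fit.forecast(steps=len(test))
            rmse = np.sqrt(mean_squared_error(test, pred))
            if rmse < best_rmse:
                best_order, best_rmse = (p, d, q), rmse
        except Exception:
            continue  # some orders fail to converge; skip them
    return best_order, best_rmse

def fit_sarima(train, order, seasonal_order=(1, 1, 1, 12)):
    """Seasonal model on top of the selected non-seasonal order, with S = 12 (assumed P, D, Q)."""
    return SARIMAX(train, order=order, seasonal_order=seasonal_order).fit(disp=False)
```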
## V Results

### _Predictions according to LSTM_

Having trained our LSTM model for 500 epochs using our batched data, we predicted the temperature of Rio de Janeiro for the test data. The predicted data from 2017 to 2020 can be seen in figure (14). It can be seen from the graph in figure 14 that the predicted values are quite close to the actual values. The orange lines represent the predicted values, while the blue ones represent the actual values for the years 2017 to 2020. After that, the training loss is plotted against the epochs on the x axis (figure 15). Since it can be hard to see how the training loss decreased over the epochs, a zoomed view of the training loss is shown in figure 16. In the zoomed view it is shown that the training loss decreased from 1.58 to 1.48 as the epochs increased from 200 to 500.

Fig. 15: Training loss compared to Epochs for LSTM

Fig. 16: Zoomed training loss for LSTM

Predicted values from the LSTM model are listed in table (III) along with a comparison against the actual values for the last set of dates from the test set. If we examine both the actual and predicted values, it is quite evident that the predictions are close to the actual values. For example, on the date 2020-05-09 the predicted and actual values are almost the same, and a similar case occurs for the next day in the table.

### _Predictions using Prophet_

For the predictions, a future dataframe of 730 days is first created using the make_future_dataframe method from the fbprophet library. A new set of predictions for dates from June 2020 to June 2022 can be seen as the blue lines in figure (17). The black dots represent the actual values of the data frame. A comparison between the actual and predicted values is shown in table (IV). Here, it can be seen that the temperature prediction is somewhat odd, as the predicted temperatures stay nearly the same over a relatively short period of time.

Fig. 17: Prediction for 2020 to 2022 of Rio de Janeiro

### _Predictions from ARIMA_

Before running SARIMA, we trained and predicted the data with the ARIMA model. As our data was stationary, ARIMA performed quite well predicting the temperature for the test set, as shown in figure (18). We have also shown the predicted values from month 05 to month 09 of 2015 (table V).

Fig. 18: Prediction of ARIMA model

### _Predictions of SARIMA_

Finally, we ran the SARIMA model, which has given us better results than the two models above: it can be seen that the SARIMA parameters are well fitted and the predicted values follow the actual values (table VI and figure 19) as well as the seasonal pattern (figure 20).

Fig. 19: Prediction from SARIMA

Fig. 20: Prediction from SARIMA

Fig. 22: Zoomed prediction from SARIMA

Now, as we analyze the error values for each case in table (VII), it is quite clear that SARIMA is performing better than the other three algorithms we have used here. The Prophet model performed the poorest, while the LSTM model performed quite well but fell short of SARIMA and ARIMA.
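The error values compared in table (VII) can be obtained as in the brief sketch below (array names are hypothetical; scikit-learn provides MAE and MSE directly, and RMSE is the square root of MSE).

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

def report_errors(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE, MSE and RMSE between actual and predicted temperatures."""
    mae = mean_absolute_error(y_true, y_pred)
    mse = mean_squared_error(y_true, y_pred)
    return {"MAE": mae, "MSE": mse, "RMSE": float(np.sqrt(mse))}
```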
As we have observed that SARIMA performs better on the Rio de Janeiro time frame, we decided to apply SARIMA to a different city's data. We chose Delhi, which is one of the most polluted cities in the world, and tested our model on the temperature of Delhi. Here, unlike the temperature of Rio de Janeiro, the data of Delhi was not stationary, as we found when we decomposed the data and tested it with the adfuller method from the statsmodels library. We therefore had to change the hyperparameters of ARIMA and SARIMA according to the ACF and PACF of the temperature. We chose the best-fit values for p, d and q, which are 5, 0 and 5, and trained the model. Predictions of the SARIMA model on Delhi are shown in figure (23). The predictions, which are the green line, follow the orange line, which represents the test set values. The predicted values in table (VIII) also show the comparison with the test values. The MAE, MSE and RMSE values of the SARIMA model on the Delhi dataframe are shown in table (IX). Lastly, we have created a future data frame from the 6th month of 2022 to the 7th month of 2023. We have tried to predict this future data frame with the SARIMA model, which is shown in table (X).

Fig. 23: Prediction according to SARIMA on Delhi

## VI Conclusion

Changes in temperature have caused great concern in recent decades. The orthodox methods of calculating and predicting this increase of temperature around the world are becoming more and more obsolete. So, in order to get better insight and more accurate calculations, modern approaches such as machine learning algorithms are necessary for accuracy and efficiency. As we applied time series analysis using LSTM along with models such as Prophet, ARIMA and SARIMA and compared them, we came to the conclusion that the best performing model for the prediction has been SARIMA. Applying the models to Rio de Janeiro, we found that the error values of Prophet and ARIMA are greater than the error obtained when SARIMA is applied. Afterwards the model was used on a different city, Delhi, this time chosen because it has a higher pollution level. This time, though, when using the adfuller method from the statsmodels library, the data came out not to be stationary. So it not only helped us to predict the temperature of Delhi, it also portrayed the imbalance that pollution creates in temperature.

### _Future work_

Although we have just portrayed this way of predicting temperature, we do have some future aspirations. Recently we have found a global GHG inventory from 1990-2017 on International Greenhouse Gas Emissions, which will let us compare how much pollution affects the trend of the temperature. Furthermore, we have found a dataset on carbon dioxide emission rates. We will try to merge these two datasets, or at least one of them, with the dataset we used in our paper; this will need some further work. Nevertheless, we are looking forward to developing our own model and exploring other methods of ensemble learning to get more accurate results.
2309.15999
Organic Electronics in Biosensing: A Promising Frontier for Medical and Environmental Applications
The promising field of organic electronics has ushered in a new era of biosensing technology, offering a promising frontier for applications in both medical diagnostics and environmental monitoring. This review paper provides a comprehensive overview of the remarkable progress and potential of organic electronics in biosensing applications. It explores the multifaceted aspects of organic materials and devices, highlighting their unique advantages, such as flexibility, biocompatibility, and low-cost fabrication. The paper delves into the diverse range of biosensors enabled by organic electronics, including electrochemical, optical, piezoelectric, and thermo sensors, showcasing their versatility in detecting biomolecules, pathogens, and environmental pollutants. Furthermore, integrating organic biosensors into wearable devices and the Internet of Things (IoT) ecosystem is discussed, offering real-time, remote, and personalized monitoring solutions. The review also addresses the current challenges and prospects of organic biosensing, emphasizing the potential for breakthroughs in personalized medicine, environmental sustainability, and the advancement of human health and well-being.
Jyoti Bala Kaushal, Pratima Raut, Sanjay Kumar
2023-09-27T20:31:08Z
http://arxiv.org/abs/2309.15999v2
# Organic Electronics in Biosensing: A Promising Frontier for Medical and Environmental Applications ###### Abstract The promising field of organic electronics has ushered in a new era of biosensing technology, offering a promising frontier for applications in both medical diagnostics and environmental monitoring. This review paper provides a comprehensive overview of organic electronics' remarkable progress and potential in biosensing applications. It explores the multifaceted aspects of organic materials and devices, highlighting their unique advantages, such as flexibility, biocompatibility, and low-cost fabrication. The paper delves into the diverse range of biosensors enabled by organic electronics, including electrochemical, optical, piezoelectric, and thermo sensors, showcasing their versatility in detecting biomolecules, pathogens, and environmental pollutants. Furthermore, integrating organic biosensors into wearable devices and the Internet of Things (IoT) ecosystem is discussed, offering real-time, remote, and personalized monitoring solutions. The review also addresses the current challenges and future prospects of organic biosensing, emphasizing the potential for breakthroughs in personalized medicine, environmental sustainability, and the advancement of human health and well-being. ## 1 Introduction Organic electronics have emerged as a promising frontier in the field of biosensing, offering innovative and versatile solutions for medical and environmental applications. With the rapid advancement of organic materials and devices, integrating organic electronics into biosensing platforms has unlocked many possibilities for sensitive, real-time, and label-free biological and chemical analytes detection. This convergence of organic electronics and biosensing can revolutionize medical diagnostics, point-of-care testing, wearable health monitoring, and environmental monitoring, among other critical domains. Based on carbon-based compounds and polymers, organic electronic devices present distinct advantages that make them well-suited for biosensing applications. These materials offer biocompatibility, enabling direct interactions with biological systems without causing adverse reactions, making them ideal for implantable biosensors and in-vivo monitoring. Additionally, organic materials exhibit exceptional flexibility, enabling the development of conformable and wearable biosensing devices that can seamlessly adapt to the human body or environmental surfaces, expanding their utility in personalized healthcare and environmental monitoring. The unique electronic properties of organic materials, such as tunability, conductivity, and semiconducting behavior, contribute to their exceptional sensing capabilities. Organic electronic devices, such as organic field-effect transistors (OFETs), organic electrochemical transistors (OECTs), and organic photodetectors (OPDs), have demonstrated high sensitivity, selectivity, and rapid response times, allowing for the accurate detection of target analytes in complex samples. In this context, this review explores the exciting frontier of organic electronics in biosensing, focusing on its applications in medical diagnostics and environmental monitoring. We delve into an overview of organic bioelectronic materials, the various organic electronic devices, and their fabrication methods, detailing their sensing mechanisms and advantages over traditional sensing technologies. 
Additionally, we discuss the challenges faced in integrating organic electronics in biosensing platforms, such as biocompatibility, stability, manufacturing scalability, and data security and privacy, and the innovative strategies employed to address these obstacles. ## 2 Organic Bioelectronic Materials ### Conducting Polymers Conducting polymers (CPs) are a class of organic materials that exhibit electrical conductivity while maintaining the desirable mechanical properties of polymers. Unlike traditional semiconductors, CPs are intrinsically conductive without requiring any additional dopants. This unique combination of electrical and mechanical properties makes conducting polymers highly attractive for various applications, including electronics, biosensors, actuators, and energy storage devices. The electrical conductivity of conducting polymers arises from the delocalization of \(\pi\) electrons, which occurs through the presence of alternating single and double bonds along their polymer chains. These \(\pi\) electrons can move freely through the conjugated system, allowing the movement of charge carriers (electrons and holes) consequently resulting in electrical conductivity. CPs possess a valence band (HOMO - Highest Occupied Molecular Orbital) and a conduction band (LUMO - Lowest Unoccupied Molecular Orbital) [1]. The energy gap between the HOMO and LUMO determines the material's bandgap, affecting its electrical properties [2]. In their pure, undoped state, organic polymers may behave as insulators or semiconductors due to the large energy gap between the HOMO and LUMO [3]. Nonetheless, doping allows these materials to become conductive. Doping entails the introduction of additional charge carriers into the material, achieved through the incorporation of electron donors (n-type doping) or electron acceptors (p-type doping). This deliberate addition of charge carriers reduces the energy gap, thereby facilitating the movement of charge carriers and enhancing the electrical conductivity of organic polymers. Importantly, it's worth noting that not all organic semiconductors necessitate doping to exhibit their desired electrical properties. In the case of CPs, the energy gap (bandgap) between the HOMO and LUMO is relatively small compared to insulators but larger than true metals. CPs exhibit distinct electrical and mechanical properties, allowing researchers to tailor their performance for specific applications. The electrical conductivity of conducting polymers can be tuned by varying factors such as oxidation state, doping level, and environmental conditions. CPs can be chemically doped or electrochemically doped to enhance their conductivity. By doping, additional charge carriers are introduced into the material, increasing electrical conductivity. Moreover, the mechanical properties of CPs are influenced by factors such as molecular weight, chemical structure, and processing methods. These polymers can be synthesized into various forms, including films, fibers, and coatings, while retaining conductivity. The flexibility and ease of processability make conducting polymers suitable for applications where traditional inorganic conductors may be limited due to their rigidity. The unique combination of electrical conductivity and mechanical flexibility enables conducting polymers to be used in electronic devices such as organic transistors, flexible displays, and printed circuits. 
They are also employed as sensing elements in chemical sensors and biosensors, where their conductivity changes upon interaction with specific analytes. In the field of energy storage, conducting polymers are explored for applications in supercapacitors and batteries due to their high charge storage capacity. One of the pioneering conducting polymers is polyaniline (PANI), whose conductive properties were first discovered in the late 1970s. Since then, several other conducting polymers, such as polythiophenes (PTs), polypyrrole (PPy), and poly(3,4-ethylenedioxythiophene) (PEDOT), have been developed and extensively studied [4]. PEDOT is the most ubiquitous of the organic mixed ionic/electronic conductors (OMIECs), a class of materials exhibiting simultaneous electronic and ionic conductivity [5]. This unique combination of properties makes OMIECs highly valuable for various applications, including electrochemical devices, energy storage systems, actuators and artificial muscles, and biosensors [6]. OMIECs comprise soft organic materials, such as conducting polymers or small organic molecules, that can conduct electrons and ions, offering advantages over traditional electronic or ionic conductors [7]. Figure 1 shows the chemical structure of commonly used conducting polymers.

### Organic Semiconductors

Organic semiconductors are a class of organic materials with unique electronic properties, lying between traditional conductors and insulators. The band gap between organic semiconductors' valence band and conduction band is relatively lower than in insulators and higher than in conducting polymers. The organic semiconductors are composed of carbon-based molecules or polymers, known as \(\pi\)-conjugated systems, which enable the movement of charge carriers (electrons and holes) through their conjugated molecular structure [8; 9; 10; 11]. Small-molecule semiconductors consist of discrete, well-defined organic molecules, while polymer-based semiconductors comprise long-chain polymer structures with repeating monomer units. Organic molecules primarily comprise carbon atoms bonded to hydrogen, oxygen, nitrogen, and other elements. Carbon's ability to form stable covalent bonds with various other atoms allows for the diverse and complex structures found in organic molecules. Organic molecules contain specific functional groups, which are arrangements of atoms that confer distinct chemical properties and reactivity to the molecule. For example, the hydroxyl group (-OH) in alcohols is hydrophilic, while the carbonyl group (-C=O) in ketones and aldehydes has unique reactivity. In recent years, several organic molecular semiconductors have been extensively studied, including oligoacenes, oligothiophenes, discotic liquid crystals, triphenylamines, perylenes, tetrathiafulvalenes, and fullerenes [12]. Similarly, prominent examples of organic polymeric semiconductors include polyparaphenylenevinylene (PPV), polyparaphenylene (PPP), polyfluorene (PF), and polyfluorene copolymers [13]. Both types of organic semiconductors offer advantages, such as solution processability and low-temperature deposition, making them suitable for various electronic and optoelectronic applications. Their versatile properties contribute to their widespread use in developing innovative and cost-effective devices for modern technologies.
The unique properties of organic semiconductors have led to their integration into a wide range of electronic devices, such as organic field-effect transistors (OFETs) [14; 15], organic light-emitting diodes (OLEDs) [16; 17; 18], organic photovoltaics (OPVs) [19; 20; 21], and organic sensors [22]. OFETs utilize organic semiconductors as the active channel material, enabling flexible and low-power transistor devices. Organic sensors utilize the sensitivity of organic semiconductors to detect changes in environmental parameters, such as gas concentration or biomolecular interactions [23]. Figure 2 shows examples of commonly used small molecule-based and polymer-based organic semiconductors for different types of bioelectronics devices [24; 25; 26]. While organic semiconductors have numerous advantages, such as flexibility and cost-effectiveness, they are not without challenges. These challenges encompass relatively lower charge carrier mobility when compared to their inorganic counterparts and susceptibility to environmental factors like humidity and temperature, as noted in recent studies [27; 28; 29]. To overcome these limitations, researchers are actively exploring advanced material engineering, innovative doping techniques, and novel device architectures to enhance the performance and stability of organic semiconductors [30; 31]. Figure 1: Chemical structure of commonly used conducting polymers: polyaniline (PANI), polypyrrole (PPy), polythiophene(PTs), polyacetylene, polyphenylene, and poly(3,4-ethylenedioxythiophene) (PEDOT). Organic semiconductors remain a highly promising platform for developing flexible, cost-effective, and energy-efficient electronic devices [32]. Their unique properties and versatile applications have positioned them as compelling candidates for the next generation of electronic and optoelectronic innovations. These advancements drive innovation across various domains, including wearable technology, flexible displays, and renewable energy solutions. As research and development in organic semiconductors continue to progress, new opportunities are anticipated to emerge in the ever-evolving realm of organic electronics. ### Biomolecules as Sensing Elements Biomolecules serve as highly sensitive and selective sensing elements in various biosensing applications. These natural macromolecules, including proteins, nucleic acids, enzymes, and antibodies, exhibit specific interactions with target analytes, enabling the detection and quantification of various substances with remarkable accuracy. The inherent recognition capabilities of biomolecules make them valuable sensing elements in biosensors, enabling real-time monitoring of biochemical reactions and detecting analytes with exceptional specificity. Figure 2: Chemical structures of organic semiconductors **(a)**\(\pi\)-conjugated small molecular families based semiconductors, and **(b)** polymer-based semiconductors. One of the key advantages of using biomolecules as sensing elements is their ability to bind specifically to target molecules, known as ligands or antigens, through molecular recognition processes [33]. This binding interaction is governed by complementary shapes and chemical properties between the biomolecule's active sites and the target analyte, allowing for highly selective detection [34]. The high affinity of biomolecules to their target analytes ensures that biosensors can distinguish between similar molecules, achieving precise and reliable measurements. 
Various techniques are employed to immobilize biomolecules onto the sensor surface while maintaining biological activity. Surface modification methods, such as physical adsorption, covalent binding, and self-assembled monolayers, allow the biomolecules to remain functional while attached to the sensor surface [35, 36, 37, 38]. Immobilization ensures the biomolecular sensing elements remain near the transducer, facilitating efficient signal transduction upon analyte binding. Moreover, enzymes are a specific class of biomolecules extensively used in biosensing applications due to their catalytic activity [39, 40]. Enzymatic biosensors utilize enzymes as sensing elements with a transducer to generate a detectable signal proportional to the concentration of the target analyte. This resulting signal arises from enzymatic reactions that induce changes in proton concentration, gas release/uptake (e.g., ammonia or oxygen), light emission, heat release, and more [41]. The transducer converts this signal into a measurable response, like current, potential, temperature change, or light absorption, using electrochemical, thermal, or optical methods. This signal can be amplified, processed, or stored for subsequent analysis. Additionally, antibodies are highly specific recognition elements used in immunoassays [42, 43]. They can selectively bind to antigens, pathogens, toxins, or specific biomolecules, forming antibody-antigen complexes. These complexes are detectable through various transduction methods, such as optical, electrochemical, or piezoelectric signals, allowing for sensitive and specific detection of the target analyte. Furthermore, nucleic acids, such as DNA and RNA, are utilized in nucleic acid-based biosensors [44, 45, 46, 47]. These sensing elements recognize specific DNA sequences or RNA targets through hybridization reactions. Nucleic acid biosensors are vital for genetic analysis, disease diagnostics, and monitoring of nucleic acid-based biomarkers. Figure 3 illustrates the biomolecules-based biosensors. Overall, biomolecules serve as powerful sensing elements in biosensors, enabling the detection of a wide range of analytes, including proteins, nucleic acids, small molecules, and even viruses or bacteria. Their high specificity, sensitivity, and ability to function under physiological conditions make them invaluable tools in medical diagnostics, environmental monitoring, food safety, and various other applications. As research continues, integrating biomolecules into novel sensing platforms promises to revolutionize biosensing technology, opening up new avenues for precise, rapid, and cost-effective detection of analytes in diverse fields. ### Nanomaterials Nanomaterials are materials characterized by nanoscale dimensions, typically ranging from 1 to 100 nanometers in at least one dimension [48]. These materials exhibit unique properties that differ significantly from their bulk counterparts, making them valuable for various science, engineering, and technology applications. The small size of nanomaterials results in a high surface-to-volume ratio, leading to enhanced reactivity and increased surface area for interactions with other materials. This unique feature allows for tailoring their physical, chemical, and mechanical properties through precise size, shape, and composition control [49]. 
Based on dimensionality, nanomaterials can be categorized into four main categories: zero-dimensional (0D), one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) nanomaterials (see Figure 4). Zero-dimensional nanomaterials are nanoparticles with nanoscale dimensions in all three directions. Examples include nanoparticles and quantum dots. Nanoparticles comprise metals, metal oxides, semiconductors, polymers, and other materials. Due to their small size, nanoparticles exhibit quantum confinement effects, where their electronic and optical properties become size-dependent. This phenomenon leads to novel optical, electrical, and magnetic behaviors different from bulk materials. For example, gold nanoparticles exhibit unique plasmonic properties, making them suitable for applications in sensing and imaging [50]. Another example of 0D nanomaterials is nanocomposites, formed by combining nanoparticles with a matrix material to enhance specific properties. These materials integrate the unique properties of nanoparticles, such as enhanced surface area and tailored functionality, with the structural support of the matrix material [51, 52]. In biosensing, nanocomposites can be engineered to create susceptible and selective sensors. Nanoparticles can act as signal amplifiers, enhancing the detection signal through their distinctive optical, electrical, or catalytic properties. The matrix material provides stability, mechanical strength, and a platform for biomolecular immobilization. By judiciously selecting nanoparticle types and incorporating them into the matrix, nanocomposite-based biosensors can achieve superior sensitivity, rapid response, and the capability to detect a wide range of analytes, including biomolecules and pathogens. For instance, incorporating hemin and silver-coated gold nanoparticles into a graphene oxide sheet led to a highly stable catalytic nanozyme with excellent detection performance [53]. One-dimensional (1D) nanomaterials have nanoscale dimensions in two directions, while the third dimension is in the micrometer range. Carbon nanotubes (CNTs) and nanowires are noteworthy examples. CNTs are cylindrical nanostructures of carbon atoms arranged in a hexagonal lattice, forming a tubular shape. Due to their unique atomic arrangement, they exhibit remarkable mechanical, electrical, and thermal properties. CNTs can be single-walled (SWCNTs) or multi-walled (MWCNTs), with differing properties based on their structure [54, 55]. SWCNTs have extraordinary electrical conductivity and can be semiconducting or metallic, making them ideal for various electronic and energy storage applications. MWCNTs, on the other hand, possess exceptional strength and are used in reinforcement materials. Their high aspect ratio, surface area, and tunable properties have led to their utilization in diverse fields, including nanotechnology, materials science, electronics, and biomedical applications. Two-dimensional (2D) nanomaterials have nanoscale dimensions in one direction while the other two remain relatively larger. The most notable example is graphene, a single layer of carbon atoms arranged in a two-dimensional honeycomb lattice. Graphene has garnered immense attention for its exceptional properties and diverse applications, particularly in biosensing [56, 57, 58]. Its remarkable electrical conductivity, high surface area, and biocompatibility make it Figure 3: Schematics illustration of biomolecules-based biosensors. 
**(a)** antibody-based; **(b)** enzyme-based biosensors; **(c)** DNA/RNA-based biosensors. a promising biosensor candidate. Graphene-based biosensors can detect biomolecules with high sensitivity and specificity, as the binding of target molecules leads to changes in their electrical properties. Its two-dimensional nature enables efficient interaction with biomolecules, enhancing sensor performance. Additionally, graphene's ease of functionalization allows the attachment of specific biomolecular recognition elements, enhancing selectivity [59, 60]. Three-dimensional (3D) nanomaterials are advanced structures that extend into the nanoscale in three spatial dimensions, offering unique properties and a high degree of control over their physical and chemical characteristics. These materials are recognized for their exceptional electroactive surface area, which allows for a higher loading capacity of recognition elements, such as antibodies or aptamers, thereby making them highly effective in targeting specific analytes, amplifying signals, and facilitating efficient biosensing with increased sensitivity and specificity. This category includes intricate hierarchical nanoscale structures and nanocomposites, which play a significant role in 3D materials [61]. A notable example of 3D nanomaterials used in biosensing is the utilization of 3D graphene nanostructures. For instance, Chen et al.[62] developed a three-dimensional electrochemical DNA biosensor utilizing silver nanoparticles decorated on a 3D graphene foam to detect CYRRA21-1 in lung cancer samples. Another study employed a graphene-metallic hybrid trimetallic nanoflower composite (3D GR/AuPtPd) to detect epidermal growth factor receptor (EGFR) ctRNA in human serum [63]. Moreover, 3D hollow photoactive nanomaterials (such as Hollow CdS@Au nanospheres) have been instrumental in constructing multimodal biosensors for carcinoembryonic antigen detection, offering increased sensitivity through enhanced light capture attributed to their unique hollow nanostructures [64]. Other types of classifications of nanomaterials (e.g., organic, carbon, and inorganic) have been extensively discussed in several published articles [65, 66]. In biomedicine, nanomaterials have shown significant promise in drug delivery systems, where nanoparticles can be functionalized to carry therapeutic agents and selectively target specific cells or tissues. Additionally, nanomaterials are utilized in diagnostic imaging and biosensing applications, where their unique properties enable susceptible and specific detection of biological analytes. However, despite their promising advantages, nanomaterials also raise concerns regarding their potential toxicity and environmental impact [67, 68, 69]. Due to their small size, nanomaterials can easily penetrate biological barriers and interact with living organisms in ways that larger particles cannot. Therefore, extensive research is ongoing to understand and mitigate the potential risks of using nanomaterials. Nanomaterials present a wealth of opportunities for groundbreaking innovations in diverse fields. Their unique size-dependent properties and versatility allow for tailoring material behavior to specific applications, leading to advances in electronics, medicine, energy, environmental remediation, and beyond. As nanotechnology continues to evolve, responsible and sustainable development of nanomaterials remains critical to ensure their safe and beneficial integration into various technological and biomedical applications. 
## 3 Organic Bioelectronic Devices ### Organic Field-Effect Transistors (OFETs) Organic Field-Effect Transistors (OFETs) are semiconductor devices that utilize organic materials as the active channel to control the flow of charge carriers (electrons or holes) between the source and drain electrodes, modulated by an externally applied electric field at the gate electrode. OFETs have gained considerable attention recently due to their potential for low-cost, flexible, and large-area electronic applications, such as displays, sensors, and integrated circuits [70, 71]. The basic structure of an OFET consists of three main components: the source, drain, and gate electrodes, all deposited on a substrate. The active channel material, typically an organic semiconductor, forms a thin film between the source and drain electrodes. A gate insulator layer separates The gate electrode from the channel material, often made of organic or inorganic dielectric [72, 73]. Figure 5(a) illustrates the basic components of the OFET. The operation of an OFET relies on applying a gate voltage, which creates an electric field across the gate insulator and the channel material. This electric field either enhances or depletes the concentration of charge carriers in the channel, depending on the type of OFET (n-type or p-type). In an n-type OFET, the applied gate voltage increases the concentration of electrons in the channel, while in a p-type OFET, it increases the concentration of holes. The modulation of charge carriers in the channel material leads to a change in the conductivity between the source and drain electrodes. This change in conductivity is responsible for amplifying the input signal at the gate and producing a corresponding output signal at the drain, making OFETs function as amplifiers or switches. One of the significant advantages of OFETs is their compatibility with low-cost, large-area manufacturing processes, such as solution-based deposition techniques like spin-coating or inkjet printing. The solution processability of organic semiconductors allows for the fabrication of flexible and stretchable devices on various substrates, including plastic and paper. The versatility of organic materials enables tailoring the active channel's electronic properties to specific application requirements. By modifying the molecular structure or introducing chemical dopants, researchers can optimize the charge transport behavior, charge carrier mobility, and overall device performance of OFETs. OFETs find applications in various electronic devices, including electronic paper, flexible displays, RFID tags, biosensors, and logic circuits [74]. Additionally, OFET-based sensors have been developed for detecting various environmental and biological analytes, making them attractive for applications in healthcare, environmental monitoring, and point-of-care diagnostics. However, despite their advantages, challenges in OFET technology remain, such as improving charge carrier mobility, stability, and reproducibility [75, 76]. Researchers continue to explore novel materials, device architectures, and fabrication techniques to enhance the performance and reliability of OFETs, paving the way for their integration into next-generation electronics and wearable technologies. ### Organic Electrochemical Transistors (OECTs) OECTs are electronic devices that utilize organic materials to enable ion-mediated modulation of electrical conductivity. 
These transistors have gained significant attention due to their unique properties, such as low operating voltage, biocompatibility, and mechanical flexibility, making them suitable for various applications, including biosensing, neuromorphic computing, and bioelectronics [77]. The basic structure of an OECT consists of three main components: the source, drain, and gate electrodes, all integrated into a substrate [78]. Figure 5(b) shows a typical OECT schematic diagram. The operation of an OECT relies on the electrochemical doping and de-doping of the organic channel material. When a voltage is applied to the gate electrode, ions from the electrolyte solution penetrate the organic channel material, creating mobile charge carriers, either positively charged holes or negatively charged ions. This process is known as redox doping or ion-electron coupling. The presence of mobile charge carriers in the channel material modulates its electrical conductivity, affecting the current flow between the source and drain electrodes [79]. The channel's doping level can be adjusted by controlling the gate voltage, amplifying the input signal, and resulting in large changes in the output current [80]. This unique ion-modulated transistor behavior sets OECTs apart from traditional field-effect transistors (FETs), where the current flow is regulated by applying an electric field across the gate-insulator interface. Figure 4: Schematic illustration of nano-structured materials classified based on dimensionality. The OECT devices work in two modes: depletion and accumulation modes [81]. By default, the depletion mode OECT operates with its channel in a conducting (ON) state, requiring an applied gate voltage to reduce its conductivity or switch it OFF. This type of organic transistor is constructed using organic semiconductor materials (e.g., PEDOT:PSS) and relies on ion transport within the semiconductor to modulate its conductivity. In contrast, the accumulation mode OECT remains in a non-conducting (OFF) state until a negative gate voltage is applied. This voltage accumulates charge carriers within the organic semiconductor channel, allowing current to flow and turning the transistor ON. Like Depletion Mode OECTs, Accumulation Mode OECTs also employ organic semiconductors (e.g., p(g2T-TT)) and ion transport for their operation, but their default state is non-conductive, requiring a gate voltage to activate them. One of the key advantages of OECTs is their biocompatibility, which enables their integration into biological systems without inducing significant adverse effects. This property makes OECTs ideal for interfacing with living cells and tissues, enabling applications in neural interfaces and bioelectronic devices [82]. Additionally, OECTs operate at low voltages, reducing power consumption and enabling the development of energy-efficient electronic systems. OECTs are widely employed in biosensing applications because they can transduce ion concentrations into electrical signals. By functionalizing the OECT channel with specific biomolecules or enzymes, the device can selectively detect and quantify target analytes, such as ions, neurotransmitters, glucose, or DNA, with high sensitivity and specificity. These biosensors find applications in medical diagnostics, environmental monitoring, and wearable health monitoring. Furthermore, OECTs have been utilized in neuromorphic computing, where they mimic the behavior of biological neurons in artificial neural networks. 
Their ion-mediated operation allows for dynamic signal processing and synaptic-like behavior, making them promising candidates for brain-inspired computing and pattern recognition tasks. While OECTs offer numerous advantages, challenges remain in optimizing their stability, reproducibility, and scalability for large-scale production. Researchers continue to explore novel materials, device architectures, and fabrication methods to enhance the performance and reliability of OECTs, paving the way for their widespread adoption in cutting-edge electronic and bioelectronic technologies. ### Organic Electronic Ion Pumps (OEIPs) Organic electronic ion pumps represent a burgeoning area of research in bioelectronics, where the principles of organic materials and electronics converge to create advanced systems for ion transport [83, 84]. These ion pumps utilize organic materials with specific ion-selective properties to enable controlled and precise transport of ions, such as cations, anions, protons, or other charged species. The underlying principle involves utilizing organic materials that can change their state, conductivity, or permeability when exposed to external stimuli such as voltage or chemical signals. By applying an electrical potential, these materials can effectively regulate the movement of ions across a membrane or interface. Figure 5(c) depicts the typical device configuration of a potential ion-selective OEIP and a cation-selective OEIP. As shown, an OEIP comprises two electrodes separated by an ion-exchange membrane. When a voltage is applied between the two electrodes, one of which is positioned beneath the ion reservoir and the other situated at the target area, cations (or anions) migrate from the reservoir through the respective exchange membrane to the delivery site [85]. The OEIPs' capability to manipulate ion transport holds significant implications across diverse domains, ranging from addressing therapeutic challenges through targeted drug delivery and neural modulation to applications in biotechnology and bioengineering. Examples of OEIPs encompass triggering cell signaling in vitro [86, 87], controlling epileptiform activity in brain slice models [88], influencing sensory functions in vivo [89], serving as pain therapy in awake animals [90], and even regulating plant growth through the delivery of phytohormones [91]. Organic electronic ion pumps offer several advantages, including biocompatibility, flexibility, and the potential for miniaturization. These properties make them well-suited for integration into bioelectronic devices and implantable systems [92, 93]. OEIPs can be designed to work in tandem with other components like sensors, actuators, and communication modules. This integration allows for dynamic feedback loops, enabling real-time adjustments in ion transport based on physiological responses or external triggers. As research advances, the development of organic electronic ion pumps has the potential to revolutionize the field of bioelectronics, opening up new avenues for creating smart and responsive bio-integrated systems that interface seamlessly with biological environments and hold promise for a range of medical, therapeutic, and biotechnological applications. ### Organic Photodetectors (OPDs) Organic photodetectors (OPDs) are optoelectronic devices that convert incident light into electrical signals through the photoelectric effect, utilizing organic materials as the active absorbing layer. 
These devices have gained significant attention due to their potential for low-cost, flexible, and large-area optoelectronic applications, including image sensors, photodiodes, and light detectors [97]. Figure 5: **(a)** Schematic diagram of organic field-effect transistors (OFETs); **(b)** Typical structure of an organic electrochemical transistor (OECT). Adapted from Friedlein et al. [78], _Organic Electronics 2018, 63, 398–414_, ©2018 The Authors, licensed under a Creative Commons license.; **(c)** Schematic illustration of OEIP device configuration and the working principle of a potential ion-selective OEIP (top) and a cation-selective OEIP (bottom). As illustrated, applying a potential between electrodes establishes an electrochemical circuit. Within this circuit, cations or anions from a source electrolyte are selectively conveyed to a target electrolyte through an ion exchange membrane. Adapted with permission from Cherian et al. [94], _Flex. Print. Electron. 2019, 4, 02200_. ©2019 The Authors, published by IOP Publishing Ltd. under the terms of the Creative Commons Attribution 3.0 license.; **(d)** Device configurations of OPDs: organic photoconductor (PC-OPD), organic photoresistor (PT-OPD), and organic photodiode (PD-OPD). _ETL-_electron transport layer, and _HTL-_ hole transport layer. Adapted with permission from Liu et al.[95], _Solar Rrt 2020, 4, 7, 2000139_, ©2020 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.; **(e)** Schematic of a vIGT (left), L: vertical channel length, S: source, G: gate, D: drain, Colourized cross-section scanning electron microscopy image of a vIGT (center). The pink and blue regions are the source and drain contacts, respectively, and the optical micrograph displays the top view of an individual vIGT (right), blue color: drain contact, pink: source contact, green: ion membrane. Reprinted with permission from Cea et al. [96], _Nat. Mater. 2023, 22, 1227–1235_. ©2022 The Authors, published by Springer Nature under a Creative Commons Attribution 4.0 International License. The basic structure of an organic photodetector typically comprises an organic semiconductor layer sandwiched between two electrodes, acting as the anode and cathode. The organic semiconductor layer absorbs photons from incident light, generating electron-hole pairs in the material. Depending on the type of OPD, either electrons or holes are transported through the organic layer to the respective electrodes. The operation of an OPD is based on the efficient generation, separation, and collection of photo-excited charge carriers. When photons with energy equal to or greater than the semiconductor bandgap are absorbed, excitons (electron-hole pairs) are created. These excitons must be efficiently dissociated into free-charge carriers to generate a photocurrent. To enhance exciton dissociation, OPDs often incorporate donor-acceptor heterojunctions, where the energy levels of the donor and acceptor materials promote efficient charge separation. The photocurrent generated in the organic layer is collected at the electrodes, and the magnitude of the photocurrent is proportional to the intensity of the incident light. Figure 5(d) shows the three different architectures of OPDs, namely, photoconductors-based OPDs (PC-OPDs), phototransistors-based OPDs (PT-OPDs), and photodiodes-based OPDs (PD-OPDs). The PT-OPDs comprised three electrodes: gate, source, and drain. In contrast, PC-OPDs and PD-OPDs are configured based on two electrodes (i.e., anode and cathode). 
PC-OPDs leverage photoconductivity in organic materials to detect light, offering sensitivity across a wide spectrum. PT-OPDs employ a transistor structure for amplified sensitivity, making them ideal for low-light conditions. PD-OPDs combine organic semiconductors with photodiode principles, delivering high-speed and efficient light detection, which is crucial for applications like optical communication and rapid imaging. Each OPD type caters to specific needs, providing a versatile toolkit for various optoelectronic applications. Apart from Sandwich types, planar-type photodetectors have also been used. These photodetectors are semiconductor devices with a planar structure designed for efficient light detection and conversion into electrical signals. These devices, typically made from semiconductor materials like silicon (Si) and gallium arsenide (GaAs) [98, 99, 100], operate on a fundamental principle where incident photons with energy greater than the semiconductor's bandgap generate electron-hole pairs when they strike the device's surface. The resulting free electrons and holes are then separated and collected by an internal electric field, creating a photocurrent or a change in voltage, which is directly proportional to the intensity of the incident light. Planar-type photodetectors encompass various designs, including photodiodes, phototransistors, and avalanche photodiodes [101, 102, 103]. Photodiodes collect the separated carriers directly, offering a linear response to incident light. Phototransistors amplify the signal by using the generated carriers to control a larger current flow, while avalanche photodiodes, intended for applications requiring high sensitivity, leverage avalanche multiplication to produce a substantial number of charge carriers. These photodetectors are extensively applied in optical communication systems, imaging devices, optical sensors, and many applications demanding light detection. OPDs exhibit high responsivity, low dark current, and fast response times, making them suitable for a wide range of light detection applications. One of the key advantages of OPDs is their compatibility with solution-based deposition techniques, enabling the fabrication of large-area and flexible devices on various substrates. The tunability of organic materials allows for optimizing their light absorption properties to match specific wavelengths or spectral ranges, making OPDs versatile for various optical sensing and imaging applications. OPDs are used in diverse optoelectronic devices, such as image sensors [104, 105, 106], light-sensitive arrays, photodetector arrays [107], and position-sensitive detectors [108, 109, 110]. They find applications in digital cameras, medical imaging, light-based communication systems, and optical sensors for environmental monitoring and industrial applications. Additionally, organic materials' flexibility and lightweight nature enable the development of wearable and conformable photodetectors for wearable health monitoring, biometric sensing, and smart textiles. Despite their advantages, challenges in OPD technology include improving the external quantum efficiency, enhancing the stability of organic materials under prolonged light exposure, and achieving high-speed response times for rapid optical sensing applications [95]. 
Researchers are actively exploring novel organic materials, device architectures, and engineering strategies to overcome these challenges and unlock the full potential of organic photodetectors in the emerging field of organic optoelectronics. ### Organic Bioelectronic Implants Organic bioelectronic implants are advanced medical devices integrating organic electronic materials and components into living tissues to enable various therapeutic or diagnostic functionalities. These implants represent a cutting-edge field of research and development in the intersection of organic electronics and biomedicine, offering unique advantages for medical applications [111, 112]. Organic bioelectronic implants constitute a complex assembly of crucial components aimed at interfacing with biological systems while delivering therapeutic or monitoring functions. Central to their design are organic semiconductors, conductive polymers like poly(3,4-ethylene dioxythiophene): poly(styrene sulfonate) (PEDOT:PSS), and specialized organic electronic materials meticulously chosen for their biocompatibility, mechanical flexibility, and ability to seamlessly integrate with biological tissues, all while evading significant immune responses [90]. These materials serve as the foundation for the implant's active elements. Organic bioelectronic implants exhibit adaptability by incorporating sensors to monitor vital physiological parameters like pH, temperature, glucose levels, or specific biomarkers. Additionally, they integrate stimulating components such as electrodes or transducers capable of delivering targeted electrical or chemical signals. These signals serve therapeutic objectives such as deep brain stimulation or promoting neural regeneration. To ensure the longevity and efficacy of the implant, it is encapsulated within biocompatible materials or coatings. This encapsulation acts as a protective barrier against unwanted interactions with the surrounding biological environment. In a recent example, Cea et al. [96] developed a tiny, fully organic bioelectronic device that acquires and transmits brain signals and self-powers. The device is about 100 times smaller than a human hair and is based on an IGT (internal-ion-gated organic electrochemical transistor) architecture, the vIGT (vertical internal ion-gated organic electrochemical transistor) that incorporates a vertical channel made of PEDOT:PSS and a miniaturized water conduit (H-via) from the surface of the device through the ion membrane layer to permit channel hydration, demonstrating long-term stability, high electrical performance, and low-voltage operation to prevent biological tissue damage. Figure 5(e) demonstrates the device architecture schematics, SEM image and optical micrograph of the vIGT. Furthermore, these implants harness wireless communication, enabling connectivity with external devices for data collection, remote control, and programming. This breakthrough promises a revolution in patient monitoring and treatment optimization, as demonstrated by recent studies [113, 114, 115]. They also employ innovative power management systems, including energy harvesting and wireless charging, ensuring sustainable operation and reducing the need for frequent battery replacements. An additional advantage lies in the mechanical flexibility of organic bioelectronic implants, enabling seamless integration with irregular and dynamic tissue shapes and movements. 
This adaptability proves invaluable when implants are placed in soft, curved body regions like the brain, heart, or spinal cord. Moreover, recent advancements have led to the development of biodegradable organic bioelectronic implants. These designs gradually dissolve over time, minimizing harm to surrounding tissues and eliminating the need for additional surgical removal. These implantable bioelectronic devices offer immense potential across diverse medical applications. Organic sensors can precisely monitor drug release rates and tailor dosages for personalized drug administration. Moreover, they facilitate tissue regeneration by offering electrical or biochemical cues to spur cell growth and tissue repair. Notably, these devices find application in neuroprosthetics, including cochlear implants for hearing restoration and retinal implants for vision enhancement [116]. Additionally, they are employed for simulating peripheral nerves to treat disorders resistant to traditional pharmacological interventions. As organic bioelectronic implants advance, ongoing research focuses on optimizing biocompatibility, stability, and long-term functionality and addressing challenges related to immune responses, long-term biointegration, and regulatory approvals. With continued innovations, organic bioelectronic implants have the potential to revolutionize personalized medicine, ushering in a new era of advanced healthcare and improved quality of life for patients. ## 4 Fabrication methods Organic bioelectronic devices are predominantly fabricated/patterned using several approaches, such as organic thin-film deposition methods, patterning techniques, 3D printing, and organic synthesis. **Organic thin-film deposition**: These methods are widely used for depositing thin films of organic materials on substrates with controlled thickness and uniformity. One common technique is spin-coating. In spin-coating, an organic material solution, such as semiconductors, conductive polymers, or other active components, is deposited onto a flat substrate, typically a silicon wafer or glass, which can be further integrated into a device (see schematics in Figure 6a). As the substrate spins at high speeds, centrifugal forces evenly distribute the material, resulting in a thin, uniform film. The spin coating offers precise control over the thickness and quality of the deposited organic films, allowing researchers to optimize these devices' electrical and optical properties. This method is ideal for producing organic semiconductor layers used in devices like organic field-effect transistors (OFETs) and organic photodetectors [117, 118]. With its scalability, cost-effectiveness, and ongoing refinements, spin coating plays a central role in various applications, from flexible electronics to medical diagnostics and wearable health monitoring, ensuring the advancement of organic bioelectronics in diverse fields. Vacuum evaporation is another thin-film deposition method. It facilitates the precise deposition of organic materials onto various substrates under reduced pressure conditions. In this process, organic materials, such as semiconductors, conductive polymers, and other key bioelectronic components, are heated to their vaporization points and then allowed to condense onto the target substrate, creating thin organic films with exceptional uniformity and precise thickness control. 
This level of control is indispensable in developing organic electronic devices, including organic field-effect transistors (OFETs) and organic photodetectors, where the properties of the organic layer directly influence device performance. Vacuum evaporation enables the sequential deposition of multiple organic layers, making it possible to design complex device architectures. This capability is invaluable as organic bioelectronic devices often require distinct functional layers for sensing, signal processing, and data transmission. Additionally, vacuum evaporation is a low-temperature deposition technique that safeguards the structural integrity of heat-sensitive organic materials. It also provides a pristine vacuum environment that minimizes contamination, ensuring the quality of the deposited organic films. In the realm of organic bioelectronics, vacuum evaporation plays a critical role in manufacturing devices like biosensors, organic photovoltaics, and implantable bioelectronic systems. For instance, vacuum evaporation is often used in OLED fabrication [119, 120]. **Patterning techniques**: Several methods are employed for patterning electrodes for organic electronic devices, such as photolithography, e-beam lithography, dip-pen lithography, inkjet printing, micro-contact printing, screen printing, direct ink writing, laser writing, etc. _Photolithography_: It is a well-established technique for patterning organic materials at micron and submicron scales. The photolithography process begins with a substrate, typically made of silicon or glass, coated with a layer of photoresist, a photosensitive organic material. A photomask containing the desired pattern is placed near the photoresist-coated substrate, and the entire assembly is exposed to ultraviolet (UV) light. The exposed regions undergo a chemical change, making them either more soluble (in the case of positive photoresists) or less soluble (for negative photoresists) in a developer solution, depending on the type of photoresist used. The developer solution is applied to the substrate, removing the selected areas and leaving behind the desired pattern (see schematics in Figure 6b). Photolithography stands out due to its exceptional resolution and accuracy, making it capable of crafting intricate micro and nanoscale structures relevant to organic bioelectronics. The adaptability of this technique to a range of organic materials facilitates the fabrication of diverse bioelectronic components. Nonetheless, cautious handling of organic materials is essential, as some may be sensitive to UV exposure and chemical developers. Additionally, meticulous design of photomasks is imperative to achieve the desired patterns. Photolithography is employed in organic bioelectronic device fabrication to create features like electrodes, sensor structures, and microfluidic channels [121, 122, 123, 124]. _Electron beam (e-beam) Lithography_: e-beam lithography or EBL is an advanced nanofabrication technique that operates on the fundamental principle of using a focused beam of electrons to create incredibly fine patterns and structures at the nanometer scale. It has found applications in various fields, including semiconductor device fabrication, nanotechnology, and micro-electromechanical systems (MEMS). Unlike conventional photolithography, EBL can achieve unparalleled resolution, crafting features with dimensions down to just a few nanometers. 
This capability arises from its direct-write process, where a precisely controlled electron beam moves across an electron-sensitive resist on a substrate to define intricate custom patterns. While slower and more complex than some alternatives, e-beam lithography is crucial in developing advanced nanoscale devices, specialized structures in research laboratories, and creating masks and photomasks for semiconductor manufacturing. _Dip-Pen Nanolithography (DPN)_: DPN is an advanced nanofabrication technique that leverages the precision of scanning probe microscopy (SPM) for the controlled deposition of molecules, nanoparticles, or biomolecules onto a substrate with nanometer-scale precision. In this method, an atomic force microscope (AFM) tip coated with an "ink" material is submerged, or "dipped," into the ink and then brought into contact with a substrate under the guidance of the AFM (schematics in Figure 6c). DPN is renowned for its extraordinary sub-10 nanometer resolution, making it a pivotal tool in various domains, such as nanoelectronics, nanophotonics, and nanobiotechnology. Its remarkable versatility extends to the patterning of diverse materials, including conducting polymers, biological compounds like proteins or DNA, nanoparticles, and more, enabling the creation of various structures, from lines and dots to intricate two-dimensional and three-dimensional designs. DPN finds applications across several domains: in nanoelectronics for the development of nanoscale electronic components and features on semiconductor chips, in nanophotonics for crafting optical devices, photonic circuits, and metamaterials, in biosensing for creating highly sensitive and specific biosensors, in surface functionalization for engineering specific surface properties, and in nanomaterials synthesis for precise control of nanoparticle properties. While DPN offers exceptional precision, it can be a relatively slow and serial process, limiting its application for large-scale manufacturing, and the choice of ink, tip, substrate, and environmental conditions significantly influence pattern quality and reproducibility. _Micro contact printing (\(\mu\)CP)_: \(\mu\)CP is a widely used soft lithography technique employed for precise and controlled deposition of materials, often in the form of self-assembled monolayers (SAMs), on a substrate. The process is akin to conventional rubber stamp printing but on a micro- and nanoscale. \(\mu\)CP employs an elastomeric stamp, usually made of polydimethylsiloxane (PDMS), engineered with relief microstructures or patterns on its surface. The stamp is coated with an "ink" or material, which adheres only to the relief patterns. The stamp is then gently brought into contact with a substrate, transferring the material onto the substrate in the desired pattern (see schematics in Figure 6d). This process offers several advantages, including simplicity, cost-effectiveness, and the ability to create well-defined and precisely placed chemical patterns on various substrates, including metals, semiconductors, and organic materials. \(\mu\)CP is particularly valuable for creating surface chemistry modifications and developing biomolecule arrays and microscale patterning for various applications, including biosensors, microelectronics, and microfluidics. However, \(\mu\)CP also has some limitations. It may be less suitable for large-scale or high-throughput manufacturing processes, as it is inherently a serial process. 
The resolution of \(\mu\)CP depends on the stamp's relief structures and may not achieve the sub-10-nanometer scale of some advanced lithography techniques. Additionally, controlling the uniformity of the ink layer and ensuring consistent contact between the stamp and substrate can be challenging. Despite these limitations, micro-contact printing remains a powerful tool for many micro- and nanofabrication applications, particularly in research and prototyping scenarios. _Inkjet printing_: Inkjet printing, a highly versatile technique, has become integral to the realm of organic bioelectronics. The process involves depositing minuscule ink droplets onto a substrate, enabling controlled patterning of various functional materials, including organic semiconductors, conductive polymers, and biologically relevant molecules. Its prominence in this field stems from multiple advantages, such as exceptional precision and resolution, broad material compatibility, reduced material wastage due to its additive nature, high levels of customization to adapt complex designs for specific applications, non-contact printing, and scalability that accommodates everything from research-level prototyping to large-scale production [125, 126, 127, 128]. Inkjet printing plays a pivotal role in fabricating components for organic bioelectronic devices, including sensors, transistors, and electrochemical systems, and excels in the precise deposition of biomolecules crucial for biosensing and detection applications. This technology is a cornerstone in developing advanced medical diagnostics, wearable health monitoring devices, and implantable bioelectronics, promising significant contributions to healthcare and environmental monitoring. _Laser Writing_: Laser writing, also known as laser-induced forward transfer (LIFT), is an advanced microfabrication technique that employs a high-intensity laser beam to transfer material from a donor layer to a receiver substrate, enabling the precise deposition of micro- or nanoscale features. A laser pulse generates a shockwave within the donor material, propelling a small amount of material toward a transparent receiver substrate placed above it. This method offers exceptional precision, allowing for fine control over the position and size of the deposited material, making it ideal for creating intricate patterns, microarrays, and electronic devices. One of its significant advantages is versatility, as it can be used with various materials, including organic polymers, conductive substances, and biological compounds, making it suitable for applications ranging from organic electronics to biosensors. Due to its non-contact nature and direct-write capabilities, laser writing is precious for handling sensitive materials and enabling rapid prototyping. With the potential to achieve sub-micron resolutions, this technique has widespread applications in microelectronics, flexible electronics, organic photovoltaics, microfluidics, and tissue engineering, where high-resolution and customized structures are paramount for research, development, and specialized manufacturing processes. **3D printing**: The field of bioelectronics has witnessed remarkable progress with the integration of 3D printing technologies. These technologies are known for their streamlined processes, which empower the creation of intricate three-dimensional structures with exceptional precision, scalability, and adaptability [129, 130, 131]. 
Various 3D printing techniques, including fused deposition modeling (FDM), stereolithography (SLA), digital light processing (DLP), selective laser sintering (SLS), and direct ink writing (DIW), have been instrumental in patterning and fabricating materials with diverse strategies. Nevertheless, many of these technologies are often associated with specific material classes, such as thermoplastic polymers for FDM, photopolymer resins for SLA and DLP, and powdered polymers or metals for SLS, which impose limitations on the customization of inks. Within this landscape, DIW, an extrusion-based 3D printing technique that constructs 3D structures layer-by-layer through the precise deposition of inks via fine nozzles (schematics in Figure 6e), has emerged as the most versatile 3D printing technology, offering unprecedented capabilities for the development of bioelectronics. These inks may encompass various materials, spanning metals, ceramics, polymers, carbons, and even biocompatible substances such as cells or gels. The DIW printer follows a computer-generated design to create intricate and customized objects layer by layer [132]. **Chemical methods**: Organic bioelectronic devices can also be fabricated through diverse chemical methods, including polymerization, chemical vapor deposition (CVD), and self-assembly [133, 134, 135, 136, 137, 138, 139]. These methods allow for precise control over the molecular structure of materials, enabling the design of custom organic semiconductors, conductive polymers, and biocompatible coatings. Polymerization involves the creation of organic materials through the reaction of monomers, resulting in polymers with desired properties. CVD entails depositing thin films of organic materials from vapor-phase precursors, ensuring uniform and controlled material growth. Self-assembly allows organic molecules to spontaneously arrange into ordered structures, which can be fine-tuned for targeted functionalities [140, 141]. Table 1 summarizes the techniques used to fabricate organic bioelectronic devices. These fabrication methods provide versatility in designing organic bioelectronic materials with unique characteristics, such as high sensitivity, flexibility, and biocompatibility. By leveraging organic thin-film deposition and organic synthesis techniques, researchers can engineer materials tailored to the requirements of biosensing, medical diagnostics, and wearable health monitoring applications, among others. Continued advancements in organic bioelectronic material fabrication hold great potential in revolutionizing the landscape of bioelectronics and contributing to breakthroughs in medical technologies and personalized healthcare.

Figure 6: Schematic diagram of various fabrication methods: **(a)** spin-coating process; **(b)** photolithography; **(c)** dip-pen nanolithography (DPN); **(d)** micro contact printing (\(\mu\)CP); **(e)** direct ink writing (DIW).

## 5 Biosensing Mechanisms

A typical biosensor comprises several fundamental components: the target analytes, receptors or biorecognition elements, a transducer, and output systems [172; 173]. The target analyte is the specific substance under investigation, such as glucose, ammonia, alcohol, or lactose. Bioreceptors are biomolecules or biological entities capable of recognizing and binding to the target analyte. Examples of biorecognition components include enzymes, cells, aptamers, DNA/RNA strands, and antibodies.
The role of the transducer is to convert the biorecognition event into a measurable signal, typically in the form of an electrical signal, which correlates with the quantity or presence of the chemical or biological target. This conversion process is known as signalization. Transducers generate optical or electrical signals directly corresponding to the interactions between analytes and bioreceptors. Finally, output systems encompass signal processing, amplification, and display units, facilitating the interpretation and presentation of the biosensor's results. Figure 7 illustrates the components of a typical biosensor.

\begin{table} \begin{tabular}{l l l} \hline \hline **Fabrication technique** & **Material** & **References** \\ \hline Spin coating & 2D crystalline film from 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT), PDMS, organic semiconductor films, PEDOT:PSS & [142; 143; 144; 145] \\ Photolithography & PEDOT:PSS, OLED & [121; 146; 147; 148] \\ E-beam lithography & PPy, poly(chloro-p-xylylene) (Parylene C), biomolecules & [149; 150; 151; 152] \\ Dip-pen nanolithography & sulfonated polyaniline (SPAN), PPy, PEDOT, ferroelectric copolymer poly(vinylidene fluoride–trifluoroethylene) & [153; 154] \\ Inkjet printing & PEDOT:PSS, PPy & \\ Micro contact printing & PPy, PEDOT, proteins, ultrathin gate dielectrics, alkyl and fluoroalkylphosphonic acid & [125; 126; 127; 155] \\ Laser writing & PEDOT, PANI, laser-induced porous graphene & [156; 157; 158; 159; 160] \\ Direct ink writing & PEDOT:PSS, PEDOT:PSS-PEO, holey graphene oxide (HGO), eutectic gallium–indium (EGaIn)-based liquid metal embedded elastomers, AgNPs, MWCNT, rGO/CNT, silicone & [161; 162; 163; 164; 165; 166; 167; 168; 169] \\ Chemical vapor deposition & Poly(p-xylylene), PEDOT & [170; 171] \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of fabrication techniques for organic electronics.

### Electrochemical Sensing

Electrochemical sensing is a powerful mechanism utilized in organic bioelectronics for detecting and quantifying various biomolecules and chemical species. This sensing platform measures electrical signals generated during electrochemical reactions at the interface between the organic material and the target analyte. Organic electrochemical sensors offer high sensitivity, rapid response times, and excellent selectivity, making them valuable tools for medical diagnostics, environmental monitoring, and point-of-care testing. The fundamental principle behind electrochemical sensing in organic bioelectronics lies in the redox properties of organic materials, which can undergo reversible electron transfer reactions [174; 175]. These redox-active organic materials, such as conducting polymers, redox enzymes, or organic nanoparticles, are integrated into the sensing platform to act as the transducer element. Electrochemical sensing involves two main components: the electrode and the redox reaction with the target analyte. The sensing platform typically comprises working (or indicator), reference, and counter electrodes (in some cases, a two-electrode system can be used for electrochemical sensing) [176; 177]. The working electrode (WE) is coated with the redox-active organic material, where the electrochemical reaction with the target analyte occurs. The reference electrode (RE) maintains a constant potential against which the working electrode's potential is measured. The counter electrode (CE) completes the
electrical circuit and balances the current flow during the electrochemical reaction. When the target analyte comes into contact with the redox-active organic material on the working electrode, it induces an electrochemical reaction. The redox-active organic material is reversibly oxidized or reduced, transferring electrons between the analyte and the electrode surface. This electron transfer generates an electrical signal, such as a current or potential, which is proportional to the concentration of the target analyte. Different electrochemical sensing modalities employed in organic bioelectronics include: **Amperometric Sensing**: Amperometric biosensors are a type of electrochemical biosensor used for quantitatively detecting and analyzing biological analytes. These biosensors rely on the measurement of current generated from an electrochemical redox reaction at the sensor's working electrode surface when the target analyte interacts with a biorecognition element (such as enzymes, antibodies, or nucleic acids) immobilized on the electrode. The basic setup of an amperometric biosensor typically consists of three main components: the working electrode, the reference electrode, and the counter electrode. The biorecognition element is immobilized in the working electrode, and the redox reaction occurs upon the target analyte's binding. The reference electrode maintains a constant potential, while the counter electrode completes the electrical circuit, allowing the flow of electrons during the redox reaction. When the target analyte binds to the biorecognition element on the working electrode's surface, it triggers the redox reaction, producing or consuming electroactive species (e.g., hydrogen peroxide or oxygen). The current generated from this redox reaction is directly proportional to the concentration of the target analyte in the sample. As the concentration of the analyte changes, the current also varies, providing quantitative information about the analyte concentration. Figure 8(a) shows the schematics of amperometric-based biosensors. Figure 7: Schematic illustration of key components of a typical biosensor. **Voltammetric biosensing**: Voltammetric biosensors are a type of electrochemical biosensor that relies on the measurement of current as a function of an applied voltage or potential at the sensor's working electrode. These biosensors use the principles of voltammetry to detect and quantify the target analyte in a sample. The basic setup of a voltammetric biosensor includes a working electrode coated with a biorecognition element, a reference electrode, and a counter electrode. When an increasing or decreasing voltage is applied to the working electrode, a redox reaction occurs at the electrode surface, involving the oxidation and reduction of electroactive species. In the presence of the target analyte, the biorecognition element at the working electrode surface interacts with the analyte, leading to changes in the redox reaction of the electroactive species. These changes result in variations in the current measured at the working electrode, which can be correlated with the concentration of the target analyte in the sample. **Potentiometric Sensing**: Potentiometric biosensors are a type of electrochemical biosensor used for the quantitative detection and analysis of biological analytes. 
Unlike amperometric biosensors that measure the current generated from a redox reaction, potentiometric biosensors rely on measuring potential or voltage changes at the sensor's working electrode surface when the target analyte interacts with a biorecognition element. The basic setup of a potentiometric biosensor includes a working electrode and a reference electrode (Figure 8b) [178]. The working electrode is coated with a biorecognition element, such as enzymes, antibodies, or nucleic acids, which interacts with the target analyte in the sample. The reference electrode maintains a constant potential, serving as a reference point to measure the potential changes at the working electrode. When the target analyte binds to the biorecognition element on the working electrode's surface, it changes the local charge distribution, resulting in a potential difference. This potential change is directly related to the concentration of the target analyte in the sample. Potentiometric biosensors offer several advantages, including high specificity, label-free detection, and simple instrumentation [179]. They are particularly suitable for measuring ion concentrations, pH levels, and other analytes directly affecting local charge distribution. **Impedimetric Sensing**: Impedimetric biosensors are a type of electrochemical biosensor that measures the electrical impedance or resistance changes at the sensor's working electrode surface in response to the interaction between a biorecognition element and the target analyte (Figure 8c). This label-free and real-time detection method is highly sensitive and enables the study of various biomolecular interactions, making it valuable in biosensing applications. The basic setup of an impedimetric biosensor includes a working electrode coated (or functionalized) with a biorecognition element (such as antibodies, enzymes, or DNA probes), a reference electrode, and a counter electrode. When the target analyte (e.g., antigen, enzyme substrate, or complementary DNA strand) binds to the biomolecules, it causes a change in the dielectric properties or the electrical double layer at the electrode surface. When an AC signal is applied to the working electrode, the impedance of the sensor changes due to the binding events between the biorecognition element and the target analyte. These changes in impedance are then measured and correlated with the concentration of the target analyte in the sample. Impedance-based biosensors can be classified into two main types: capacitive and conductive. Capacitive impedance biosensors rely on changes in the dielectric properties of the interface between the sensing element and the target analyte. When the analyte binds to the immobilized biomolecules, it alters the dielectric constant and thickness of the insulating layer, leading to changes in the electrode's capacitance. These changes are then measured and related to the concentration of the analyte. Conductive impedance biosensors work based on changes in the electrical resistance at the electrode interface. The binding of the analyte to the sensing element causes changes in the electrical properties of the surface layer, leading to variations in resistance. These changes are measured to quantify the analyte concentration. Impedimetric biosensors offer several advantages, including label-free detection, high sensitivity, and real-time monitoring capabilities. 
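As an illustration of how such measurements are typically interpreted, the short sketch below evaluates the impedance spectrum of a simplified Randles-type equivalent circuit: a solution resistance in series with a charge-transfer resistance and double-layer capacitance in parallel, broadly corresponding to the Rsol and Cdl elements named in Figure 8(c). All component values are illustrative assumptions rather than data from the cited works.

```python
import numpy as np

def randles_impedance(freq_hz, r_sol=100.0, r_ct=10e3, c_dl=1e-6):
    """Complex impedance of R_sol in series with (R_ct parallel to C_dl)."""
    omega = 2 * np.pi * freq_hz
    z_cdl = 1.0 / (1j * omega * c_dl)               # double-layer capacitance branch
    return r_sol + (r_ct * z_cdl) / (r_ct + z_cdl)  # series resistance + parallel branch

# Analyte binding is commonly modelled as an increase in the
# charge-transfer resistance R_ct at the electrode interface.
freqs = np.logspace(-1, 5, 7)                       # 0.1 Hz to 100 kHz
for label, r_ct in [("before binding", 10e3), ("after binding", 25e3)]:
    magnitude = np.abs(randles_impedance(freqs, r_ct=r_ct))
    print(label, np.round(magnitude, 1))
```

Fitting measured spectra to such a model and tracking the change in the fitted charge-transfer resistance (or capacitance) against analyte concentration is a common way of turning raw impedance data into a calibration curve for impedimetric biosensors.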
They are particularly suitable for detecting biomolecular interactions, such as antigen-antibody binding, enzyme-substrate reactions, and nucleic acid hybridization. Impedimetric biosensors are versatile and can detect various analytes, including proteins, nucleic acids, and small molecules. Although highly sensitive, Impedance-based biosensors may encounter challenges related to specificity, as they could exhibit cross-reactivity with similar molecules. Careful calibration is essential due to the influence of surface effects on impedance measurements. Additionally, complex sample matrices, such as blood or soil, might interfere with impedance measurements, potentially impacting result accuracy. Addressing these issues is crucial to ensure the reliability and applicability of impedance-based biosensors in various scientific and biomedical applications. ### Optical Sensing Optical sensing utilizes the interaction between light and organic materials to detect and quantify biological or chemical analytes. These sensing platforms employ organic materials, such as organic semiconductors, fluorescent dyes, or organic nanoparticles, integrated into photonic or optoelectronic devices to facilitate sensitive and selective detection of target molecules. The fundamental principle behind optical sensing in organic bioelectronics relies on the optical properties of the organic materials, which can absorb, emit, or scatter light in response to changes in their environment. Figure 8: Schematics configuration of different types of electrochemical sensors. **(a)** amperometric/voltammetric biosensor, **(b)** potentiometric biosensor, **(c)** impedimetric biosensor (Cdl = double-layer capacitance of the electrodes, Rsol = resistance of the solution, Cde = capacitance of the electrode, Zcell = impedance introduced by the bound nanoparticles). Adapted from Naresh and Lee [173], _Sensors, 2021, 21, 4, 1109_, ©2021 MDPI. Within the realm of optical biosensors, various types have been developed, each catering to specific applications and detection requirements [180, 181]. Surface plasmon resonance (SPR) biosensors, one of the most well-known optical biosensors, rely on the principle of plasmon resonance, which occurs when light interacts with the collective oscillations of electrons on a metal surface [182]. Changes in refractive index due to binding events on the sensor surface lead to alterations in the resonance angle, enabling label-free and real-time detection of molecular interactions. Figure 9(a) shows the schematic of the SPR-based biosensor. SPR biosensors find applications in drug discovery, medical diagnostics, and environmental monitoring [183]. Surface-enhanced Raman scattering (SERS) biosensors leverage the enhancement of Raman scattering signals when molecules are adsorbed on roughened metal surfaces. Molecules adsorbed on these surfaces generate unique Raman spectra, enabling molecular identification and quantification (Figure 9b). SERS stands out as an exceptionally sensitive method for identifying low-concentration molecules. It excels in detecting various substances, such as DNA, microRNA, proteins, blood components, and bacteria. Furthermore, it facilitates the detection and characterization of individual cells, aids in bioimaging, and plays a pivotal role in diagnosing various diseases. Its unique ability to offer extensive structural insights into biological analytes adds significant value to the field of analytical science and diagnostics [184]. 
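Returning to the SPR principle outlined above: for the widely used Kretschmann prism-coupling geometry, resonance occurs when the in-plane wave vector of the incident light matches that of the surface plasmon. As a standard textbook relation (quoted here for orientation, not taken from the cited works), this condition can be written as

\[
\frac{2\pi}{\lambda}\,n_{\mathrm{prism}}\sin\theta_{\mathrm{SPR}}
= \frac{2\pi}{\lambda}\sqrt{\frac{\varepsilon_{m}\,\varepsilon_{d}}{\varepsilon_{m}+\varepsilon_{d}}},
\]

where \(\varepsilon_{m}\) is the permittivity of the metal film and \(\varepsilon_{d}\approx n_{d}^{2}\) that of the dielectric medium in contact with it. Because analyte binding raises the local refractive index \(n_{d}\), the right-hand side increases and the resonance angle \(\theta_{\mathrm{SPR}}\) shifts; this angular shift is the quantity tracked in label-free SPR sensing.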
Fluorescence is a widely used optical phenomenon for biosensing [185]. In fluorescence-based optical sensing, organic fluorescent dyes or fluorophores are used as the sensing elements. When excited with a specific wavelength of light, these fluorescent molecules absorb energy and become excited to higher energy states. Subsequently, they release this excess energy through fluorescence emission at a longer wavelength. The intensity of the emitted fluorescence signal is directly proportional to the concentration of the target analyte, enabling quantitative detection. Fluorescence-based organic bioelectronic sensors offer high sensitivity and excellent selectivity, making them valuable tools in molecular imaging, cellular assays, DNA sequencing, protein-protein interaction studies, and diagnostic applications. Photonic crystal optical biosensors harness the unique properties of photonic crystals to enable sensitive and specific detection of biomolecular interactions [186]. These biosensors operate on the principle of modifying the transmission or reflection of light when target molecules bind to the sensor surface. Photonic crystals are engineered materials with periodic structures that create band gaps in the electromagnetic spectrum (Figure 9c). These band gaps prevent the propagation of certain wavelengths of light, resulting in specific optical properties. When biomolecules bind to the sensor surface, they cause changes in the refractive index or the dielectric environment. This perturbation affects the photonic band gap, leading to light transmission or reflection alterations. These shifts are then used to quantify the presence or concentration of the target analyte. Interferometric biosensors utilize the interference patterns generated when light waves interact. By measuring changes in phase or intensity, these sensors detect biomolecular interactions. Fabry-Perot interferometers and Mach-Zehnder interferometers (see Figure 9d) are commonly used in this category. A Fabry-Perot interferometer exploits multiple-beam interference within a resonant optical cavity to precisely measure the wavelengths of light. It consists of two parallel mirrors with a small separation distance, creating a resonant cavity. When light is introduced into the cavity, it reflects repeatedly between these mirrors, leading to constructive and destructive interference between the multiple reflected beams. Constructive interference enhances the intensity of light at specific wavelengths, while destructive interference reduces it at others, producing a pattern of interference fringes. By analyzing these fringes and their variations, Fabry-Perot interferometers can be used to determine the wavelengths of light and facilitate high-resolution spectral analysis. Mach-Zehnder interferometers are typically used in integrated optical biosensors. They consist of two parallel waveguides; one is exposed to the sample, and the other serves as a reference. Biomolecular interactions on the sample waveguide cause changes in optical path length, leading to interference patterns that can be used to quantify the interactions. Interferometric biosensors have applications in medical diagnostics and environmental monitoring. Optical fiber biosensors employ optical fibers as a core component for detecting and quantifying biological or chemical substances. These sensors are characterized by their capacity to harness light transmission through optical fibers for sensitive and real-time detection. 
The basic operation typically involves a recognition element, such as antibodies, enzymes, or other bioactive molecules, immobilized on the fiber's surface. When the target analyte binds to this recognition element, it changes the fiber's optical properties, such as light intensity, wavelength, or polarization. These changes are then quantified and correlated to the concentration of the target analyte. These sensors are compact, versatile, immune to electromagnetic interference, and suitable for remote sensing. Organic bioelectronic optical sensors offer several advantages, including label-free detection, high sensitivity, rapid response times, and the potential for miniaturization and integration with other electronic components. As organic bioelectronics advances, further research and development of novel organic materials and innovative sensing platforms are expected to drive progress in optical sensing and its applications in various scientific and technological domains.

Figure 9: Schematic diagrams of optical biosensors. **(a)** Surface plasmon resonance (SPR) biosensor; **(b)** Surface-enhanced Raman scattering (SERS) biosensor; **(c)** Illustration of the sensing mechanism of a photonic crystal (PC) biosensor. Adapted from Chen et al. [187], _Biosensors 2020, 10, 12, 209_, ©2020 MDPI; **(d)** optical waveguide (Mach–Zehnder) interferometer biosensor, adapted with permission from Kozma et al. [188], _Biosens. Bioelectron., 2014, 58, 287-307_, ©2014 Elsevier B.V.

### Piezoelectric Sensing

Piezoelectric biosensing is a powerful and versatile real-time mechanism to detect and quantify biomolecular interactions. This sensing mechanism leverages the piezoelectric effect of certain materials, such as quartz or piezoelectric polymers, to transduce biomolecular binding events into measurable electrical signals. These mass-based biosensors are widely used in biomedical research, diagnostics, and pharmaceutical development due to their label-free, sensitive, and rapid detection capabilities. The fundamental principle behind piezoelectric biosensing lies in the piezoelectric materials' ability to convert mechanical stress into electrical signals. The biosensing platform typically consists of a piezoelectric transducer, such as a quartz crystal microbalance (QCM) or a piezoelectric polymer-coated cantilever, functionalized with specific biorecognition elements [189, 190]. These biorecognition elements, such as antibodies, DNA, or enzymes, are carefully immobilized on the surface of the piezoelectric material. When the biosensing platform comes into contact with a biological sample, such as a solution containing biomolecules of interest (e.g., proteins, DNA, or antigens), the biorecognition elements interact selectively with the target biomolecules. This interaction leads to the formation of biomolecular complexes, causing an increase in the mass or stiffness of the layer attached to the piezoelectric material. As the biomolecular complexes form, the mechanical stress on the piezoelectric material changes, inducing a shift in the resonant frequency of the piezoelectric transducer [191]. This frequency shift is directly proportional to the mass or stiffness change on the transducer's surface and is known as the resonance frequency shift. The piezoelectric material converts this mechanical deformation into an electrical signal, generating a characteristic impedance change or charge distribution on the electrode surfaces.
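For QCM-type transducers, this proportionality between added mass and frequency shift is commonly quantified with the Sauerbrey equation, a standard approximation valid for thin, rigid, uniformly distributed adlayers (quoted here for orientation rather than taken from the cited works):

\[
\Delta f = -\,\frac{2 f_{0}^{2}}{A\sqrt{\rho_{q}\mu_{q}}}\,\Delta m,
\]

where \(f_{0}\) is the fundamental resonance frequency, \(A\) the active electrode area, and \(\rho_{q}\) and \(\mu_{q}\) the density and shear modulus of quartz; the negative sign reflects that added mass lowers the resonance frequency.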
Figure 10 shows the basic concept of piezoelectric sensor-based virus detection. The interaction between the biorecognition elements and target biomolecules can be quantified and analyzed by monitoring the real-time frequency shift or electrical signals. This label-free detection approach directly measures biomolecular interactions without fluorescent or radioactive labels, which can alter biomolecules' behavior and affect the measurements' accuracy. Piezoelectric biosensors offer several advantages in bioanalytical applications, including high sensitivity, real-time monitoring, label-free sensing, multiplexing (enabling simultaneous detection of multiple target biomolecules in a single experiment), and low sample-volume requirements, making them suitable for analyzing limited or precious samples.

Figure 10: **(a)** Basic concept of target antigen detection mechanism using piezoelectric biosensing, **(b)** schematics of voltage vs. time, and **(c)** amplitude vs. frequency plots during detection.

## 6 Biosensing Applications

### Medical diagnostics

Organic bioelectronics has emerged as a promising technology in medical diagnostics, offering unique advantages for non-invasive and point-of-care testing. By leveraging organic materials' electrical and biological properties, organic bioelectronics facilitates the development of sensitive, portable, and cost-effective diagnostic devices [192; 193; 194]. Organic bioelectronic biosensors have opened up new possibilities in disease biomarker detection, enabling the identification of specific biomolecules in biological fluids like blood, saliva, and urine [195, 196]. These biosensors can be customized to detect disease-related biomarkers associated with conditions such as cancer, cardiovascular disorders, and infectious diseases, facilitating early diagnosis and timely intervention. In the realm of diagnostics, organic bioelectronics plays a central role in the miniaturization of diagnostic platforms, giving rise to lab-on-a-chip (LOC) devices [197, 198]. LOC diagnostics offer rapid and multiplexed testing with minimal sample volume requirements, making them ideal for point-of-care settings and reducing the strain on centralized healthcare facilities. The use of organic bioelectronics extends to electrochemical and electronic immunoassays, providing highly sensitive and specific detection of antigens and antibodies. These assays allow for precise quantification of disease-related molecules, supporting accurate diagnosis and monitoring of disease progression. Nucleic acid analysis is another application of organic bioelectronics, enabling the detection of DNA and RNA sequences associated with genetic disorders and infectious agents [199, 200]. This technology is essential for genetic screening, personalized medicine, and pathogen identification. In medical imaging, organic bioelectronics has shown promise in developing imaging probes and contrast agents, enhancing the resolution and sensitivity of imaging techniques like magnetic resonance imaging (MRI) [201, 202]. Additionally, organic bioelectronics has contributed to advancing microfluidic systems for cell analysis, enabling cell sorting, counting, and characterizing cellular responses to external stimuli [203, 204, 205]. These systems have diverse applications in cancer diagnostics, drug screening, and stem cell research.
Furthermore, the potential for smart drug delivery systems arises from organic bioelectronics, allowing for targeted drug delivery that responds to specific biological signals or conditions, enhancing drug efficiency while minimizing side effects. Also, the portability and affordability of organic bioelectronic devices have made them a viable option for point-of-care diagnostics in resource-limited settings, offering timely and reliable medical testing in underserved regions. Altogether, organic bioelectronics is proving to be a transformative technology in medical and environmental applications, contributing to improved healthcare, diagnostics, and research endeavors. Figure 11 displays diverse applications of organic bioelectronics in the field of medical diagnostics. Deng et al. [206] introduced a wireless, flexible, and highly sensitive biosensor employing organic electrochemical transistors (OECTs) for continuous and wireless nitric oxide (NO) detection within biological systems. Their OECT device, depicted in Figure 11, incorporated a PEDOT:PSS channel, gold (Au) thin film electrodes (source, drain, and gate), a poly-5A1N-coated gate, and electrical contacts on a polyimide (PI) substrate. This sensor was successfully implanted in a rabbit for real-time NO monitoring, with data transmitted wirelessly to a mobile phone via a custom Bluetooth module. Tang et al. [207] developed a low-power organic field-effect transistor (OFET)-based biochemical sensor with high transconductance efficiency for label-free miR-21 detection, as seen in Figure 11(b). Additionally, Chen et al. [208] presented a compact wireless magnetoelectric endovascular neural stimulator illustrated in Figure 11(c), specifically designed for battery-free implants, enabling stimulation of peripheral nerves that are typically challenging to access via traditional surgical means. ### Wearable Health Monitors Organic bioelectronics has gained considerable traction as a technology for wearable health monitoring systems, offering exceptional versatility and performance. Wearable devices can seamlessly integrate into daily life by leveraging organic materials' unique properties, including flexibility, biocompatibility, and tunable electronics [210, 211]. Applying organic bioelectronic sensors allows for the continuous and non-invasive monitoring of vital signs, such as heart rate [212], blood pressure [213], respiration rate [214, 215], body temperature [216], pulse [217], glucose levels in individuals with diabetes [218], pH levels [219], and the human stress hormone cortisol [220]. Also, organic wearable bioelectronics has been widely used for chronic wound biosensing and on-demand therapy administration [221, 222]. Furthermore, organic bioelectronics enables the recording of electrocardiogram (ECG) signals for early detection of cardiac abnormalities while monitoring skin conditions, muscle activity during physical activities, sleep patterns, stress levels, and emotions, contributing to comprehensive health assessment [223, 224, 225]. These wearable systems can also track environmental factors like air quality and temperature and provide secure biometric authentication for enhanced data security. By combining diverse functionalities, organic bioelectronics empowers individuals to control their health proactively, enabling real-time remote monitoring, personalized drug delivery, and improved overall health management and outcomes [226, 227]. 
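To give a concrete sense of the signal processing behind such vital-sign monitoring, the sketch below estimates heart rate and a simple beat-to-beat variability metric from a sampled pulse-like waveform. The waveform, sampling rate, and thresholds are illustrative assumptions and are not parameters of any cited wearable device.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 250                                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / FS)                 # 10 s recording
rng = np.random.default_rng(0)

# Synthetic pulse-like trace at ~72 beats per minute, standing in for an
# ECG/PPG waveform acquired by a wearable organic sensor.
beat = np.sin(2 * np.pi * (72 / 60) * t)
signal = np.where(beat > 0, beat, 0.0) ** 4 + 0.05 * rng.standard_normal(t.size)

# One peak per beat: enforce a minimum peak height and a 0.4 s refractory gap.
peaks, _ = find_peaks(signal, height=0.5, distance=int(0.4 * FS))

rr = np.diff(peaks) / FS                     # beat-to-beat intervals (s)
print(f"estimated heart rate: {60 / rr.mean():.1f} bpm")
print(f"interval variability (SD of RR): {rr.std() * 1000:.1f} ms")
```

Real devices add filtering, artifact rejection, and calibration on top of this kind of peak-interval analysis, but the underlying principle of converting a periodic physiological waveform into rate and variability metrics is the same.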
Figure 11: **(a)** Schematic illustration of a flexible OECT Biosensor with Wireless Integration for Real-time NO Detection in an Articular Cavity. The NO sensor features a PEDOT:PSS channel, Au thin film electrodes (source, drain, gate), poly-5A1N selective membrane on the gate, and SU-8 encapsulation exposing specific regions on a PI substrate. NO-induced electrochemical reactions on the gate electrode modulate PEDOT:PSS channel doping, enabling NO sensing via current measurements. Implanted in a New Zealand White rabbit with ACL rupture, the sensor provides real-time NO monitoring, transmitting data to a mobile phone via a Bluetooth-enabled custom wireless module. Deng et al. [206], _PNAS, 2022, 119, 34, e2208060119_, ©2022 the Author(s), licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); **(b)** Photo images of the fabricated low-voltage OFET miRNA sensor on PEN substrate, and the sensor tags consisting of an encapsulated OFET and contacts for extended-gate sensing electrode and reference electrode. Reprinted from Tang et al. [207], _npj Flex. Electron., 2022, 6, 18_, licensed under a Creative Commons Attribution 4.0 International License; **(c)** Specific illustration of MagnetoElectric-powered Bio ImplanT (ME-BIT) device implanted proximally to a blood vessel deep within tissue and wirelessly powered through a magnetic coil in a pig. A rendering of the implant (_bottom left_) is shown with all the external components, including the system on a chip (SoC), external capacitor, and the ME transducer. Photograph of the fully packaged device inside a 3D-printed capsule resting in a clear sheath (_bottom right_). Reprinted with permission from Chen et al. [208], _Nat. Biomed. Eng., 2022, 6, 706-716_, licensed under a Creative Commons Attribution 4.0 license; **(d)** Nanostructured Optical Photonic Crystal Biosensor for HIV Viral Load Measurement. Reprinted with permission from Shafiee et al. [209], _Sci. Rep., 2014, 4, 4116_, licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

As research continues, further advancements in organic bioelectronics promise to revolutionize wearable health monitoring technology and its potential impact on healthcare. Figure 12 exemplifies applications of organic bioelectronics in wearable health monitoring and neuromodulation. Seesaard and Wongchoosuk [228] introduced a fabric-based piezoresistive force sensor array composed of a Ti\({}_{3}\)AlC\({}_{2}\)/PEDOT:PSS nanocomposite with ultrahigh sensitivity (up to 1.51 N\({}^{-1}\)) suitable for wearable E-textile applications. In another study, Mao et al. [229] developed a soft, stretchable photodiode with a composite light absorber and an organic bulk heterojunction within an elastic polymer matrix for reliable cardiovascular variable measurements. The developed photodiode effectively monitors variables such as heart rate variability and oxygen saturation over extended periods. Fan et al. [230] fabricated flexible wearable pressure sensors using free-standing conductive nickel metal-organic framework nanowire arrays on carbon cloth. The developed sensor could monitor human activities, including elbow, knee, and wrist bending, as illustrated in Figure 12(c). Yang et al. [231] designed a flexible piezoresistive sensor with a hierarchical polyaniline/polyvinylidene fluoride nanofiber film for monitoring physiological signals and movement (see Figure 12(d)).
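For context on figures such as the 1.51 N\({}^{-1}\) sensitivity quoted above, piezoresistive sensor sensitivity is conventionally reported as the slope of the relative signal change against the applied force (or pressure),

\[
S = \frac{\delta\!\left(\Delta R/R_{0}\right)}{\delta F}
\quad\text{or}\quad
S = \frac{\delta\!\left(\Delta I/I_{0}\right)}{\delta P},
\]

where \(R_{0}\) and \(I_{0}\) are the unloaded resistance and current; a force-referenced definition yields units of N\({}^{-1}\), while a pressure-referenced one yields kPa\({}^{-1}\). This is the general convention, and the exact definition used in each cited study may differ.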
Additionally, organic bioelectronics has found application in deep brain stimulation (DBS) for neuromodulation in movement disorders, such as Parkinson's disease, where they connect brain electrodes to neurostimulators for therapeutic purposes, as depicted in Figure 12(e). ### Environmental Monitoring Organic bioelectronics has demonstrated significant promise across diverse environmental monitoring applications due to its unique attributes, cost-effectiveness, and compatibility with biological systems. These applications achieve more efficient and sustainable monitoring solutions by leveraging organic electronic devices. Key areas of organic bioelectronics application in environmental monitoring include water quality management and monitoring, enabling real-time detection of various pollutants in water bodies; air quality monitoring to track air pollution levels continuously; and soil health assessment, aiding precision agriculture. In water quality management, organic bioelectronics is crucial in detecting and quantifying water pollutants such as heavy metals, organic compounds, and microorganisms [233, 234]. Organic bioelectronic sensors offer high sensitivity and selectivity, enabling real-time water quality monitoring in lakes, rivers, and wastewater treatment facilities [235, 236]. These sensors can help identify contamination sources, assess the effectiveness of water treatment processes, and ensure compliance with regulatory standards, contributing to preserving water resources and safeguarding aquatic ecosystems. Similarly, organic bioelectronic sensors can assess essential parameters such as nutrient levels, pH, moisture content, and contaminants in soil quality monitoring [237, 238]. Continuous soil monitoring using these sensors aids in precision agriculture, optimizing fertilizer usage, improving crop yield, and preventing soil degradation. By providing accurate and timely data on soil health, organic bioelectronics supports sustainable land management practices and agriculture waste management and promotes soil conservation [239]. Additionally, organic bioelectronics is utilized for gas sensing, including greenhouse gases and harmful substances, which is critical for climate change studies and emissions control. For example, Alizadeh and colleagues introduced a molecularly imprinted polymer (MIP)-based electrochemical sensor designed to detect 2,4,6-trinitrotoluene (TNT) in environmental samples such as natural waters and soil [240]. The sensor operates using electrochemical principles, where the interaction between the imprinted polymer and TNT molecules leads to changes in the sensor's electrical properties. Electrochemical techniques can measure and quantify this interaction, offering a sensitive and reliable means of detecting TNT. Moreover, integrating organic bioelectronics into wearable devices enables individuals to monitor personal exposure to environmental pollutants and allergens, facilitating informed decisions to minimize exposure risks. These sensors' lightweight and portable nature also makes them ideal for monitoring environmental parameters in remote and challenging-to-access areas, valuable for ecological studies and conservation efforts. The ability to network organic bioelectronic devices creates real-time large-scale environmental monitoring networks, contributing to predictive modeling, early warning systems, and informed environmental management decisions. 
Moreover, organic bioelectronic biosensors offer rapid and precise detection and quantification of water and soil contaminants, including pesticides and heavy metals, aiding analytical assessments of environmental samples. Applying organic bioelectronics in environmental monitoring demonstrates its potential to enhance environmental sustainability, advance ecological understanding, and drive effective decision-making in various domains. Figure 13 visually illustrates various organic bioelectronics sensors tailored for environmental monitoring applications. For example, Han et al. [241] introduced a highly efficient ammonia gas sensor by combining an organic field-effect transistor (OFET) with a ZnO/PMMA hybrid dielectric through a simple blending process. This sensor exhibited remarkable sensitivity across a wide range of NH\({}_{3}\) concentrations, from 25 ppm to 250 ppm, as observed in Figure 13(a) through the time-dependent changes in the drain-source current following multiple NH\({}_{3}\) exposure and evacuation cycles. In a separate study, Mathur et al. [243] fabricated CuMoO\({}_{4}\) nanorods to create an acetone chemiresistor, enabling non-invasive breath-based diabetes diagnosis and environmental monitoring (depicted in Figure 13(b)). Khan et al. [242] utilized a cellulose fiber and graphene oxide matrix to develop humidity sensors suitable for both environmental humidity monitoring and human respiration detection, as demonstrated in Figure 13(c).

Figure 12: Examples of organic bioelectronics-based sensors for wearable health monitoring applications. **(a)** fabricated piezoresistive force sensor array based on Ti\({}_{3}\)AlC\({}_{2}\)/PEDOT:PSS nanocomposite for wearable E-textile applications. Reprinted with permission from Seesaard and Wongchoosuk [228], _Org. Electron., 2023, 122, 106894_, ©2023 Elsevier B.V.; **(b)** photographs of a stretchable photodiode made of a composite light absorber (P3HT:PCBM:SIS = 1:1:5) on a PDMS substrate before and after being stretched to about 25% strain. Adapted with permission from Mao et al. [229], _ACS Appl. Mater. Interfaces, 2023, 15, 28, 33797-33808_, ©2023 American Chemical Society; **(c)** illustration depicting nickel-based metal-organic framework (MOF) nanowires employed as dual-purpose electrodes in wearable pressure sensor technology. Reprinted with permission from Fan et al. [230], _Science 2023, 26, 8, 107397_, ©2023 The Author(s); **(d)** hierarchically microstructure-bioinspired flexible piezoresistive sensor for human-machine interaction and human health monitoring. The sensor incorporates a hierarchical polyaniline/polyvinylidene fluoride nanofiber (HPPNF) film positioned between two interlocking electrodes featuring a microdome structure. Reprinted with permission from Yang et al. [231], _ACS Nano 2021, 15, 7, 11555-11563_, ©2021 American Chemical Society; **(e)** Schematic diagram for clinical application of a deep brain stimulation (DBS) system: The brain electrode delivers therapeutic electrical currents, while the extension lead links it to the neurostimulator (internal pulse generator, IPG), which serves as the implanted power source. Reprinted with permission from Jacobs et al. [232], _EMBO Molecular Medicine, 2019, 11, e9575_, ©2019 The Author(s), published under the terms of the CC BY 4.0 license.

Figure 13: **(a)** Schematic structure of OFET biosensor for ammonia gas sensing (_left_). In this sensor, poly(methyl methacrylate) (PMMA) blended with zinc oxide (ZnO) nanoparticles is used as a gate dielectric layer.
Response curves (_right_) of devices A (ZnO/PMMA hybrid dielectric) exposure to NH\({}_{3}\) in higher concentrations (25–250 ppm). Reprinted with permission from Han et al. [241], _Sens. Actuators B: Chem., 2014, 203, 9-16_, ©2014 Elsevier B.V.; **(b)** schematic representation of the CuMoO\({}_{4}\) nanorods-based acetone sensing measurement setup (left) and non-invasive breathomic-diagnosis of human diabetes and environmental monitoring strategy (right). Reprinted with permission from Mathur et al. [243], _Environ. Res., 2023, 229,115931_, ©2023 Elsevier Inc.; **(c)** biocompatible paper cellulose fiber graphene oxide matrix-based humidity sensors for human health and environment monitoring. Reprinted with permission from Khan et al. [242], _Sens. Actuators B: Chem.,2023,393,134188_, ©2023 Elsevier B.V.. ### Food Safety and Quality Control Organic bioelectronics has emerged as a promising technology for food safety and quality control applications [244, 245, 246, 247]. Its unique properties, including biocompatibility and sensitivity to biological molecules, make it well-suited for detecting contaminants, spoilage, and quality indicators in food products. Key applications include detecting food contaminants like pesticides and pathogens, monitoring food spoilage, assessing food quality indicators, and detecting allergens. Organic bioelectronics allows for real-time monitoring of food production processes and on-site testing, contributing to consistent quality and safety. Additionally, it can be integrated into smart packaging to monitor food quality during storage and transportation. This technology aids in verifying food authenticity, detecting adulteration, and ensuring agricultural production safety by monitoring pesticide residues on crops. Embracing organic bioelectronics in food safety and quality control enhances consumer protection, reduces food waste, and strengthens food safety regulations. Figure 14 illustrates the diverse applications of biosensors in food safety and quality control. Sharova et al. [248] introduced a low-voltage edible electronic circuit, serving as an invaluable testbed for exploring non-toxic printable semiconductors within the domains of edible and bioelectronic technologies. Their work, presented in Figure 14(a), showcased successful inkjet printing of water-based gold ink on both traditional and edible substrates, achieving exceptional precision with critical lateral features as small as 10 \(\mu\)m. Furthermore, they demonstrated the fabrication of chitosan-gated complementary n- and p-type transistors and logic circuits, including inverting logic gates, all operating at low voltages (<1 V) on flexible edible ethyl cellulose substrates. These devices exhibited promising electronic performance characteristics, such as high mobility-capacitance product, impressive on-off current ratios, operational stability in ambient air, and a shelf life of up to one month. These devices' compact, flexible nature allows for seamless integration into edible carriers, such as pharmaceutical capsules. In a separate study, Ding et al. [249] introduced a hydrogel containing silver-doped Prussian blue nanoparticles (SPB NPs) for the detection of trimethylamine (TMA) and the real-time monitoring of shrimp and fish freshness, as depicted in Figure 14(b). Additionally, Luo et al. [250] explored using carbon dots anchored to ferrocene metal-organic framework nanosheets for the multi-mode sensing of glyphosate, a herbicide. In another application, Chen et al. 
[251] employed a DNA hydrogel fishing network for the ultrasensitive detection of the antibacterial agent kanamycin. These diverse applications underscore biosensors' remarkable versatility and potential in enhancing food safety and quality control.

Figure 14: **(a)** Characterization of inkjet-printed gold electrodes on various conventional and edible substrates. _a-d_: depiction of gold interdigitated electrodes inkjet-printed on diverse substrates: poly(ethylene 2,6-naphthalate) (PEN), glass, edible ethyl cellulose biopolymer (food additive E462), and edible tattoo paper. _g,h_: visual representations of gold electrodes transferred onto distinct surfaces: (top) peach, apple, and (bottom) fingertip. Reprinted with permission from Sharova et al. [248], _Nanoscale, 2023, 15, 10808-10819_, ©2023 The Royal Society of Chemistry; **(b)** schematic representation of colorimetric and photothermal assessment of shrimp and fish freshness utilizing a portable silver-doped Prussian blue nanoparticles (SPB NPs) hydrogel, facilitated by a smartphone and handheld thermal imager. Reprinted with permission from Ding et al. [249], _Sens. Actuators B: Chem., 2022, 363, 131811_, ©2022 Elsevier B.V.; **(c)** carbon dots anchoring ferrocene metal-organic framework nanosheet for multi-mode glyphosate (e.g., herbicide) sensing. Reprinted with permission from Luo et al. [250], _J. Hazard. Mater., 2023, 443, 130277_, ©2022 Elsevier B.V.; **(d)** schematic illustration of SERS aptasensor based on DNA hydrogel fishing network for ultrasensitive detection of antibacterial kanamycin (KANA). Reprinted with permission from Chen et al. [251], _Biosens. Bioelectron., 2022, 207, 114187_, ©2022 Elsevier B.V.

## 7 Challenges and Future Perspectives

### Stability and Longevity

Stability and longevity are paramount considerations in applying organic electronics in biosensing, given their unique properties, such as flexibility and biocompatibility [252]. However, certain challenges contribute to potential degradation and performance fluctuations over time. First, organic materials are susceptible to environmental factors like moisture, oxygen, and temperature variations, leading to material degradation and subsequent changes in electrical properties, diminishing sensor performance [253]. Second, ensuring long-term biocompatibility when these devices interact with biological samples is critical to avoiding adverse reactions and preserving reliable sensing capabilities [254]. Third, the stability of the interface between the organic material and biomolecules significantly impacts biosensor performance, with changes from material degradation or biofouling affecting sensitivity and selectivity [255]. In wearable or implantable devices, the organic materials must endure mechanical stress without functional compromise, and mechanical strain may cause cracks or delamination, jeopardizing stability and longevity [256]. Additionally, variations in performance over time due to charge trapping, ion migration, and relaxation processes can lead to sensor response drift, hampering accuracy. Sensitivity to chemicals and solvents can also affect stability and performance, a critical concern in real-world applications where chemical exposure is expected. Moreover, organic materials may experience photochemical degradation when exposed to light, especially UV radiation, impacting their electrical properties and sensor performance [257]. Finally, achieving manufacturing consistency and uniformity in organic electronic devices presents challenges, as variations in fabrication processes may lead to device-to-device performance differences, influencing reproducibility and reliability. Addressing these stability and longevity concerns is essential for enhancing organic electronics' long-term viability and effectiveness in biosensing applications. Several strategies can be employed to address the stability and longevity issues associated with organic electronics in biosensing. These include careful material selection, implementing encapsulation techniques and barrier layers to protect the devices from environmental factors, optimizing device design for mechanical robustness, and performing rigorous testing and validation under relevant environmental conditions. Additionally, surface modifications and integrated control systems can enhance organic biosensors' stability and operational performance. Overall, addressing these challenges will pave the way for the successful integration of organic electronics into cutting-edge biosensing technologies, enabling advancements in medical diagnostics, environmental monitoring, and other critical applications.

### Biocompatibility, Biofouling, and Cross-sensitivity

Biocompatibility and biofouling pose significant challenges when utilizing organic electronics in biosensing applications. Despite the advantages of organic materials, such as flexibility and tunable properties, ensuring their compatibility with biological systems and mitigating the impact of biofouling is critical for reliable and long-term biosensor performance. In the realm of biocompatibility challenges, implantable biosensors necessitate favorable interactions between organic electronic materials and surrounding tissues to avoid inflammation or immune responses that may compromise the biosensor's functionality and lifespan. Issues like cytotoxicity and impaired cell adhesion when in contact with biological fluids can disrupt stable biomolecule interactions, leading to unreliable measurements. Additionally, an inflammatory response triggered by organic materials could result in encapsulation or scarring around the biosensor, hindering target analyte diffusion and affecting sensor sensitivity [258]. Moreover, the leaching of specific molecules from organic materials into the biological environment may compromise the biosensor's accuracy and specificity. On the other hand, biofouling challenges encompass the non-specific binding of biomolecules, proteins, or cells to the biosensor surface, generating unwanted signals and reducing sensitivity [259]. Accumulation of biofilm or organic material on the sensor surface can alter the electrical properties of the organic material, leading to a decline in sensor performance over time. Moreover, biofouling can hinder the diffusion of target analytes to the sensing elements, causing delayed or inaccurate readings and impacting the biosensor's response time. Tackling these biocompatibility and biofouling challenges requires careful material selection, surface modifications, and continuous research and development of innovative strategies to ensure the successful integration of organic electronics in biosensing applications.
Furthermore, cross-sensitivity within the domain of organic bioelectronics encompasses a significant challenge whereby sensors and devices, originally engineered to discern and respond to specific target analytes, also manifest responses to unintended analytes, thereby introducing ambiguity and inaccuracies into the device's output. This pervasive issue permeates throughout the sensor and biosensor realm, including the specialized domain of organic bioelectronics. The implications of cross-sensitivity are noteworthy, encompassing potential distortions or falsifications of data, thereby diminishing the overall precision and reliability of the sensor. Several intricate factors contribute to cross-sensitivity in the context of organic bioelectronics. Firstly, material interactions stemming from the inherent properties of organic materials utilized in bioelectronic devices can predispose them to interactions with multiple analytes, exemplified by conducting polymers that may exhibit sensitivity to variances in pH, humidity, or temperature, potentially fostering cross-sensitivity unless these issues are meticulously mitigated. Secondly, the propensity for analytes with resembling properties to induce overlapping sensor responses poses a significant challenge. For instance, two distinct gases may evoke analogous alterations in electrical conductivity, thus complicating their differentiation. Thirdly, the adsorption characteristics of the sensor's surface may occasion unforeseen interactions with analytes, particularly in sensors reliant on specific binding events, such as antibody-antigen interactions. This can give rise to cross-reactivity when analytes bearing similar structural or property traits adhere to the sensor's surface. Lastly, environmental variables, including shifts in temperature, humidity, or interference from electromagnetic fields, may influence the sensor's response, potentially culminating in unwanted noise or disruptions that aggravate cross-sensitivity concerns. Cross-sensitivity challenges require meticulous consideration when designing, implementing, and employing organic bioelectronic devices, particularly in mission-critical applications such as medical diagnostics and environmental monitoring, where precision and fidelity are indispensable. Researchers and engineers have explored various strategies to address the biocompatibility and biofouling challenges associated with organic electronics in biosensing [260, 261]. Surface engineering techniques, such as functionalization with biocompatible coatings or polymers, enhance the biocompatibility of organic materials and reduce non-specific binding [262, 263]. Implementing biocompatible encapsulation materials or membranes isolates the organic electronics from direct contact with biological fluids, minimizing adverse tissue interactions. Coating the sensor surface with antifouling agents prevents the adhesion of biomolecules and reduces the impact of biofouling on sensor performance. Rigorous in vitro and in vivo biocompatibility testing is crucial to identify potential cytotoxicity or inflammatory responses early in development. Employing regeneration methods, such as chemical or enzymatic cleaning, helps restore sensor functionality and combat the effects of biofouling. Continual research and development of new organic materials with improved biocompatibility and resistance to biofouling are essential to advance organic electronics for biosensing applications. 
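Beyond materials and surface engineering, part of the burden of cross-sensitivity is routinely shifted to data processing. As a minimal sketch of that idea, assuming a roughly linear, time-invariant array response and purely hypothetical sensitivity values, a small response matrix can be calibrated against known mixtures and inverted at run time by least squares:

```python
import numpy as np

# Hypothetical calibration data: rows = sensors, columns = analytes.
# S[i, j] is the response of sensor i to a unit concentration of analyte j,
# as estimated from exposures to known reference mixtures.
S = np.array([
    [1.00, 0.15],   # sensor 1: strong response to analyte A, weak to B
    [0.20, 0.90],   # sensor 2: mostly analyte B, some cross-response to A
    [0.55, 0.50],   # sensor 3: responds to both (deliberately unselective)
])

def estimate_concentrations(readings, sensitivity=S):
    """Least-squares estimate of analyte concentrations from raw readings,
    assuming the array response is approximately linear: readings ~ S @ c."""
    c, *_ = np.linalg.lstsq(sensitivity, readings, rcond=None)
    return c

# Simulated raw readings for a sample with 2.0 units of A and 0.5 units of B,
# plus a little measurement noise.
true_c = np.array([2.0, 0.5])
rng = np.random.default_rng(0)
readings = S @ true_c + rng.normal(scale=0.02, size=3)

print("estimated concentrations:", estimate_concentrations(readings))
```

Real responses are rarely linear or stationary, so numerical corrections of this kind complement, rather than replace, the calibration and surface-functionalization strategies discussed in this section.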
Additionally, addressing cross-sensitivity in organic bioelectronics necessitates a multifaceted approach, spanning judicious material selection, intelligent surface functionalization strategies, advanced data processing techniques, and rigorous calibration measures to rectify inaccuracies arising from environmental factors. This multi-pronged strategy not only underscores the complexity of addressing cross-sensitivity in organic bioelectronics but also highlights the necessity for a holistic and integrated approach, where material science, surface engineering, advanced data analysis, and robust calibration regimes converge to mitigate the challenges posed by cross-sensitivity, ultimately contributing to the enhanced accuracy and reliability of organic bioelectronic devices. By addressing these challenges, researchers can enhance the reliability and longevity of organic electronic biosensors, paving the way for their successful integration in a wide range of biosensing applications, from medical diagnostics to environmental monitoring and beyond. ### Manufacturing Scalability Manufacturing scalability poses a critical challenge in utilizing organic electronics for biosensing applications despite the advantages of flexibility and cost-effectiveness offered by organic materials. The endeavor to achieve large-scale and reproducible manufacturing encounters several obstacles. These include maintaining material consistency to ensure uniform sensor characteristics and reliable performance, tackling challenges in scaling up deposition techniques like inkjet printing and spin-coating while preserving sensor integrity, and addressing the complexities of device integration with multiple functional layers. Moreover, managing yield and reproducibility risks, achieving cost-effectiveness, and ensuring stability and reliability in large-scale production are paramount. Robust quality control measures are indispensable for the early identification and resolution of manufacturing issues, encompassing material testing, sensor characterization, and performance validation. A reliable supply chain for high-quality organic materials is also crucial for sustained sensor performance and product reliability in the realm of organic electronics biosensing. To address the manufacturing scalability challenges related to organic electronics in biosensing, researchers and industry stakeholders are exploring various approaches. Developing innovative and scalable manufacturing techniques, such as lithography, roll-to-roll printing, and spray coating, can improve production efficiency and material utilization [264, 265]. Establishing standardized protocols and optimizing manufacturing processes can enhance yield, reproducibility, and material consistency. Integrating real-time quality control measures during manufacturing can detect deviations and ensure uniform sensor performance. Conducting rigorous, long-term stability testing under various environmental conditions is crucial to assessing sensor performance and reliability over extended periods. The widespread adoption of organic electronics in biosensing applications can be realized by tackling these obstacles, paving the way for developing cost-effective, high-performance biosensors capable of transforming healthcare, environmental monitoring, and other critical domains. ### Integration and Miniaturization Incorporating organic bioelectronics into biosensing devices poses significant challenges in integration and miniaturization. 
Although organic materials offer unique advantages like flexibility and biocompatibility, achieving seamless integration into compact and multifunctional biosensors requires overcoming various obstacles. Key issues encompass multifunctional integration to create advanced biosensors capable of detecting multiple analytes and coordinating interactions between organic electronic components. Optimizing the sensor-substrate interface when integrating onto diverse substrates is essential to avoid performance degradation [266]. Power supply and energy efficiency become crucial in miniaturized biosensors operating on limited power sources [267]. Maintaining high sensing performance and signal-to-noise ratio in shrinking biosensors is challenging due to signal interference and noise [268]. Precision in fabrication processes and high yield rates are crucial to achieving accurate dimensions and meeting demand while reducing production costs. Efficient data communication and onboard data processing are vital for real-time data transmission in miniaturized biosensors. Ensuring stability, longevity, enhanced biocompatibility, and addressing biofouling challenges are critical to maintaining reliable sensor performance over time in downsized organic bioelectronic components. Researchers and engineers have employed various strategies to address integration and miniaturization challenges related to organic bioelectronics in biosensing. State-of-the-art microfabrication techniques enable precise control over sensor dimensions and facilitate multi-component integration. Selecting suitable materials and optimizing sensor-substrate interfaces ensures compatibility and mechanical stability in miniaturized biosensors [255]. Designing low-power circuits and exploring energy-efficient strategies (e.g., self-powered sensors) extend miniaturized biosensors' battery life and autonomy [269, 270]. Implementing noise reduction techniques and signal amplification methods enhance the signal-to-noise ratio in miniaturized biosensors [271, 272, 273]. Utilizing automated manufacturing processes ensures reproducibility and precision, while robust quality control measures identify defects early in production. System-on-chip integration enables onboard data processing, reducing the need for external data handling devices [274, 275, 276]. Applying biocompatible coatings to miniaturized biosensors improves biocompatibility and reduces biofouling [277, 278, 279]. Effectively addressing these challenges empowers organic bioelectronics to pave the way for highly compact and versatile biosensors with applications ranging from wearable health monitoring to point-of-care diagnostics, thereby advancing healthcare and biosensing capabilities. ### Data Security and Privacy Data security and privacy are crucial concerns in the context of using organic electronics in biosensing applications [280, 281, 282]. With sensitive biological and health-related data being collected by these devices, maintaining the confidentiality and integrity of this information becomes paramount. Key issues include securing data transmission through robust encryption protocols, authenticating the biosensing device and its generated data to prevent tampering and unauthorized access, and ensuring secure data storage with strong encryption and access control measures. 
It is also essential to implement secure communication protocols between the biosensor and external devices or servers, anonymize and de-identify collected data to protect individual privacy, and guard against cyberattacks like malware and ransomware [283, 284]. Compliance with data protection regulations like GDPR and HIPAA is necessary, as is user awareness and education about data security best practices. A well-defined data breach response plan and proper data erasure procedures at the end of a device's life cycle are additional measures to mitigate risks and ensure the ethical use of biosensor data. As organic electronics advance in biosensing, a comprehensive approach to data protection is essential to foster trust and safeguard sensitive information.
### Future Perspectives of Organic Bioelectronics
Recent times have witnessed a revolutionary transformation in biosensor technology, achieved through synergistic integration with cutting-edge technologies such as smartphones, 3D printing, artificial intelligence, and the Internet of Things (IoT) [285]. This convergence has led to unprecedented advancements and opportunities in biosensors. By leveraging the capabilities of these emerging technologies, biosensors have become more accessible, versatile, and efficient than ever before. Smartphones now serve as portable and user-friendly interfaces for real-time data collection and analysis, making biosensing widely accessible. 3D printing has enabled the rapid prototyping and customization of biosensors, allowing tailored designs to meet specific application requirements. Artificial intelligence has empowered biosensors with advanced data processing and pattern recognition capabilities, enhancing accuracy and enabling predictive analytics. The IoT has facilitated seamless connectivity and remote monitoring of biosensors, enabling real-time data transmission and applications in remote and distributed environments. This amalgamation has opened new horizons in healthcare, environmental monitoring, food safety, and beyond, reshaping the future of biosensor applications. Moreover, the trajectory of organic bioelectronics in intelligent biosensing strategies holds immense promise due to rapid technological progress and interdisciplinary collaborations. This trajectory envisions multiple transformative directions that underline the potential evolution of intelligent biosensing using organic bioelectronics. These include the development of smart biosensing platforms that can autonomously make decisions and incorporate artificial intelligence algorithms for real-time analyte detection and quantification [286]. Additionally, there is a growing focus on sensors that can self-calibrate using internal or external reference signals to enhance accuracy and reliability [287, 288, 289]. Integrating data from different sensors employing diverse sensing modalities promises a more comprehensive understanding of sample composition. The concept of dynamic sampling, where sensors adapt their sampling rates based on detected analyte shifts, could optimize energy usage while ensuring timely detection. Furthermore, realizing interconnected sensing networks, predictive analytics, human-machine interfaces, personalized medical interventions, energy-efficient designs, and remote monitoring through telehealth services showcases the broad scope of organic bioelectronics' role in revolutionizing intelligent biosensing [290].
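To make the dynamic-sampling idea above concrete, the toy controller below lengthens the sampling interval while the signal is quiet and shortens it when consecutive readings change quickly; the thresholds and intervals are hypothetical and purely illustrative, not taken from any published device:

```python
def next_sampling_interval(previous, current, base=60.0, fast=5.0,
                           rel_threshold=0.05):
    """Return the time (in seconds) to wait before the next measurement.

    If the reading changed by more than `rel_threshold` relative to the
    previous value, sample quickly; otherwise fall back to the slow,
    energy-saving interval. All numbers here are illustrative.
    """
    if previous == 0:
        return fast
    change = abs(current - previous) / abs(previous)
    return fast if change > rel_threshold else base

# Example trace: a mostly flat signal with a sudden analyte shift.
readings = [1.00, 1.01, 1.00, 1.35, 1.60, 1.62, 1.61]
for prev, curr in zip(readings, readings[1:]):
    print(f"{prev:.2f} -> {curr:.2f}: wait {next_sampling_interval(prev, curr):.0f} s")
```

In an energy-constrained node, the slow interval dominates the duty cycle, which is where the power savings of adaptive sampling come from.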
Furthermore, the advancement of sustainable organic bioelectronic sensors holds significant promise, propelled by progress in materials science, biotechnology, and a growing environmental consciousness. These sensors are increasingly capable of utilizing biodegradable and environmentally friendly materials, minimizing their ecological footprint [291]. The integration of energy-harvesting technologies further lessens their dependence on traditional batteries by tapping into renewable sources like solar energy or vibrations [292, 293]. The potential for mass production of flexible and printable organic electronics opens doors to versatile applications, including healthcare and environmental monitoring. Moreover, affordable, sustainable bioelectronic sensors are pivotal in addressing global health challenges by facilitating remote disease monitoring in resource-limited regions.
## 8 Conclusions
Organic electronics in biosensing represent a promising and dynamic frontier with far-reaching implications for medical and environmental applications. This exciting convergence of organic materials and bioelectronics has unlocked new opportunities for precise, sensitive, and real-time detection of biomolecules and chemical species, transforming the landscape of medical diagnostics and environmental monitoring. The unique properties of organic materials, such as biocompatibility, flexibility, and tunability, have paved the way for developing innovative biosensing devices with diverse applications. From implantable biosensors for continuous health monitoring to wearable devices enabling personalized diagnostics, organic bioelectronics offers groundbreaking solutions that bridge the gap between traditional sensing technologies and cutting-edge medical practices. In medical diagnostics, organic bioelectronic sensors offer the potential to revolutionize disease detection and management. These sensors' label-free and real-time monitoring capabilities enable rapid and accurate analysis of biomarkers, facilitating early disease diagnosis and tailored treatment plans. Moreover, integrating organic bioelectronics into wearable health monitoring systems empowers individuals to actively participate in their healthcare, promoting proactive and personalized health management. Beyond medical applications, the versatility of organic bioelectronics finds significant relevance in environmental monitoring. From detecting pollutants and toxins to monitoring changes in environmental parameters, organic bioelectronic sensors contribute to sustainable environmental management and conservation efforts. These sensors offer the potential for rapid and efficient detection of environmental threats, enabling timely interventions and preserving ecological balance. However, as with any emerging technology, organic electronics in biosensing face challenges that warrant attention. Issues related to biocompatibility, stability, scalability, and manufacturing consistency must be addressed to ensure these biosensing platforms' reliability and long-term performance. In conclusion, integrating organic electronics in biosensing is promising for medical and environmental applications. With ongoing research and collaborative efforts between scientists, engineers, and industry stakeholders, organic bioelectronics is poised to drive transformative advancements in healthcare and environmental sustainability.
By harnessing the potential of organic materials and innovative sensing mechanisms, this frontier of biosensing promises to improve human health, protect the environment, and shape a more sustainable and technologically advanced future.
2309.03123
A Topological Proof of The Gibbard-Satterthwaite Theorem
We give a new proof of the Gibbard-Satterthwaite Theorem. We construct two topological spaces: one for the space of preference profiles and another for the space of outcomes. We show that social choice functions induce continuous mappings between the two spaces. By studying the properties of this mapping, we prove the theorem.
Yuliy Baryshnikov, Joseph Root
2023-09-06T16:00:28Z
http://arxiv.org/abs/2309.03123v1
# A Topological Proof of the Gibbard-Satterthwaite Theorem
###### Abstract
We give a new proof of the Gibbard-Satterthwaite Theorem. We construct two topological spaces: one for the space of preference profiles and another for the space of outcomes. We show that social choice functions induce continuous mappings between the two spaces. By studying the properties of this mapping, we prove the theorem.
## 1 Introduction
The Gibbard-Satterthwaite theorem is a landmark result in social choice theory and mechanism design. It delivers a striking message: the only voting rules which are not vulnerable to strategic manipulation are dictatorships. This fact not only has important implications for economics and political science, but it has also served as the starting point for the theory of mechanism design. The various branches of mechanism design correspond to the different ways to avoid the impossibilities implied by the Gibbard-Satterthwaite theorem. Proofs of the Gibbard-Satterthwaite theorem have primarily been combinatorial.1 An exception comes from Mossel and Racz (2012) who recently gave a "quantitative" proof using analytic techniques. Footnote 1: See for instance Gibbard (1973), Satterthwaite (1975), Barberá (1983), Reny (2001), and Sen (2001). In this paper, we provide a new proof of the Gibbard-Satterthwaite theorem using tools from algebraic topology. The key idea is to view the set of preference profiles and the set of outcomes as topological spaces. The social choice function then induces a continuous map between these spaces which can be analyzed using topological techniques. By viewing the problem from this lens, we provide a richer geometric view of the impossibility result of Gibbard and Satterthwaite. This note contributes to the literature on topological social choice theory. This area of research began with the publication of Chichilnisky (1980) which used topological techniques to establish an impossibility result for the aggregation of cardinal preferences. Baryshnikov (1993) gave a unified proof of both Arrow's impossibility theorem and the impossibility of Chichilnisky. While a large literature has since developed studying "topological social choice," to our knowledge, no topological proof of the Gibbard-Satterthwaite theorem has yet appeared. We start by proving the Muller-Satterthwaite theorem (Muller and Satterthwaite 1977) which states that the only monotonic and unanimous social choice functions are dictatorships. We construct two topological spaces \(N_{\mathscr{P}}\) and \(N_{A}\), for the set of preference profiles and the set of outcomes respectively. A monotonic and unanimous social choice function \(f\) induces a continuous map between \(N_{\mathscr{P}}\) and \(N_{A}\). \(N_{A}\) is easily seen to be homotopy equivalent to an \((n-2)\)-sphere. Baryshnikov (1993) showed that, in dimension \(n-2\), the space \(N_{\mathscr{P}}\) has the same homology groups as the Cartesian product of \(N\) spheres. We show that the homomorphism between the homologies of \(N_{\mathscr{P}}\) and \(N_{A}\) induced by \(f\) must be a projection onto one of the coordinates, proving the theorem.
## 2 Preliminaries
Let \(A=\{a_{1},\ldots,a_{n}\}\) be a finite set of alternatives. Let \(P\) denote the set of linear orders on \(A\). The symbol \(\succ\) will be used to denote a generic element of \(P\). For a given \(\succ\), let top(\(\succ\)) be the \(a\in A\) such that \(a\succ b\) for all \(b\neq a\). Let \(N\geq 1\) denote the number of agents. Elements of \(P^{N}\) are called preference profiles.
The set of preference profiles will be denoted \(\mathscr{P}\). A function \(f:\mathscr{P}\to A\) is called a **social choice function**. \(f\) is said to be **monotonic** if \(f(\succ_{1},\ldots,\succ_{N})=a\) and for each \(i\), \(\succ_{i}^{\prime}\) is a linear order such that \(a\succ_{i}b\) implies that \(a\succ_{i}^{\prime}b\) for all \(b\) then \(f(\succ_{1}^{\prime},\ldots,\succ_{N}^{\prime})=a\). \(f\) is said to be **unanimous** if whenever all agents top-rank some alternative \(a\), \(f(\succ_{1},\ldots,\succ_{N})=a\). \(f\) is said to be **dictatorial** if there is an agent \(i\) such that \(f(\succ_{1},\ldots,\succ_{N})=\text{top}(\succ_{i})\) for any \((\succ_{1},\ldots,\succ_{N})\). Finally, \(f\) is said to be **strategy-proof** if for every agent \(i\), either \(f(\succ_{1},\ldots,\succ_{i},\ldots,\succ_{N})\succ_{i}f(\succ_{1},\ldots, \succ_{i}^{\prime},\ldots,\succ_{N})\) or \(f(\succ_{1},\ldots,\succ_{i},\ldots,\succ_{N})=f(\succ_{1},\ldots,\succ_{i}^{ \prime},\ldots,\succ_{N})\) for every profile \((\succ_{1},\ldots,\succ_{i},\ldots,\succ_{N})\) and every \(\succ_{i}^{\prime}\). The aim is to prove the following theorem: **Theorem 1** (Gibbard-Satterthwaite).: _If \(n\geq 3\), a social choice function \(f\) is surjective and strategy-proof if and only if it is dictatorial._ ## 3 Topological Background We assume familiarity with basic notions from algebraic topology.2 For the reader's convenience, we briefly review a few concepts that will be central to our proof. Footnote 2: See Hatcher (2002) for a good introduction. An **abstract simplicial complex** is a set \(V\) together with a collection of subsets \(\Delta\) of \(V\) such that if \(\sigma\in\Delta\) and \(\sigma^{\prime}\subset\sigma\) then \(\sigma^{\prime}\in\Delta\). Any \(v\in V\) such that \(\{v\}\in\Delta\) is called a **vertex** of \(\Delta\). We write \(V(\Delta)\) for the set of vertices of \(\Delta\). If \(\sigma\in\Delta\) contains \(m+1\) elements, it is referred to as a **simplex** of \(\Delta\). We will restrict attention to finite complexes where \(V\) is a finite set. The topology of abstract simplicial complexes is derived from their so-called _geometric realizations_. Given a finite abstract simplicial complex \(S\), consider \(\mathbb{R}^{V(S)}\), the vector space whose coordinates are indexed by the vertices of \(S\). For any \(\sigma\in S\), we can define the standard \(\sigma\)-simplex in \(\mathbb{R}^{V(S)}\) as the convex hull of the unit vectors indexed by an element from \(\sigma\). The **standard geometric realization**\(|S|\) of the abstract simplicial complex \(S\) is the union of the standard \(\sigma\)-simplices in \(\mathbb{R}^{V(S)}\) for all \(\sigma\in S\). When we refer to the topology of a simplicial complex, we mean the topology of its standard geometric realization. Given two simplicial complexes \(S\) and \(T\). A function \(f:V(S)\to V(T)\) is called a **simplicial map** if for any \(\sigma\in S\) we have \(f(\sigma)\subset T\). A simplicial map \(f:V(S)\to V(T)\) induces a continuous map between the geometric realizations of \(S\) and \(T\) as follows. Any \(s\in|S|\) can be written as a convex combination of the vertices of \(S\), i.e. \(\sum_{v\in V(S)}\beta_{v}v\). By sending \(s=\sum_{v\in V(S)}\beta_{v}v\) to \(\sum_{v\in V(S)}\beta_{v}f(v)\), we get a continuous map \(f:|S|\to|T|\). 
Given a set \(X\) and an indexed collection of its subsets \(\{U_{\alpha}\}_{\alpha\in A}\) the **nerve** of \(\{U_{\alpha}\}_{\alpha\in A}\) is the abstract simplicial complex \(N\) where \(\sigma\subset A\) is in \(N\) if and only if \(\bigcap_{\alpha\in\sigma}U_{\alpha}\) is nonempty. Nerves are commonly associated with covers of a topological space. A collection of open sets \(U\) in a topological space \(X\) is called a **good covering** if the union of the sets in \(U\) is all of \(X\) and if every intersection of open sets in \(U\) is either contractible (i.e., is homotopy equivalent to a point) or empty. **Lemma 1** (Nerve lemma).: _Given a topological space \(X\) and a good covering \(U\), let \(N_{U}\) be the nerve associated to this covering. Then \(N_{U}\) and \(X\) are homotopy equivalent._ ## 4 Results ### The setup We first prove the Muller-Satterthwaite Theorem (Muller and Satterthwaite 1977). As is well-known, the Gibbard-Satterthwaite Theorem (Gibbard 1973)(Satterthwaite 1975) can easily be deduced as a corollary. **Theorem 2** (Muller-Satterthwaite).: _Suppose \(n\geq 3\). A social choice function is monotonic and unanimous if and only if it is dictatorial._ Our approach will be to construct covers of both \(\mathscr{P}\) and \(A\) such that a monotonic and unanimous social choice function will result in a simplicial map between the nerve of the cover of \(\mathscr{P}\) and the nerve of the cover of \(A\). Let \(1\leq i<j\leq n\). Define \[U^{+}_{ij} =\{\succ\in P:a_{i}\succ a_{j}\} U^{-}_{ij} =\{\succ\in P:a_{i}\prec a_{j}\}\] The sets \(U^{+}_{ij}\) and \(U^{-}_{ij}\) cover \(P\). Denote by \(I_{P}=\{(i,j,x):1\leq i<j\leq n,x\in\{+,-\}\}\) the index set. Let \(N_{P}\) be the nerve of this covering, so that \(J\subset I_{P}\) is in \(N_{P}\) if and only if \(\bigcap_{(i,j,x)\in J}U^{x}_{ij}\) is nonempty. Let \(\sigma=(\sigma_{1},\ldots,\sigma_{N})\in\{-,+\}^{N}\). Define \[U^{\sigma}_{ij}=\{(\succ_{1},\ldots,\succ_{N}):\succ_{l}\in U^{\sigma_{l}}_{ ij}\text{ for all }l\}\] The sets \(U^{\sigma}_{ij}\) cover \(\mathscr{P}\). Again, denote by \(I_{\mathscr{P}}=\{(i,j,\sigma):1\leq i<j\leq n,\sigma\in\{+,-\}^{N}\}\) the index set of this covering. Let \(N_{\mathscr{P}}\) be its nerve so that \(J\subset I_{\mathscr{P}}\) is in \(N_{\mathscr{P}}\) if and only if \(\bigcap_{(i,j,\sigma)\in J}U^{\sigma}_{ij}\) is nonempty. Finally, the sets \(U_{i}:=A-\{a_{i}\}\) ranging over \(1\leq i\leq n\) cover \(A\). Let \(N_{A}\) be the nerve of this covering so that \(J\subset\{1,\ldots,n\}\) is in \(N_{A}\) if and only if \(\bigcap_{j\in J}U_{j}\) is nonempty. **Lemma 2**.: _If \(f\) is monotonic and unanimous, it induces a well-defined simplicial map \(f^{s}:N_{\mathscr{P}}\to N_{A}\) where \((i,j,\sigma)\mapsto i\) if \(f(U^{\sigma}_{ij})\subset U_{i}\) and \((i,j,\sigma)\mapsto j\) if \(f(U^{\sigma}_{ij})\subset U_{j}\)._ Proof.: Fix \(i,j,\sigma\) and let \((\succ_{1}^{*},\ldots,\succ_{N}^{*})\) be any profile in \(U^{\sigma}_{ij}\) where all agents rank \(a_{i}\) and \(a_{j}\) above all other alternatives. Let \((\succ_{1}^{**},\ldots,\succ_{N}^{**})\) be the profile derived from \((\succ_{1}^{*},\ldots,\succ_{N}^{*})\) by moving \(a_{i}\) to the top of all rankings. First note that \(f(\succ_{1}^{*},\ldots,\succ_{N}^{*})\in\{a_{i},a_{j}\}\) since otherwise by monotonicity we would have \(f(\succ_{1}^{**},\ldots,\succ_{N}^{**})\neq a_{i}\), a violation of unanimity. 
If, say \(f(\succ_{1}^{*},\ldots,\succ_{N}^{*})=a_{i}\) then \(f(U^{\sigma}_{ij})\subset A-\{a_{j}\}\) since any \((\succ_{1},\ldots,\succ_{N})\in U^{\sigma}_{ij}\) with \(f(\succ_{1},\ldots,\succ_{N})=a_{j}\) would violate monotonicity as we should have \(f(\succ_{1},\ldots,\succ_{N})=f(\succ_{1}^{*},\ldots,\succ_{N}^{*})\). Finally, it is clear that \(f^{s}\) is a simplicial map since for any \(J\in N_{\mathscr{P}}\) if \((\succ_{1},\ldots,\succ_{N})\in\bigcap_{(i,j,\sigma)\in J}U^{\sigma}_{ij}\) then \(f(\succ_{1},\ldots,\succ_{N})\in\bigcap_{(i,j,\sigma)\in J}f(U^{\sigma}_{ij})\). Since \(f\) induces a simplicial map from \(N_{\mathscr{P}}\) to \(N_{A}\), it also induces homomorphisms \(f_{*}\) from the homology groups of \(N_{\mathscr{P}}\) to the homology groups \(N_{A}\). Likewise it induces homomorphisms \(f^{*}\) from the cohomology groups of \(N_{A}\) to those of \(N_{\mathscr{P}}\). By studying these maps, we will deduce the theorem. ### The topology of \(N_{\mathscr{P}}\) and \(N_{a}\) The topology of \(N_{A}\) is simple. **Lemma 3**.: _The simplicial complex \(N_{A}\) is homotopy equivalent to the \((n-2)\)-sphere._ Proof.: The only intersection of the \(U_{i}\) which is empty is the intersection of all the \(U_{i}\) so the simplicial complex \(N_{A}\) is isomorphic to the boundary of the standard \((n-1)\)-simplex. There are \(n\) maximal faces of \(N_{A}\) corresponding to \(\cap_{y\neq x}U_{y}\) for each \(x\in A\). Denote these faces \(F_{x}\). The topology of \(N_{\mathscr{P}}\) has already been described in (Baryshnikov 1993). We include the calculations here for completeness. To calculate the topology of \(N_{\mathscr{P}}\) we first construct a manifold \(M\) with a _good_ covering \(\{V_{\alpha}\}\). These are constructed so that the nerve \(N_{M}\) of the covering is identical to \(N_{\mathscr{P}}\). In this case, the homotopy types of \(M\) and \(N_{M}\) (and therefore \(N_{\mathscr{P}}\)) coincide by lemma 1, and the focus shifts to figuring out the topology of \(M\). Let \(W=\{(x_{1},\dots,x_{n})\in\mathbb{R}^{n}:\sum_{i}x_{i}=0\}\). The manifold \(M\) will be an open subset of the \(N(n-1)\)-dimensional vector space \(V:=W^{N}\). Let \(1\leq i<j\leq n\). Denote the intersection of the open halfspaces with \(W\) \[K^{+}_{ij}=\{(x_{1},\dots,x_{n})\in W:x_{i}>x_{j}\} K^{-}_{ij}=\{(x_{1},\dots,x_{n})\in W:x_{i}<x_{j}\},\] and define, for a vector of signs \(\sigma=(\sigma_{1},\dots,\sigma_{N})\in\{-,+\}^{N}\) the open polyhedral cones \[K^{\sigma}_{ij}=\{(\bar{x}^{1},\dots,\bar{x}^{N}):\;\bar{x}^{l}\in K^{\sigma_{ l}}_{ij}\text{ for all }l\}\] as products of such halfspaces over voters. Now we can introduce \(M\) as the union of these cones: \[M=\bigcup_{(i,j,\sigma)\in I_{\mathscr{P}}}K^{\sigma}_{ij}.\] The sets \(K^{\sigma}_{ij}\) are convex open polyhedra so their nonempty intersections are again convex open polyhedra and are therefore contractible. Let \(N_{M}\) be the nerve of the covering by \(M\) of these sets so that \(J\subset I_{\mathscr{P}}\) is in \(N_{M}\) if and only if \(\bigcap_{(i,j,\sigma)\in J}K^{\sigma}_{ij}\) is nonempty. **Lemma 4**.: \(N_{M}=N_{\mathscr{P}}\)_._ Proof.: Suppose \(J\in N_{M}\) so that there is some \((\bar{x}^{1},\dots,\bar{x}^{N})\) in \(K^{\sigma}_{ij}\) for all \((i,j,\sigma)\in J\). For each \(l\), let \(\epsilon_{l}\) be the smallest distance between any two distinct entries of \(\bar{x}^{l}\) and let \(\epsilon=\min\epsilon_{l}\). 
Choose \((\bar{y}^{1},\dots,\bar{y}^{N})\) so that \(|\bar{y}^{l}_{i}-\bar{x}^{i}_{i}|<\epsilon/2\) for all \(i,l\) and such that each entry of \(\bar{y}^{l}\) is distinct for each \(l\). Let \((\succ_{1},\dots,\succ_{N})\) be such that \(a_{i}\succ_{l}a_{j}\) if and only if \(\bar{x}^{l}_{i}>\bar{x}^{l}_{j}\) for all \(i,j,l\). \((\bar{y}^{1},\dots,\bar{y}^{N})\) breaks the indifferences of \((\bar{x}^{1},\dots,\bar{x}^{N})\) so that \((\succ_{1},\dots,\succ_{N})\in\bigcap_{(i,j,\sigma)\in J}U^{\sigma}_{ij}\) and \(J\in N_{\mathscr{P}}\). Conversely, suppose \(J\in N_{\mathscr{P}}\) so that there is some \((\succ_{1},\dots,\succ_{N})\in U^{\sigma}_{ij}\) for every \((i,j,\sigma)\in J\). Let \((\bar{x}^{1},\dots,\bar{x}^{N})\) be any vector of utility representations. Without loss, we can rescale each \(\bar{x}^{i}\) so that the utilites for each agent sum to zero. Then for each \((i,j,\sigma)\in J\) we have \((\bar{x}^{1},\dots,\bar{x}^{N})\in K^{\sigma}_{ij}\) and \(J\in N_{M}\). The nerve lemma states that \(N_{M}\) is homotopic to the manifold \(M\). To further describe \(M\), let \(\Lambda\) be the set of functions from \(\{(i,j):\;1\leq i<j\leq n\}\) to the integers \(\{1,2,\dots,N\}\). For any \(\lambda\in\Lambda\), let \[R^{\lambda}=\{(\bar{x}^{1},\ldots,\bar{x}^{N})\in V:\text{ for all }1\leq i<j\leq n,\ \bar{x}^{\lambda(i,j)}_{i}=\bar{x}^{\lambda(i,j)}_{j}\}\] **Claim 1**.: The manifold \(M\) is the complement \(V-\cup_{\lambda\in\Lambda}R^{\lambda}\). Proof.: Fix \(i<j\) and \(\sigma\). Any \((\bar{x}^{1},\ldots,\bar{x}^{N})\) in \(K^{\sigma}_{ij}\) is not in \(\cup_{\lambda\in\Lambda}R^{\lambda}\) since for any \((\bar{y}^{1},\ldots,\bar{y}^{N})\) in \(\cup_{\lambda\in\Lambda}R^{\lambda}\) there is some \(l\) such that \(\bar{y}^{l}_{i}=\bar{y}^{l}_{j}\). This proves that \(M\subset V-\cup_{\lambda\in\Lambda}R^{\lambda}\). Conversely, for any \((\bar{y}^{1},\ldots,\bar{y}^{N})\) in \(V-\cup_{\lambda\in\Lambda}R^{\lambda}\) there is some \(i<j\) such that \(\bar{y}^{l}_{i}\neq\bar{y}^{l}_{j}\) for all \(l\). In this case, \((\bar{y}^{1},\ldots,\bar{y}^{N})\subset\cup_{\sigma}K^{\sigma}_{ij}\subset \cup_{\sigma}\cup_{i<j}K^{\sigma}_{ij}=M\). Thus the manifold \(M=V-\cup_{\lambda\in\Lambda}R^{\lambda}\) is the complement to the union of a collection of finitely many linear spaces in \(V\), a construct often referred to as the _arrangement of linear subspaces_ (Bjorner 1994). **Theorem 3**.: _The cohomology groups \(H^{k}(N_{M})\) are \(0\) in positive dimensions less than \(n-2\) and \(H^{n-2}(N_{M})\cong\mathbb{Z}^{N}\)._ Proof.: The mapping \(\lambda\) which associates voter \(k=\lambda(ij)\) to each unordered pair \((ij)\) can be interpreted as the coloring edges of the complete graph on \(n\) vertices \(A\) with \(N\) colors. For a color \(k\), consider the graph \(\Gamma_{k}\) with vertex set \(A\) and the edges colored \(k\). For a point in \(R^{\lambda}\), the coordinates in its \(k\)-th component are the same if they lie in the same connected component of \(\Gamma_{k}\) (and generically different, otherwise). This implies that the dimension of the linear subspace \(R^{\lambda}\) is equal to the sum over colors \(k\) of the numbers of connected components of \(\Gamma_{k}\), minus \(1\) (to reflect the constraint that the coordinates sum up to \(0\)). 
Equivalently, that dimension is given by \[d(\lambda)=\sum_{k}\left(n-|E(\Gamma_{k})|+h_{1}(\Gamma_{k})-1\right),\] where \(|E(\Gamma_{k})|\) is the number of edges of color \(k\), and \(h_{1}(\Gamma_{k})\) is the rank of the 1st homology group of \(\Gamma_{k}\) (this follows from two definitions of Euler characteristics of a graph, via numbers of simplices and ranks of homologies). Collecting all the terms, we obtain \[d(\lambda)=(n-1)N-\binom{n}{2}+\sum_{k}h_{1}(\Gamma_{k}).\] We notice that each \(h_{1}(\Gamma_{k})\) is the sum of the ranks of 1st homology groups over edge-connected components of \(\Gamma_{k}\). We claim that \(\sum_{k}h_{1}(\Gamma_{k})\) is maximized when all edges are of the same color. Indeed, one can see that if one repaints a connected component in \(\Gamma_{k}\) into some other color \(l\), the 1-cycles in that component of \(\Gamma_{k}\) become 1-cycles of \(\Gamma_{l}\), while new 1-cycles might appear, so the total \(h_{1}\) does not decrease. Iterating, we obtain that all the dimensions of \(R^{\lambda}\) do not exceed those of constant \(\lambda\). It remains to notice that at the last repainting step, at least one new 1-cycle is introduced. It follows that there are \(N\) linear spaces of maximal dimension equal to \((n-1)N-n+1=(N-1)(n-1)\) among linear subspaces \(R^{\lambda}\): the \(k\)-th subspace corresponds to all edges \((ij)\) colored in color \(k\). For such constant \(\lambda\equiv k\), \(R^{\lambda}=\{(\bar{x}^{1},\ldots,\bar{x}^{N}):\bar{x}^{k}_{1}=\cdots=\bar{x}^{k}_{n}\}\). All subspaces \(R^{\lambda}\) with non-constant \(\lambda\)'s have smaller dimensions. Denote the union of the linear subspaces \(R^{\lambda}\) of maximal dimension as \[R^{\prime}:=\cup_{\text{constant }\lambda}R^{\lambda}\subset\cup_{\lambda}R^{\lambda}=:R.\] One can easily verify that \(V-R^{\prime}\) is the product of \(N\) \((n-1)\)-dimensional real spaces with the origins removed, and thus has the homotopy type of the product of \(N\) spheres of dimension \((n-2)\). In particular, its cohomologies are zero in the dimensions between \(0\) and \((n-2)\), and the rank of \(H^{n-2}(V-R^{\prime})\) is \(N\). Comparing the cohomologies of \(V-R^{\prime}\) and \(V-R\) is easier using Alexander duality. Consider the \((N-1)(n-1)\)-dimensional sphere \(S=p_{\infty}\cup V\); and compactify \(R,R^{\prime}\) by adding the point \(p_{\infty}\) at infinity to these arrangements. Then Alexander duality asserts that \[H^{l}(V-R)\cong H_{(N-1)(n-1)-l-1}(R+p_{\infty},p_{\infty}),\qquad H^{l}(V-R^{\prime})\cong H_{(N-1)(n-1)-l-1}(R^{\prime}+p_{\infty},p_{\infty}).\] Consider the long exact sequence for the triple \(\{p_{\infty}\}\subset R^{\prime}+p_{\infty}\subset R+p_{\infty}\) \[\cdots\to H_{l}(R^{\prime}+p_{\infty},p_{\infty})\to H_{l}(R+p_{\infty},p_{\infty})\to H_{l}(R+p_{\infty},R^{\prime}+p_{\infty})\to H_{l-1}(R^{\prime}+p_{\infty},p_{\infty})\to\cdots\] Since \(R\) and \(R^{\prime}\) have no cells in dimensions above the maximal one (i.e., \((N-1)(n-1)-(n-2)-1\)), and the pair \((R,R^{\prime})\) has no cells in the maximal dimension, one arrives at the desired conclusion. It will be useful to have a basis for \(H_{n-2}(N_{\mathscr{P}})\) and \(H^{n-2}(N_{\mathscr{P}})\). To that end, we will first calculate the basis for the case when \(N=1\). In this case, \(N_{\mathscr{P}}=N_{P}\) and \(M\) is simply \(\mathbb{R}^{n}\) minus the diagonal \(D\) where \(D=\{x\in\mathbb{R}^{n}:x_{i}=x_{j}\text{ for all }i,j\}\).
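For a concrete feel for \(N_{P}\), the case \(n=3\) can be enumerated directly. The sketch below is only an illustration, not part of the argument: it checks which vertex sets \((i,j,\pm)\) admit a common linear order and computes the Euler characteristic of \(N_{P}\), which comes out to \(0\), as expected for a complex with the homotopy type of the circle \(S^{n-2}=S^{1}\).

```python
from itertools import combinations, permutations

n = 3
alternatives = list(range(1, n + 1))
pairs = [(i, j) for i in alternatives for j in alternatives if i < j]

# One vertex of the nerve N_P for each set U_ij^+ or U_ij^-.
vertices = [(i, j, s) for (i, j) in pairs for s in '+-']

def signs(order):
    """Map a linear order (best first) to the sign pattern it induces."""
    rank = {a: k for k, a in enumerate(order)}
    return {(i, j): '+' if rank[i] < rank[j] else '-' for (i, j) in pairs}

def is_simplex(vs):
    """A vertex set is a simplex iff some linear order lies in all its U's."""
    return any(all(signs(order)[(i, j)] == s for (i, j, s) in vs)
               for order in permutations(alternatives))

face_counts = [sum(is_simplex(c) for c in combinations(vertices, k))
               for k in range(1, len(vertices) + 1)]
print("faces by dimension:", face_counts)      # [6, 12, 6, 0, 0, 0] for n = 3
euler = sum((-1) ** k * c for k, c in enumerate(face_counts))
print("Euler characteristic:", euler)           # 0, matching chi(S^1)
```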
Let \(\Delta_{P}\) be the \(n(n-1)-1\) simplex with the same vertices as \(N_{P}\). Consider the unoriented cyclic graph \(g^{\circ}\) with vertices \(\{1,2,\ldots,n\}\) and edges \((1,2),(2,3),\ldots,(n-1,n),(n,1)\). Any orientation \(g\) of the edges of \(g^{\circ}\), gives a face in \(\Delta_{P}\) including the vertices \((i,j,x)\) such that \(x=+\) if \(i\gets i+1\) and \(x=-\) if \(i\to i+1\) in \(g\). We denote this simplex as \(\delta(g)\) and write its boundary as \(h(g)\). **Lemma 5**.: _If \(g\) is an oriented cycle, \(h(g)\) generates \(H_{n-2}(N_{P})\). Otherwise it is zero._ Proof.: Suppose that \(\delta(g)=\{(1,2,\alpha_{1}),\ldots,(n-1,n,\alpha_{n-1}),(1,n,\alpha_{n})\}\). Any \(n-1\) give rise to an acyclic binary relation on \(\{1,2,\ldots,n\}\) which can be extended to a strict order so the corresponding face is in \(N_{P}\). Since \(h(g)=\partial\delta(g)\), it is in \(H^{n-1}(N_{P})\). If \(g\) is acyclic, \(\delta(g)\in N_{P}\) so that \(h(g)=0\) in \(H_{n-2}(N_{P})\). Next, let \(g\) be an oriented cycle, for example \(\alpha_{k}=+\) for all \(k<n\) and \(\alpha_{n}=-\). The sets \(R_{12}^{+},\ldots,R_{n-1n}^{+},R_{1n}^{-}\) cover \(R^{n}-D\) since any \(x\) not in their union has \(x_{1}\leq x_{2}\leq\cdots\leq x_{n}\leq x_{1}\). Then the inclusion of the subcomplex of \(N_{P}\) whose maximal faces are the \((n-1)\)-subsets of \(\delta(g)\) is a homotopy equivalence and \(h(g)\) is a generator for \(H_{n-1}(N_{P})\). Going forward, fix an oriented cycle \(\hat{g}\) and let \(c\) be the associated generator from above. The universal coefficient theorem says that \(H_{n-2}(N_{P})\cong H^{n-2}(N_{P})\) and that we can find an element \(c^{*}\) of \(H^{n-2}(N_{P})\) where \((c,c^{*})=1\) and \(c^{*}\) is a generator for \(H^{n-2}(N_{P})\).3 Footnote 3: \((c,c^{*})\) here denotes the pairing between homology and cohomology. Now we're ready to give a basis for \(H^{n-2}(N_{P})\). Let \(\Delta_{\mathscr{P}}\) be the simplex with the same vertices as \(N_{\mathscr{P}}\). Any collection \((g_{1},\ldots,g_{N})\) of orientations of \(g^{0}\) gives a face in \(\Delta_{\mathscr{P}}\) including the \((i,j,\sigma)\) where \(\sigma_{l}=+\) if \(i\to i+1\) in \(g_{l}\) and \(\sigma_{l}=-\) if \(i\gets i+1\) in \(g_{l}\). We denote this simplex as \(\delta(g_{1},\ldots,g_{N})\) and write its boundary as \(h(g_{1},\ldots,g_{N})\). For each \(l\), let \(p_{l}\) be the simplicial map from \(N_{\mathscr{P}}\) to \(N_{P}\) sending \((i,j,\sigma)\) to \((i,j,\sigma_{l})\). Let \(p_{l}^{*}\) be the induced homomorphism \(H^{n-2}(N_{P})\to H^{n-2}(N_{\mathscr{P}})\). For each \(l\), fix some \((g^{1}_{1},\ldots,g^{l}_{N})\) where \(g^{k}_{l}\) is acylic if \(k\neq l\) and \(g^{l}_{l}=\hat{g}\) let \(h_{l}=h(g^{1}_{1},\ldots,g^{l}_{N})\). **Lemma 6**.: _The collection \(\{h_{1},\ldots,h_{N}\}\) is a basis for \(H_{n-2}(N_{\mathscr{P}})\) and \(\{p^{*}_{1}(c^{*}),\ldots,p^{*}_{N}(c^{*})\}\) is the dual basis for \(H^{n-2}(N_{\mathscr{P}})\)._ Proof.: From the universal coefficient theorem, \(H_{n-2}(N_{\mathscr{P}})\cong H^{n-2}(N_{\mathscr{P}})\). For any \(k\) and \(l\), \((h_{k},p^{*}_{l}(c^{*}))\) is \(1\) if \(k=l\) and is zero otherwise. It is a simple exercise to verify that both must then be bases. This allows us to conclude the following. 
**Corollary 1**.: _For any two tuples of orientations \((g_{1},\ldots,g_{N})\) and \((g^{\prime}_{1},\ldots,g^{\prime}_{N})\) where for some \(k\), (1) \(g_{k}=g^{\prime}_{k}\) (2) both \(g_{k}\) and \(g^{\prime}_{k}\) are acyclic and (3) \(g_{l}\) and \(g^{\prime}_{l}\) are acyclic for all \(l\neq k\) then \(h(g_{1},\ldots,g_{N})=h(g^{\prime}_{1},\ldots,g^{\prime}_{N})\) in \(H_{n-2}(N_{\mathscr{P}})\)._ ### The Topology of a Social Choice Function Let \(d^{*}\) be a generator of \(H^{n-2}(N_{A})\). **Proposition 1**.: _Suppose \(f\) is monotonic and unanimous. \((f_{*}(h_{l}),d^{*})=1\) if and only if \(l\) is a dictator of \(f\). Otherwise it is zero._ Proof.: Let \((1,2,\sigma^{1}),\ldots(n-1,n,\sigma^{n-1}),(1,n,\sigma^{n})\) be the vertices of \(\delta(g^{1}_{1},\ldots,g^{l}_{N})\). \(f^{*}\) maps each of these vertices onto a vertex of \(N_{A}\). If \(f^{*}\) maps any two vertices of \(\delta(g^{1}_{1},\ldots,g^{l}_{N})\) to the same vertex of \(N_{A}\) then \((f_{*}(h_{l}),d^{*})=0\) and \(l\) is not a dictator since in this case there is some \(a\in A\) which is not chosen even when \(l\) top-ranks it. Conversely, suppose that \(f^{s}\) is injective on the vertices of \(\delta(g^{l}_{1},\ldots,g^{l}_{N})\). Consider some \(g^{\prime}\) derived from \(g^{l}_{k}\) by swapping one of the arrows without forming a cycle. All but one of the vertices of \(\delta(g^{1}_{1},\ldots,g^{\prime},\ldots g^{l}_{N})\) are vertices of \(\delta(g^{1}_{1},\ldots,g^{\prime}_{N})\). Let \(h^{\prime}_{l}=h(g^{1}_{1},\ldots,g^{\prime},\ldots g^{l}_{N})\). Corollary 1 implies \(h_{l}=h^{\prime}_{l}\). The new vertex of \(\delta(g^{l}_{1},\ldots,g^{\prime},\ldots g^{l}_{N})\) must mapped to the same vertex of \(N_{A}\) as the one it replaced in \(\delta(g^{l}_{1},\ldots,g^{l}_{N})\) since otherwise, \((f_{*}(h_{l}),d^{*})\neq 0\) and \((f_{*}(h_{l}),d^{*})=0\). Repeating this process, changing one arrow at a time, for any \(i\) we can reach the a tuple of orientations where \(i\gets i+1\) for all orientations. Finally, consider \(h(\hat{g},\ldots,\hat{g})\). Unanimity implies that \((f_{*}(h(\hat{g},\ldots,\hat{g})),d^{*})=1\). Together with proposition 1, we see that there must be exactly one dictator, concluding the proof of the Muller-Satterthwaite theorem. To prove the Gibbard-Satterthwaite theorem, we need the following simple fact, well-known in social choice theory (see for example Muller and Satterthwaite (1977)). **Proposition 2**.: _A social choice function is monotonic and unanimous if and only if it is surjective and strategy-proof._ Proof.: A unanimous social choice function is clearly surjective. A monotonic social choice function is also strategy-proof: if there were some \(f(\succ_{i},\succ_{-i})\prec_{i}f(\succ_{i}^{\prime},\succ_{-i})\), letting \(\succ_{i}^{*}\) be the preference derived from \(\succ_{i}\) by pushing \(f(\succ_{i}^{\prime},\succ_{-i})\) to the top, monotonicity implies \(f(\succ_{i},\succ_{-i})=f(\succ_{i}^{*},\succ_{-i}\ )=f(\succ_{i}^{\prime},\succ_{-i})\), a contradiction. For the converse, suppose that \(f\) is strategy-proof and surjective. Let \(f(\succ_{1},\ldots,\succ_{N})=a\) and for each \(i\), \(\succ_{i}^{\prime}\) is a linear order such that \(a\succ_{i}b\) implies that \(a\succ_{i}^{\prime}b\) for all \(b\). 
We have that \(f(\succ_{1},\succ_{2},\ldots,\succ_{N})=f(\succ_{1}^{\prime},\succ_{2},\ldots,\succ_{N})=f(\succ_{1}^{\prime},\succ_{2}^{\prime},\ldots,\succ_{N})=\cdots=f(\succ_{1}^{\prime},\succ_{2}^{\prime},\ldots,\succ_{N}^{\prime})\), where each equality follows from applying strategy-proofness twice to the agent whose preference is changed, so that \(f\) is monotonic. If \(f\) is strategy-proof and surjective it is also unanimous since for any \(a\) there is some \((\succ_{1},\ldots,\succ_{N})\) such that \(f(\succ_{1},\ldots,\succ_{N})=a\) and for any profile \((\succ_{1}^{\prime},\ldots,\succ_{N}^{\prime})\) where all agents top-rank \(a\), we have \(f(\succ_{1}^{\prime},\ldots,\succ_{N}^{\prime})=a\) by monotonicity. This proves the Gibbard-Satterthwaite Theorem.
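Because the statement of Theorem 1 is entirely finite, it can also be sanity-checked by brute force on small instances. The sketch below is only an illustration (the rules and names are ours, not the paper's): it tests strategy-proofness exhaustively for \(n=3\) alternatives and \(N=2\) agents, confirming that a dictatorship passes while a Borda-style rule with lexicographic tie-breaking admits a profitable misreport.

```python
from itertools import permutations, product

A = ('a1', 'a2', 'a3')                      # alternatives
ORDERS = list(permutations(A))              # linear orders, best alternative first
N = 2                                       # number of agents

def prefers(order, x, y):
    return order.index(x) < order.index(y)  # x is ranked strictly above y

def dictatorship(profile):
    return profile[0][0]                    # agent 1's top choice

def borda(profile):
    score = {a: sum(len(A) - 1 - order.index(a) for order in profile) for a in A}
    best = max(score.values())
    return min(a for a in A if score[a] == best)   # lexicographic tie-break

def strategy_proof(f):
    for profile in product(ORDERS, repeat=N):
        for i in range(N):
            for lie in ORDERS:
                misreport = profile[:i] + (lie,) + profile[i + 1:]
                if prefers(profile[i], f(misreport), f(profile)):
                    return False, (profile, i, lie)
    return True, None

print("dictatorship strategy-proof:", strategy_proof(dictatorship)[0])
ok, witness = strategy_proof(borda)
print("borda-with-tie-break strategy-proof:", ok)
if not ok:
    print("manipulation found:", witness)
```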
2301.00208
Invertibility preserving mappings onto finite C*-algebras
We prove that every surjective unital linear mapping which preserves invertible elements from a Banach algebra onto a C*-algebra carrying a faithful tracial state is a Jordan homomorphism thus generalising Aupetit's 1998 result for finite von Neumann algebras.
Martin Mathieu, Francois Schulz
2022-12-31T14:33:17Z
http://arxiv.org/abs/2301.00208v1
# Invertibility preserving mappings onto ###### Abstract. We prove that every surjective unital linear mapping which preserves invertible elements from a Banach algebra onto a C*-algebra carrying a faithful tracial state is a Jordan homomorphism thus generalising Aupetit's 1998 result for finite von Neumann algebras. Key words and phrases:C*-algebras, tracial states, Jordan homomorphisms, invertibility preserving mappings 2020 Mathematics Subject Classification: 47B48, 47A10, 46L05, 46L30, 16W10, 17C65 ## 1. Introduction A linear mapping \(T\) between two unital, complex Banach algebras is said to be _spectrum-preserving_ if, for every element \(a\) in the domain algebra, its spectrum \(\sigma(a)\) coincides with \(\sigma(Ta)\). Provided the codomain is semisimple and \(T\) is surjective, \(T\) must be bounded (a result belonging to Aupetit [1, Theorem 5.5.2]). Provided the domain is semisimple too, \(T\) is injective; this follows from Zemanek's characterisation of the radical ([1, Theorem 5.3.1]) as \[\sigma(a+x)=\sigma(Ta+Tx)=\sigma(Tx)=\sigma(x)\quad\text{for each $a$ such that $Ta=0$ and every $x$}\] which implies that \(a\) belongs to the radical, which is zero in the semisimple case. Moreover, \(T1=1\), that is, \(T\) is _unital_. As a result, a surjective spectrum-preserving mapping between semisimple Banach algebras is a topological isomorphism and one naturally wonders if it is also an isomorphism of (some of) the algebraic structure. A _Jordan homomorphism_ is a linear mapping \(T\) with the property \(T(a^{2})=(Ta)^{2}\) for all \(a\) in the domain (which is equivalent to \(T(ab+ba)=TaTb+TbTa\) for all \(a\) and \(b\)). A Jordan isomorphism turns out to be spectrum-preserving, and a lot of work has been invested to explore to what extent the reverse implication holds. A pleasant survey on the history of this topic is contained in [2]; see also [8], [13] and [14] for related questions. In [3], Aupetit proved that every surjective spectrum-preserving linear mapping between von Neumann algebras is a Jordan isomorphism. It is not difficult to see that it suffices that one of the algebras is a unital C*-algebra of real rank zero and the other a unital semisimple Banach algebra. However, the problem remains open for general \(C\)*-algebras. It is also known that the assumption on \(T\) can be relaxed to a surjective unital _invertibility-preserving_ linear mapping (that is, \(\sigma(Ta)\subseteq\sigma(a)\) for all \(a\)); the conclusion is then that \(T\) is a Jordan homomorphism. In an earlier paper [2], Aupetit had already obtained the same result for finite von Neumann algebras. The main tool in that result was the Fuglede-Kadison determinant \(\Delta\); see [9, pp. 105] for its definition and properties. Its relation to the finite trace \(\tau\) is given by \(\Delta(a)=\exp(\tau(\log|a|))\), for every invertible element \(a\). In our approach we bypass the determinant and work exclusively with a (faithful) tracial state instead in order to obtain the following generalisation. **Theorem**.: _Let \(B\) be a unital complex Banach algebra and let \(A\) be a unital finite C*-algebra. Let \(T\colon B\to A\) be a surjective unital linear mapping which preserves invertible elements. Then \(T\) is a Jordan homomorphism._ We largely follow Aupetit's arguments but, to emphasise the differences, we split up the proof into a series of lemmas in the next section. ## 2. Preliminaries Let \(A\) be a unital \(C\)*-algebra. 
We say that \(A\) is _finite_ if it comes equipped with a _faithful tracial state_, that is, a linear functional \(\tau\) such that \(\tau(1)=1=\|1\|\), \(\tau(ab)=\tau(ba)\) for all \(a,b\in A\) and \(\tau(a^{*}a)=0\) implies \(a=0\). Such a functional is necessarily positive and bounded. We denote the set of all _states_ of \(A\) (positive linear functionals of norm \(1\)) by \(S\) and by \(Sp\) the subset of all _spectral states_\(f\) of \(A\), that is, \(f\in S\) and \(|f(x)|\leq\rho(x)\) for every \(x\in A\), where \(\rho(x)\) denotes the spectral radius of \(x\). It is known ([7, Theorem 4 in SS13]) that every \(f\in Sp\) has the trace property, that is, \(f(ab)=f(ba)\) for all \(a,b\in A\), and that \(f(a)\in\operatorname{co}\sigma(a)\), the convex hull of the spectrum \(\sigma(a)\) of \(a\), for each \(a\in A\); see, [7, Lemma 2 in SS13] or [1, Lemma 4.1.15]. Conversely, every tracial state \(\tau\) belongs to \(Sp\) as follows from the subsequent argument. For \(a\in A\), denote by \(V(a)=\{f(a)\mid f\in S\}\) its (_algebra_) _numerical range_[7]. As is shown in [5, Lemma], and attributed to [11, SS2], \(\operatorname{co}\sigma(a)=\bigcap_{b\in G(A)}V(bab^{-1})\), where \(G(A)\) stands for the group of invertible elements in \(A\). Clearly, \(\tau(a)\) belongs to the right hand side of the above identity and hence, \(\tau\in Sp\). (Compare also [12].) **Lemma 2.1**.: _Let \(A\) be a unital C*-algebra with faithful tracial state \(\tau\). Suppose that \(g\colon\mathbb{C}\to A\) is an entire function with values in \(G(A)\). Then the mapping \(g_{\tau}\colon\mathbb{C}\to\mathbb{R}\), \(g_{\tau}(\lambda)=\tau(\log(|g(\lambda)|)\) is harmonic._ Proof.: The argument in the proof of Theoreme 1.11 in [2], which is already entirely formulated in terms of the trace, takes over verbatim. In the following, \(T\) will denote a surjective unital linear mapping defined on a (complex, unital) Banach algebra \(B\) with values in a finite unital \(C\)*-algebra \(A\). We will assume that \(T\)_preserves invertible elements_ so that \(TG(B)\subseteq G(A)\). It follows from [1, Theorem 5.5.2] that \(T\) is bounded. Fix \(a,b\in B\) and define \[g\colon\mathbb{C}\times\mathbb{C}\longrightarrow G(A),\quad g(\lambda,\mu)= T(e^{\lambda a}e^{\mu b})e^{-\lambda Ta}e^{-\mu Tb}.\] Then \(g\) is a separately entire function. Its series expansion reads as follows \[\begin{split} g(\lambda,\mu)=1&+\frac{\lambda^{2}}{2} \big{(}T(a^{2})-(Ta)^{2}\big{)}+\frac{\mu^{2}}{2}\big{(}T(b^{2})-(Tb)^{2}\big{)} \\ &+\lambda\mu\big{(}T(ab)-TbTa\big{)}+\frac{\lambda^{3}}{6}\big{(} T(a^{3})+2(Ta)^{3}-3T(a^{2})Ta\big{)}\\ &+\frac{\lambda^{2}\mu}{2}\big{(}T(a^{2}b)+(Ta)^{2}Tb+Tb(Ta)^{2}- T(a^{2})Tb-2T(ab)Ta\big{)}\\ &+\frac{\lambda\mu^{2}}{2}\big{(}T(ab^{2})+2TbTaTb-2T(ab)Tb-T(b ^{2})Ta\big{)}\\ &+\frac{\mu^{3}}{6}\big{(}T(b^{3})+2(Tb)^{3}-3T(b^{2})Tb\big{)}+ \text{remainder}\end{split} \tag{2.1}\] where the remainder only contains terms of degree \(4\) or higher in \(\lambda\) and \(\mu\); we will put it to good use in the proof of the main theorem. By Lemma 2.1, the function \[g_{\tau}\colon\mathbb{C}\times\mathbb{C}\longrightarrow\mathbb{R},\quad g_{ \tau}(\lambda,\mu)=\tau(\log(|g(\lambda,\mu)|))\] is separately harmonic in \(\lambda\) and \(\mu\) and thus there exists a separately entire function \(h(\lambda,\mu)\) such that \(\operatorname{Re}h(\lambda,\mu)=g_{\tau}(\lambda,\mu)\) for all \(\lambda,\mu\in\mathbb{C}\). The next step will be to establish the following three lemmas; for their proofs, see Section 3. 
**Lemma 2.2**.: _For all \(\lambda,\mu\in\mathbb{C}\), we have \(\,e^{g_{\tau}(\lambda,\mu)}\leq\|g(\lambda,\mu)\|\)._ **Lemma 2.3**.: _With the above notation and caveats, let \(g^{*}(\lambda,\mu)\) stand for \((g(\lambda,\mu))^{*}\). Then there exists \(r>0\) such that, for all \(\lambda,\mu\in\mathbb{C}\) with \(|\lambda|,|\mu|<r\), we have_ \[2\operatorname{Re}h(\lambda,\mu)=\tau\big{(}\log(g^{*}(\lambda,\mu)g(\lambda, \mu)\big{)}\!\!=-\sum_{k=1}^{\infty}\frac{1}{k}\tau\big{(}(1-g^{*}(\lambda,\mu )g(\lambda,\mu))^{k}\big{)}. \tag{2.2}\] **Lemma 2.4**.: _For all \(\lambda,\mu\) in a neighbourhood of zero, \(\tau\big{(}\log(g^{*}(\lambda,\mu)g(\lambda,\mu)\big{)}\!\!=0\)._ ## 3. Proofs of the Lemmas and the Main Theorem The argument of the first lemma differs from [2] in that we cannot make use of the determinant in order to locate the appropriate values in the convex hull of the spectrum. Proof of Lemma 2.2.: As we observed above, \(\tau(\log(|g(\lambda,\mu)|))\in\operatorname{co}\sigma(\log(|g(\lambda,\mu)|))\) for all \(\lambda,\mu\in\mathbb{C}\). Since the spectrum of \(\log(|g(\lambda,\mu)|)\) is contained in \(\mathbb{R}\), it follows that \(\operatorname{co}\sigma(\log(|g(\lambda,\mu)|))=[s,t]\) for some \(s,t\in\sigma(\log(|g(\lambda,\mu)|))\) with \(s\leq t\). Since the exponential function is strictly increasing, the Spectral Mapping Theorem implies that \[e^{g_{\tau}(\lambda,\mu)}\in[e^{s},e^{t}]=\operatorname{co}\sigma(e^{\log(|g( \lambda,\mu)|)})=\operatorname{co}\sigma(|g(\lambda,\mu)|).\] As a result, \(0<e^{g_{\tau}(\lambda,\mu)}\leq\rho(|g(\lambda,\mu)|)\) and therefore, \[e^{2\,g_{\tau}(\lambda,\mu)} \leq\rho(|g(\lambda,\mu)|)^{2}=\rho(|g(\lambda,\mu)|^{2})\] \[=\rho(g^{*}(\lambda,\mu)g(\lambda,\mu))=\|g^{*}(\lambda,\mu)g( \lambda,\mu)\|\] \[=\|g(\lambda,\mu)\|^{2}\] as claimed. The next argument is rather straightforward. Proof of Lemma 2.3.: As \(g(0,0)=T1=1\), by continuity, there is \(r>0\) such that, for all \(\lambda,\mu\) with \(|\lambda|,|\mu|<r\), we have \(\|1-g^{*}(\lambda,\mu)g(\lambda,\mu)\|<1\). The series expansion of the logarithm thus yields \[-\log(g^{*}(\lambda,\mu)g(\lambda,\mu))=\sum_{k=1}^{\infty}\frac{1}{k}(1-g^{* }(\lambda,\mu)g(\lambda,\mu))^{k}. \tag{3.1}\] The definition of \(g_{\tau}\) entails that \[2\,\mathrm{Re}\,h(\lambda,\mu)=2\,\tau(\log(|g(\lambda,\mu)|))=\tau(\log(|g( \lambda,\mu)|^{2}))=\tau\big{(}\log(g^{*}(\lambda,\mu)g(\lambda,\mu)\big{)}. \tag{3.2}\] Combining these two identities gives the claim. The proof of the third lemma follows exactly Aupetit's arguments. (There appears to be some misprint at the bottom of page 61 and top of page 62 of [2].) Proof of Lemma 2.4.: For all \(\lambda,\mu\in\mathbb{C}\), we have \[\big{|}e^{h(\lambda,\mu)}\big{|}=e^{\mathrm{Re}\,h(\lambda,\mu)}=e^{g_{\tau}( \lambda,\mu)}\leq\|g(\lambda,\mu)\|\] by Lemma 2.2. Since \[\|g(\lambda,\mu)\|\leq\|T\|\,e^{|\lambda|(\|a\|+\|Ta\|)+|\mu|(\|b\|+\|Tb\|)}\] it follows that \(e^{h(\lambda,\mu)}=e^{\alpha\lambda+\beta\mu+\gamma}\) for suitable \(\alpha,\beta,\gamma\in\mathbb{C}\) ([4, Lemma 3.2]). As \(g_{\tau}(0,0)=0\) we have \(|e^{\gamma}|=1\), thus we may assume that \(\gamma=0\) (since we need only the real part of \(\gamma\)). Therefore, \(2\,\mathrm{Re}\,h(\lambda,\mu)=\alpha\lambda+\beta\mu+\bar{\alpha}\bar{ \lambda}+\bar{\beta}\bar{\mu}\). 
From Lemma 2.3 we obtain \[\alpha\lambda+\beta\mu+\bar{\alpha}\bar{\lambda}+\bar{\beta}\bar{\mu}=-\sum_ {k=1}^{\infty}\frac{1}{k}\tau\big{(}(1-g^{*}(\lambda,\mu)g(\lambda,\mu))^{k} \big{)}\] for all \(\lambda,\mu\) such that \(|\lambda|,|\mu|<r\) for suitable \(r>0\). The series expansion of \(g(\lambda,\mu)\) in (2.1) does not contain any powers of \(\lambda\) or \(\mu\) of first order, hence the series expansion in (3.1) cannot either. This entails that both \(\alpha\) and \(\beta\) are equal to zero. It now follows from (2.2) that \(\tau\big{(}\log(g^{*}(\lambda,\mu)g(\lambda,\mu)\big{)}{=0}\). We now have all the tools to prove our main theorem by adapting the arguments in Theorem 1.12 of [2] to our situation. Proof of the Theorem.: Set \(f(\lambda,\mu)=\sum_{k=1}^{\infty}\frac{1}{k}\tau\big{(}(1-g^{*}(\lambda,\mu)g( \lambda,\mu))^{k}\big{)}\) for all \(\lambda,\mu\) such that \(|\lambda|,|\mu|<r\) for suitable \(r>0\) (given by Lemma 2.3). By Lemma 2.4, \(f=0\) and thus \[\frac{\partial^{2}}{\partial\lambda\partial\mu}f(0,0)=0=\frac{\partial^{3}}{ \partial\lambda^{2}\partial\mu}f(0,0).\] Using these identities after substituting in the series expansion (2.1) into the log-series we find \[\tau\big{(}T(ab)-TaTb\big{)}=0 \tag{3.3}\] and \[\tau\big{(}T(a^{2}b)+(Ta)^{2}Tb+Tb(Ta)^{2}-T(a^{2})Tb-2\,T(ab)Ta\big{)}=0 \tag{3.4}\] for all \(a,b\in B\). From (3.3) we obtain \[\tau(T(a^{2}b))=\tau(T(a^{2})Tb)\] and \[\tau(T(a^{2}b))=\tau(T(a(ab)))=\tau(TaT(ab))\] so that (3.4) reduces to \[\tau((Ta)^{2}Tb)=\tau(TaT(ab)),\] using the trace property. It follows that \[\tau((Ta)^{2}Tb)=\tau(T(a^{2})Tb).\] Since \(T\) is surjective we may choose \(b\in B\) such that \(Tb=((Ta)^{2}-T(a^{2}))^{*}\) wherefore the last identity yields, for each \(a\in B\), that \[\tau\big{(}((Ta)^{2}-T(a^{2}))((Ta)^{2}-T(a^{2}))^{*}\big{)}=0.\] The faithfulness of \(\tau\) implies that \(T\) is a Jordan homomorphism. ## 4. Conclusions In this section, we collect together some consequences and sharpening of our main theorem. We also relate it to open problems of a similar nature. Suppose \(T\) is a surjective linear mapping between two semisimple unital Banach algebras which preserves the spectrum of each element. Then \(T\) is injective (as explained in the Introduction) and \(T1=1\). The latter follows, for example, from \[\sigma((T1-1)+Tx)=\sigma(T1+Tx)-1=\sigma(1+x)-1=\sigma(x)=\sigma(Tx)\] and the surjectivity of \(T\) which entails that \(\sigma((T1-1)+y)=\sigma(y)\) for all \(y\) in the codomain. Thus, by Zemanek's characterisation of the radical, \(T1-1=0\). As a result, we have a symmetric situation and can apply the Theorem to either \(T\) or its inverse to obtain the following consequence. **Corollary 4.1**.: _Let \(T\) be a surjective spectrum-preserving linear mapping between two semisimple unital Banach algebras. If either of them is a unital C*-algebra equipped with a faithful tracial state then \(T\) is a Jordan isomorphism._ This is another contribution to a longstanding, still open problem by Kaplansky who asked in 1970 whether the above statement holds without any further assumptions on the Banach algebras. For further references, see [2] and [10]. All the steps in the proof of the Theorem but the very last one can be performed for each individual tracial state on a unital \(C\)*-algebra. 
Therefore, the assumption can be relaxed to the existence of a faithful family of tracial states, that is, a family \(\{\tau_{i}\mid i\in I\}\) of tracial states \(\tau_{i}\) such that \(\tau_{i}(a^{*}a)=0\) for all \(i\in I\) implies \(a=0\). In particular, since any tracial state on a simple unital C*-algebra is faithful we obtain the following result. **Corollary 4.2**.: _Let \(T\colon B\to A\) be a surjective unital invertibility-preserving linear mapping into a simple unital C*-algebra \(A\) which carries a tracial state. Then \(T\) is a Jordan homomorphism._ **Remark 4.3**.: Our terminology of a "finite"C*-algebra is not quite standard. In [6, III.1.3.1], a unital \(C\)*-algebra \(A\) is called _finite_ if the identity of \(A\) is a finite projection; that is, there is no proper subprojection which is Murray-von Neumann equivalent to \(1\). Every unital C*-algebra with a faithful tracial state is finite in this sense but the converse fails in general (though it holds for stably finite exact \(C\)*-algebras). We prefer here a definition that does not make reference to any projections. We can also strengthen our main theorem in a different direction. Let \(G_{1}(B)\) denote the _principal component of_\(G(B)\), where \(B\) is a unital Banach algebra. It is known, see, e.g., [1, Theorem 3.3.7], that \(G_{1}(B)=\{e^{x_{1}}\cdots e^{x_{n}}\mid x_{i}\in B,\,n\in\mathbb{N}\}\). The associated _exponential spectrum_ of \(x\in B\) is \[\sigma_{\varepsilon}(x)=\{\lambda\in\mathbb{C}\mid\lambda-x\notin G_{1}(B)\}.\] In certain situations it is more natural and expedient to consider the exponential spectrum instead of the smaller spectrum, see, e.g., [1, Theorem 3.3.8]. From the proof of our main result we see that it suffices that the mapping \(T\) sends the product of any two exponentials in \(B\) onto an invertible element in \(A\). This gives the following corollary. **Corollary 4.4**.: _Let \(B\) be a unital complex Banach algebra and let \(A\) be a unital finite C*-algebra. Let \(T\colon B\to A\) be a surjective unital linear mapping such that \(TG_{1}(B)\subseteq G(A)\). Then \(T\) is a Jordan homomorphism._ A _spectral isometry_ between two Banach algebras \(A\) and \(B\) is a linear mapping \(S\) such that \(\rho(Sx)=\rho(x)\) for all \(x\in A\). Clearly, every spectrum-preserving mapping is a spectral isometry and so is every Jordan isomorphism. A conjecture related to Kaplansky's problem mentioned above states that every unital surjective spectral isometry between two C*-algebras is a Jordan isomorphism. This conjecture has been confirmed in many cases, see, e.g., [13, 14], but is open in all generality. Notably it was verified in [15] if \(A\) is a unital C*-algebra of real rank zero and without tracial states. The above Corollary 4.1 is thus a step forward in the direction of confirming the general conjecture. **Acknowledgements.** The research for this paper was completed while the second-named author was visiting Queen's University Belfast. He would like to thank both Professor Martin Mathieu and the Mathematical Sciences Research Centre at Queen's University Belfast for their hospitality, and the National Research Foundation of South Africa for their financial support (NRF Grant Number: 129692).
2310.20699
Bayesian Multistate Bennett Acceptance Ratio Methods
The multistate Bennett acceptance ratio (MBAR) method is a prevalent approach for computing free energies of thermodynamic states. In this work, we introduce BayesMBAR, a Bayesian generalization of the MBAR method. By integrating configurations sampled from thermodynamic states with a prior distribution, BayesMBAR computes a posterior distribution of free energies. Using the posterior distribution, we derive free energy estimations and compute their associated uncertainties. Notably, when a uniform prior distribution is used, BayesMBAR recovers the MBAR's result but provides more accurate uncertainty estimates. Additionally, when prior knowledge about free energies is available, BayesMBAR can incorporate this information into the estimation procedure by using non-uniform prior distributions. As an example, we show that, by incorporating the prior knowledge about the smoothness of free energy surfaces, BayesMBAR provides more accurate estimates than the MBAR method. Given MBAR's widespread use in free energy calculations, we anticipate BayesMBAR to be an essential tool in various applications of free energy calculations.
Xinqiang Ding
2023-10-31T17:57:58Z
http://arxiv.org/abs/2310.20699v3
# Bayesian Multistate Bennett Acceptance Ratio Methods ###### Abstract The multistate Bennett acceptance ratio (MBAR) method is a prevalent approach for computing free energies of thermodynamic states. In this work, we introduce BayesMBAR, a Bayesian generalization of the MBAR method. By integrating configurations sampled from thermodynamic states with a prior distribution, BayesMBAR computes a posterior distribution of free energies. Using the posterior distribution, we derive free energy estimations and compute their associated uncertainties. Notably, when a uniform prior distribution is used, BayesMBAR recovers the MBAR's result but provides more accurate uncertainty estimates. Additionally, when prior knowledge about free energies is available, BayesMBAR can incorporate this information into the estimation procedure by using non-uniform prior distributions. As an example, we show that, by incorporating the prior knowledge about the smoothness of free energy surfaces, BayesMBAR provides more accurate estimates than the MBAR method. Given MBAR's widespread use in free energy calculations, we anticipate BayesMBAR to be an essential tool in various applications of free energy calculations. Introduction Computing free energies of thermodynamic states is a central problem in computational chemistry and physics. It has wide-ranging applications including computing protein-ligand binding affinities [1], predicting molecular solubilities [2], and estimating phase equilibria, among other tasks. For states whose free energies are not analytically tractable, their free energies are often estimated using numerical methods [3]. These methods typically involve sampling configurations from states of interest and subsequently computing their free energies based on sampled configurations. In this work we focus on the second step of estimating free energies, assuming that equilibrium configurations have been sampled using Monte Carlo sampling or molecular dynamics. The multistate Bennett acceptance ratio (MBAR) method [4, 5, 6], is a common technique for estimating free energies given sampled configurations. This method is equivalent to the unbinned weighted histogram analysis method (UWHAM) [5, 7]. For the purpose of this study, we refer to this method as MBAR. The MBAR method not only offers an estimate of free energies but also provides the statistical uncertainty associated with the estimate. In situations where a large number of configurations are available, the MBAR estimator is unbiased and has the smallest variance among estimators reliant on sampled configurations [5, 6]. However, properties of the MBAR estimator and their associated uncertainty estimate remain largely unexplored when the number of configurations is small. Furthermore, in such scenarios, it becomes desirable to incorporate prior knowledge into the estimation procedure. A systematic approach of integrating prior knowledge into an estimation procedure is Bayesian inference [8]. Bayesian inference treats unknown quantities (free energies in this case) as random variables and incorporates prior knowledge into the estimation procedure by employing prior distributions and the Bayes's theorem. In terms of free energy estimation, prior knowledge could come from previous simulations, experiments, or physical knowledge of a system. A common instance of physical prior knowledge on free energies is that free energy surfaces along a collective coordinate are usually smooth. 
Combining prior knowledge with observed data (configurations sampled from thermodynamic states), Bayesian inference computes the posterior distribution of the unknown quantities. The posterior distribution provides both estimates of the unknown quantities and the uncertainty of the estimates. Estimating free energies using Bayesian inference has been investigated in multiple studies. For instance, Stecher et al. [9] used a Gaussian process as the prior distribution over smooth free energy surfaces. The resulting posterior distribution, given configurations from umbrella sampling, was utilized to estimate free energy surfaces and associated uncertainty. Shirts et al. [10] parameterized free energy surfaces using splines and constructed prior distributions using a Gaussian prior on spline coefficients. Unlike these studies that primarily focused on estimating free energy surfaces from biased simulations, the works of Habeck [11, 12], Ferguson [13], and Maragakis et al. [14] were aimed at estimating densities of states and free energy differences using Bayesian inference. Methods developed in these studies are direct Bayesian generalizations of the weighted histogram analysis method (WHAM) and the Bennett acceptance ratio (BAR) method [15]. This work focuses on improving the accuracy of estimating free energies of discrete thermodynamic states when the number of sampled configurations is small. For this purpose, we developed a Bayesian generalization of the MBAR method, which we term BayesMBAR. With several benchmark examples, we show that, when the number of configurations is small, BayesMBAR provides not only superior uncertainty estimates compared to MBAR but also more accurate estimates of free energies by incorporating prior knowledge into the estimation procedure. ## 2 Methods The MBAR method is commonly understood as a set of self-consistent equations, which is not amenable to the development of its Bayesian generalization. To develop a Bayesian generalization of MBAR, we first emphasize the probabilistic nature of the MBAR method. Although there are multiple statistical models from which the MBAR method can be derived, we build upon the reverse logistic regression model,[4] which treats free energies as parameters and provides a likelihood for inference. To convert the reverse logistic regression model into a Bayesian model, we treat free energies as random variables and place a prior distribution on them. Then the posterior distribution of free energies is computed using the Bayes's theorem. Samples from the posterior distribution are efficiently generated using Hamiltonian Monte Carlo (HMC) methods.[16, 17] These samples are used to estimate free energies and quantify the uncertainty of the estimate. Hyperparameters of the prior distribution are automatically optimized by maximizing the marginal likelihood of data (Bayesian evidence). We present the details of BayesMBAR in the following sections. ### The reverse logistic regression model of MBAR Computing free energies of thermodynamic states is closely related to computing normalizing constants of Bayesian models. Multiple methods have been developed in statistics for estimating normalizing constants and these methods are directly applicable for estimating free energies. Here we focus on the reverse logistic regression method proposed by Geyer[4] and show that the solution of this method is equivalent to the MBAR method. 
Let us assume that we aim to calculate free energies of \(m\) thermodynamic states (up to an additive constant) by sampling their configurations. Let \(u_{i}(x),i=1,...,m\) be the reduced potential energy functions of the \(m\) states. The free energy of the \(i\)th state is defined as \[F_{i}=-\log\int_{\Gamma}e^{-u_{i}(x)}\mathrm{d}x, \tag{1}\] where \(\Gamma\) is the configuration space. For the \(i\)th state, \(n_{i}\) uncorrelated configurations, \(\{x_{ik},k=1,...,n_{i}\}\), are sampled from its Boltzmann distribution \(p_{i}(x;F_{i})=\exp(-[u_{i}(x)-F_{i}])\). To estimate free energies, Geyer[4] proposed the following retrospective formulation. This formulation treats indices of states in an unconventional manner. Let us use \(y_{ik}\) to denote the index of the state from which configuration \(x_{ik}\) is sampled. Apparently, \(y_{ik}=i\) for all \(i\) and \(k\). Although indices of states for sampled configurations are determined in the sampling setup, they are treated as a multinomial distributed random variable with parameters \(\pi=(\pi_{1},...,\pi_{m})\). Because \(n_{i}\) configurations are sampled from state \(i\), the maximum likelihood estimate of \(\pi_{i}\) is \(\hat{\pi}_{i}=n_{i}/n\), where \(n=\sum_{i=1}^{m}n_{i}\). The concatenation of state indices and configurations, \((y,x)\), is viewed as samples from the joint distribution of \(p(y,x)\), which is defined as \[p(y=i,x;F_{i},\log\pi_{i}) =P(y=i)\cdot p(x|y=i;F_{i}) \tag{2}\] \[=e^{-[U_{i}(x)-F_{i}-\log\pi_{i}]}, \tag{3}\] for \(i\in\{1,...,m\}\). Here \(p(y,x;F_{i},\log\pi_{i})\) means that it is a distribution of the random variable \((y,x)\) with parameters \(F_{i}\) and \(\log\pi_{i}\). We will use such notation of separating parameters from random variables with a semicolon henceforth. Following a retrospective argument, the reverse logistic regression method estimates the free energies by asking the following question. Given that a configuration \(x\) is observed, what is the probability that it is sampled from state \(y=i\) rather than other states? Using Bayes's theorem, we can compute this retrospective conditional probability as \[P(y=i|x;F,\log\pi)=\frac{p(y=i,x;F_{i},\log\pi_{i})}{\sum_{j=1}^{m}p(y=j,x;F_{ j},\log\pi_{j})}=\frac{e^{-[U_{i}(x)-F_{i}-\log\pi_{i}]}}{\sum_{j=1}^{m}e^{-[U_{j} (x)-F_{j}-\log\pi_{j}]}}, \tag{4}\] where \(F=(F_{1},...,F_{m})\) and \(\log\pi=(\log\pi_{1},...,\log\pi_{m})\). The free energies are estimated by maximizing the product of the retrospective conditional probabilities of all configurations, which is equivalent to maximizing the log-likelihood of \[\ell(F,\log\pi) =\sum_{i=1}^{m}\sum_{k=1}^{n_{i}}\log P(y_{ik}=i|x_{ik};F,\log\pi)\] \[=\sum_{i=1}^{m}\sum_{k=1}^{n_{i}}\Big{[}-[U_{i}(x_{ik})-F_{i}- \log\pi_{i}]-\log\sum_{j=1}^{m}e^{-[U_{j}(x_{ik})-F_{j}-\log\pi_{j}]}\Big{]}. \tag{5}\] The log-likelihood function \(\ell(F,\log\pi)\) in Eq. 5 depends on \(F\) and \(\log\pi\) only through their sum \(\phi=F+\log\pi\), so \(F\) and \(\log\pi\) are not separately estimable from maximizing the log-likelihood. The solution is to substitute \(\log\pi_{i}\) with the empirical estimate \(\log\hat{\pi}_{i}=\log(n_{i}/n)\). Then setting the derivative of \(\partial\ell/\partial F\) to zero, we obtain \[\hat{F}_{r}=-\log\sum_{i=1}^{m}\sum_{k=1}^{n_{i}}\frac{e^{-u_{r}(x_{ik})}}{ \sum_{j=1}^{m}n_{j}e^{-[u_{j}(x_{ik})-\hat{F}_{j}]}}. \tag{6}\] for \(r=1,...,m\). \(\hat{F}=(\hat{F}_{1},...,\hat{F}_{m})\) is the solution that maximizes \(\ell(F,\log\hat{\pi})\). Eq. 
6 is identical to the MBAR equation and reduces to the BAR equation [15] when \(m=2\). The technique described above is termed "reverse logistic regression" based on two primary insights. First, the log-likelihood in equation 5 bears resemblance to that found in multi-class logistic regression. Second, the primary goal of this method is to estimate \(F\), the intercept term. This differs from traditional logistic regression, where the aim is to determine regression coefficients and predict the response variable \(y\). The uncertainty of the estimate \(\hat{F}\) is computed using asymptotic analysis of the log-likelihood function \(\ell(F,\log\pi)\) in Eq. 5. Because the log-likelihood function \(\ell(F,\log\pi)\) depends on \(F\) and \(\log\pi\) only through their sum \(\phi=F+\log\pi\), the observed Fisher information matrix computed using the log-likelihood function can only be used to compute the asymptotic covariance of the sum \(\phi\). The observed Fisher information matrix at \(\hat{\phi}=\hat{F}+\log\hat{\pi}\) is \[\mathcal{J}_{\phi}=\sum_{i=1}^{m}\sum_{k=1}^{n_{i}}(\text{diag}(p_{ik})-p_{ik }p_{ik}^{\top}), \tag{7}\] where \(p_{ik}\) is the column vector of \((p(y_{ik}=1|x_{ik};\hat{\phi}),...,p(y_{ik}=m|x_{ik};\hat{\phi}))\), and \(\text{diag}(p_{ik})\) is the diagonal matrix with \(p_{ik}\) as its diagonal elements. The asymptotic covariance matrix of \(\hat{\phi}\) is the Moore-Penrose pseudo-inverse of the observed Fisher information matrix, i.e., \(\text{cov}(\hat{\phi})=\mathcal{J}_{\phi}^{-}\). To compute the asymptotic covariance matrix of \(\hat{F}\), we assume that \(\hat{\phi}\) and \(\log\hat{\pi}\) are asymptotically independent. Then the asymptotic covariance matrix of \(\hat{F}\) can be computed as \[\text{cov}(\hat{F}) =\text{cov}(\hat{\phi})-\text{cov}(\log\hat{\pi})\] \[=\mathcal{J}_{\phi}^{-}-\text{diag}(1/(n\hat{\pi}))+\mathbf{11}^{ \top}/n, \tag{8}\] where \(\mathbf{1}\) is a column vector of \(m\) ones. The asymptotic covariance matrix in Eq. 8 is the same as that derived in Ref. [5] and is commonly used in the MBAR method [6]. With the asymptotic covariance matrix of \(\hat{F}\), we can compute the asymptotic variance of their differences using the identity \(\text{var}(\hat{F}_{r}-\hat{F}_{s})=\text{cov}(\hat{F}_{r},\hat{F}_{r})+\text{ cov}(\hat{F}_{s},\hat{F}_{s})-2\cdot\text{cov}(\hat{F}_{r},\hat{F}_{s})\), where \(\text{cov}(\hat{F}_{r},\hat{F}_{s})\) is the \((r,s)\)th element of the matrix \(\text{cov}(\hat{F})\). ### Bayesian MBAR As shown above, the reverse logistic regression model formulates MBAR as a statistical model. It provides a likelihood function (Eq. 5) for computing the MBAR estimate \(\hat{F}\) and the associated asymptotic covariance. Based on this formulation, we developed BayesMBAR by turning the reverse logistic regression into a Bayesian model. In BayesMBAR, we treat \(F\) as a random variable instead of a parameter and place a prior distribution on \(F\). The posterior distribution of \(F\) is then used to estimate \(F\). Let us represent the prior distribution of \(F\) as \(p(F;\theta)\), where \(\theta\) is the parameters of the prior distribution and is often called hyperparameters. Borrowing from the reverse logistic regression, we use the retrospective conditional probability in Eq. 4 as the likelihood function, i.e., \(p(y|x,F)=p(y|x;F,\log\pi)\). We note that \(F\) is treated as a random variable in \(p(y|x,F)\) whereas it is a parameter in \(p(y|x;F,\log\pi)\). 
The \(\log\pi\) term in the likelihood function is substituted with the maximum likelihood estimate \(\log\hat{\pi}\). With these definitions, the posterior distribution of \(F\) given sam pled configurations and state index is \[p(F|Y,X) =\frac{p(Y|F,X)p(F;\theta)}{\int p(Y|F,X)p(F;\theta)dF}\] \[\propto p(F;\theta)\prod_{i=1}^{m}\prod_{k=1}^{n_{i}}p(y_{ik}|x_{ik};F), \tag{9}\] where \(Y=\{y_{ik}:i=1,...,m;k=1,...,n_{i}\}\) and \(X=\{x_{ik}:i=1,...,m;k=1,...,n_{i}\}\). Using the posterior distribution in Eq. 9, we can compute various quantities of interest such as the posterior mode and the posterior mean, both of which can serve as point estimates of \(\hat{F}\). In addition, we can use the posterior covariance matrix as an estimate of the uncertainty for \(\hat{F}\). However, to carry out these calculations, we need to address the following questions that commonly arise in Bayesian inference. ### Choosing the prior distribution To fully specify the BayesMBAR model, we need to choose a prior distribution for \(F\). We could use information about \(F\) from previous simulations or experiments to construct the prior distribution if such information is available. For example, the prior distribution of protein-ligand binding free energies could be constructed using free energies computed with fast but less accurate methods such as docking. The information could also come from the binding free energies of similar protein-ligand systems. Turning such information into a prior distribution will depend on domain experts' experience and likely vary from case to case. In this work, we focus on scenarios where such information is not available. In this scenario, we propose to use two types of distributions as the prior: the uniform distribution and the Gaussian distribution. **Using uniform distributions as the prior.** As the MBAR method has proven to be a highly effective method for estimating free energies in many applications, a conservative strategy for choosing the prior distribution is to minimize the deviation of BayesMBAR from MBAR. Such a strategy leads to using the uniform distribution as the prior distribution, because it makes the maximum a posteriori probability (MAP) estimate of BayesMBAR the same as the MBAR estimate. Specifically, if we set the prior distribution of \(F\) to be the uniform distribution, i.e., \(p(F;\theta)\propto\) constant, the posterior distribution of \(F\) in Eq. 9 becomes the same as the likelihood function. Therefore, maximizing the posterior distribution of \(F\) is equivalent to maximizing the log-likelihood function in Eq. 5. Besides recovering the MBAR estimate with its MAP estimate, BayesMBAR with a uniform prior distribution provides two additional advantages. First, in addition to the MAP estimate, BayesMBAR also offers the posterior mean as an alternative point estimate of \(F\). Second, BayesMBAR produces a posterior distribution of \(F\), which can be used to estimate the uncertainty of the estimate. As shown in the Result sections, the uncertainty estimate from BayesMBAR is more accurate than that from MBAR when the number of configurations is small. **Using Gaussian distributions as the prior.** In many applications, we are interested in computing free energies along collective coordinates such as distances, angles or alchemical parameters. In such cases, we often have the prior knowledge that the free energy surface is a smooth function \(F(\lambda)\) of the collective coordinate, \(\lambda\). 
A widely used approach to encode such knowledge into Bayesian inference is to use a Gaussian process [18] as the prior distribution. A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. A Gaussian process is fully specified by its mean function \(\mu(\lambda)\) and covariance function \(k(\lambda,\lambda^{\prime})\). The value of the covariance function \(k(\lambda,\lambda^{\prime})\) is the covariance between \(F(\lambda)\) and \(F(\lambda^{\prime})\). The covariance function is often designed to encode the smoothness of the function. Specifically, the covariance \(k(\lambda,\lambda^{\prime})\) between \(F(\lambda)\) and \(F(\lambda^{\prime})\) increases as \(\lambda\) and \(\lambda^{\prime}\) become closer. When the mean function is smooth and a covariance function such as the squared exponential covariance function is used, the Gaussian process is a probability distribution of smooth functions. In BayesMBAR we focus on estimating free energies at discrete values of the collective coordinate, \((\lambda_{1},...,\lambda_{m})\), instead of the whole free energy surface. Projecting the Gaussian process over free energy surfaces onto discrete values of \(\lambda\), we obtain as the prior distribution of \(F=(F(\lambda_{1}),...,F(\lambda_{m}))\) a multivariate Gaussian distribution with the mean vector \(\mu=(\mu(\lambda_{1}),...,\mu(\lambda_{m}))\) and the covariance matrix \(\Sigma\). The \((i,j)\)th element of \(\Sigma\) is computed as \(\Sigma_{ij}=k(\lambda_{i},\lambda_{j})\) and represents the covariance between \(F(\lambda_{i})\) and \(F(\lambda_{j})\). As in many applications of Gaussian processes, we set the mean function to be a constant function, i.e., \(\mu=(c,...,c)\), where \(c\) is a hyperparameter to be optimized. The choice of the covariance function is a key ingredient of constructing the prior distribution, as it encodes our assumption about the free energy surface's smoothness [18]. Several well-studied covariance functions are suitable for use in BayesMBAR. In this study we use the squared exponential covariance function as an example, noting that other types of covariance function can be used as well. The squared exponential covariance function is defined as \[k_{\rm SE}(\lambda,\lambda^{\prime})=\sigma^{2}\cdot\exp\Big{(}-\frac{r^{2}}{2 l^{2}}\Big{)}, \tag{10}\] where \(r=|\lambda-\lambda^{\prime}|\) and the variance scale \(\sigma\) and the length scale \(l\) are hyperparameters to be optimized. Every function \(F(\lambda)\) from such Gaussian processes has infinitely many derivatives and is very smooth. The hyperparameters \(\sigma\) and \(l\) control the variance and the length scale of the function \(F(\lambda)\), respectively. With the mean function and the covariance function defined, the prior distribution of \(F\) is fully specified as a multivariate Gaussian distribution of \[p(F;\theta)=\frac{1}{(2\pi)^{m/2}|\Sigma_{\theta}|^{1/2}}\exp(-\frac{1}{2}(F- \mu_{\theta})^{T}\Sigma_{\theta}^{-1}(F-\mu_{\theta})), \tag{11}\] where \(\mu_{\theta}\) and \(\Sigma_{\theta}\) are the mean vector and the covariance matrix, respectively. They depend on the hyperparameters \(\theta=(c,\sigma,l)\) that is optimized by maximizing the Bayesian evidence, as described in following sections. ### Computing posterior statistics. With the prior distribution of \(F\) defined as above, the posterior distribution defined in Eq. 9 contains rich information about \(F\). 
Specifically, the MAP or the posterior mean can be used as point estimates of \(F\) and the posterior covariance matrix can be used to compute the uncertainty of the estimate. **Computing the MAP estimate.** The MAP estimate of \(F\) is the value that maximizes the posterior distribution density, i.e., \(\hat{F}=\operatorname*{arg\,max}_{F}\log p(F|Y,X)\). When the prior distribution is chosen to be either uniform distributions or the Gaussian distributions, \(\log p(F|Y,X)\) is a concave function of \(F\). This means that the MAP estimate is the unique global maximum of the posterior distribution density and can be efficiently computed using standard optimization algorithms. In BayesMBAR, we implemented the L-BFGS-B algorithm [19] and the Newton's method to compute the MAP estimate. **Computing the mean and the covariance matrix of the posterior distribution.** Computing the posterior mean and the covariance matrix is more challenging than computing the MAP estimate. It involves computing an integral with respect to the posterior distribution density. When there are only two states, we compute the posterior mean and the covariance matrix by numerically integration. When there are more than two states, numerical integration is not feasible. In this case, we estimate the posterior mean and covariance matrix by sampling from the posterior distribution using the No-U-Turn Sampler (NUTS) [17]. The NUTS sampler is a variant of Hamiltonian Monte Carlo (HMC) methods [16] and has the advantage of automatically tuning the step size and the number of steps. The NUTS sampler has been shown to be highly efficient in sampling from high-dimensional distributions for Bayesian inference problems. In BayesMBAR, an extra factor that further improves the efficiency of the NUTS sampler is that the posterior distribution density is a concave function, which means that the sampler does not need to cross low density (high energy) regions during sampling. ### Optimizing hyperparameters When Gaussian distributions with a specific covariance function are used as the prior distribution of \(F\) (Eq. 11), we need to make decisions about the values of hyperparameters. Such decisions are referred to as model selection problems in Bayesian inference and several principles have been proposed and used in practice. In BayesMBAR, we use the Bayesian model selection principle, which is to choose the model that maximizes the marginal likelihood of the data. The marginal likelihood of the data is also called the Bayesian evidence and is defined as \[p(Y|X;\theta)=\int p(Y|F,X)p(F;\theta)dF. \tag{12}\] Because the Bayesian evidence is a multidimensional integral, computing it with numerical integration is not feasible. In BayesMBAR, we use ideas from variational inference [20] and Monte Carlo integration [5] to approximate it and optimize the hyperparameters. We introduce a variational distribution \(q(F)\) and use the evidence lower bound (ELBO) of the marginal likelihood as the objective function for optimizing the hyperparameters. Specifically, the ELBO is defined as \[\mathcal{L}(q,\theta) =\int q(F)\log\frac{p(Y|F,X)p(F;\theta)}{q(F)}dF\] \[=\underset{F\sim q}{\mathbb{E}}\big{[}\log p(Y|F,X)+\log p(F; \theta)-\log q(F)\big{]}. \tag{13}\] It is straightforward to show that \(\mathcal{L}(q,\theta)=\log p(Y|X;\theta)-D_{KL}(q||p(F|Y,X;\theta))\leq\log p( Y|X;\theta)\), where \(D_{KL}(q||p(F|Y,X;\theta))\) is the Kullback-Leibler divergence between \(q(F)\) and \(p(F|Y,X;\theta)\). 
Therefore the ELBO is a lower bound of the log marginal likelihood of data and the gap between them is the Kullback-Leibler divergence between \(q(F)\) and \(p(F|Y,X;\theta)\). This suggests that, to make the ELBO a good approximation of the log marginal likelihood, we should choose \(q(F)\) that is close to \(p(F|Y,X;\theta)\). Although we could in principle use \(p(F|Y,X;\theta)\) as the variational distribution \(q(F)\) (then the ELBO would be equal to the log marginal likelihood), it is not practical because computing the gradient of the ELBO with respect to the hyperparameters would require sampling from \(p(F|Y,X;\theta)\) at every iteration of the optimization and is computationally too expensive. Instead we choose \(q(F)\) to be a Gaussian distribution to approximate the posterior distribution based on the following observations. The posterior distribution density \(p(F|Y,X;\theta)\) is equal to the product of the likelihood function \(p(Y|F,X)\) and the prior distribution \(p(F;\theta)\) up to a normalization constant. The likelihood term \(p(Y|F,X)\) is a log-concave function of \(F\) and does not depend on \(\theta\), so we can approximate it using a fixed Gaussian distribution \(\mathcal{N}(\mu_{0},\Sigma_{0})\), where the mean \(\mu_{0}\) and the covariance matrix \(\Sigma_{0}\) are computed by sampling \(F\) from \(p(Y|F,X)\) once. Because the prior distribution \(p(F;\theta)\) is also a Gaussian distribution, \(\mathcal{N}(\mu_{\theta},\Sigma_{\theta})\), multiplying the fixed Gaussian distribution \(\mathcal{N}(\mu_{0},\Sigma_{0})\) with the prior yields another Gaussian distribution \(\mathcal{N}(\mu_{q},\Sigma_{q})\), where \(\mu_{q}\) and \(\Sigma_{q}\) can be analytically computed as \[\mu_{q} =\Sigma_{q}\big{(}\Sigma_{0}^{-1}\mu_{0}+\Sigma_{\theta}^{-1}\mu _{\theta}\big{)} \tag{14}\] \[\Sigma_{q} =\big{(}\Sigma_{0}^{-1}+\Sigma_{\theta}^{-1}\big{)}^{-1}. \tag{15}\] Therefore we choose the proposal distribution \(q(F)\) to be the Gaussian distribution \(\mathcal{N}(\mu_{q},\Sigma_{q})\), where \(\mu_{q}\) and \(\Sigma_{q}\) are computed as above and depend on \(\theta\) analytically. We compute the ELBO and its gradient with respect to \(\theta\) using the reparameterization trick [21]. Specifically, we reparameterize the proposal distribution \(q(F)\) using \(F=\mu_{q}+\Sigma_{q}^{1/2}\epsilon\), where \(\epsilon\) is a random variable with the standard Gaussian distribution. The ELBO can then be written as \[\mathcal{L}(\theta)=\underset{\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})} {\mathbb{E}}\big{[}\log p(Y|\mu_{q}+\Sigma_{q}^{1/2}\epsilon,X)\big{]}-\mathrm{ D}_{\mathrm{KL}}(\mathcal{N}(\mu_{q},\Sigma_{q})||\mathcal{N}(\mu_{\theta}, \Sigma_{\theta})). \tag{16}\] The first term on the right hand side can be estimated by sampling \(\epsilon\) from the standard Gaussian distribution and evaluating the log-likelihood \(p(Y|\mu_{q}+\Sigma_{q}^{1/2}\epsilon,X)\). The second term can be computed analytically. The gradient of the ELBO with respect to \(\theta\) are computed using automatic differentiation [22]. ## 3 Results ### Computing the free energy difference between two harmonic oscillators. We first tested the performance of BayesMBAR by computing the free energy difference between two harmonic oscillators. In this case, because there are only two states, BayesMBAR reduces to a Bayesian generalization of the BAR method and we use BayesBAR to refer to it. 
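To make the two-state case concrete, here is a minimal sketch — not the paper's implementation — of the grid-based numerical integration mentioned in the Methods, written for the harmonic-oscillator pair defined next (\(k_1=25\), \(k_2=36\)). Because the likelihood in Eq. 5 is invariant to shifting all free energies by a constant, the sketch fixes \(F_1=0\) and places a flat prior on \(\Delta F\); all variable names and the grid bounds are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k1, k2 = 25.0, 36.0                    # force constants used in this benchmark
n1 = n2 = 18                           # one of the small sample sizes studied below
x1 = rng.normal(0.0, 1.0 / np.sqrt(k1), n1)   # samples from exp(-U1), a Gaussian
x2 = rng.normal(1.0, 1.0 / np.sqrt(k2), n2)   # samples from exp(-U2)

def u1(x):
    return 0.5 * k1 * x**2             # reduced potentials in units of kBT

def u2(x):
    return 0.5 * k2 * (x - 1.0)**2

x = np.concatenate([x1, x2])
y = np.concatenate([np.zeros(n1, dtype=int), np.ones(n2, dtype=int)])  # true state labels
log_pi = np.log(np.array([n1, n2]) / (n1 + n2))

def log_likelihood(dF):
    """Log of the retrospective likelihood (Eq. 5) with F = (0, dF)."""
    logits = np.stack([-u1(x) + 0.0 + log_pi[0],
                       -u2(x) + dF + log_pi[1]])          # shape (2, n1 + n2)
    return np.sum(logits[y, np.arange(x.size)] - np.logaddexp(logits[0], logits[1]))

# With a uniform prior the (unnormalized) posterior of dF equals the likelihood.
grid = np.linspace(-10.0, 10.0, 4001)
log_post = np.array([log_likelihood(dF) for dF in grid])
dx = grid[1] - grid[0]
post = np.exp(log_post - log_post.max())
post /= post.sum() * dx

map_estimate = grid[np.argmax(log_post)]                        # coincides with the BAR estimate
post_mean = np.sum(grid * post) * dx                            # posterior mean estimate
post_sd = np.sqrt(np.sum((grid - post_mean) ** 2 * post) * dx)  # reported uncertainty
exact = 0.5 * np.log(k2 / k1)                                   # analytic dF for reference
print(map_estimate, post_mean, post_sd, exact)
```

With only two states this one-dimensional quadrature is accurate up to grid resolution, which is consistent with the Methods, where quadrature is reserved for the two-state case and MCMC sampling for larger numbers of states.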
The two harmonic oscillators are defined by the potential energy functions \(U_{1}(x)=\frac{1}{2}k_{1}x^{2}\) and \(U_{2}(x)=\frac{1}{2}k_{2}(x-1)^{2}\), where \(k_{1}\) and \(k_{2}\) are the force constants and \(U_{1}\) and \(U_{2}\) are in the unit of \(k_{B}T\). The objective is to compute the free energy difference between them, i.e., \(\Delta F=F_{2}-F_{1}\) with \(F_{i}=-\log\int e^{-U_{i}(x)}\mathrm{d}x\) for \(i=1\) and \(2\). We first draw \(n_{1}\) and \(n_{2}\) samples from the Boltzmann distributions of \(U_{1}\) and \(U_{2}\), respectively. Then we use BayesBAR with the uniform prior distribution to estimate the free energy difference. To benchmark BayesBAR, we also computed the free energy difference using the BAR method and compared the results from both methods with the true value (Table 1). The force constants are set to \(k_{1}=25\) and \(k_{2}=36\). The numbers of samples, \(n_{1}\) and \(n_{2}\), are set equal and range from \(10\) to \(5000\). For each sample size, we repeated the calculation \(K=100\) times and computed the root mean squared error (RMSE), the bias, and the standard deviation (SD) of the estimates. The RMSE is computed as \(\sqrt{\sum_{k=1}^{K}(\Delta\hat{F}_{k}-\Delta F)^{2}/K}\), where \(\Delta\hat{F}_{k}\) is the estimate from the \(k\)th repeat and \(\Delta F\) is the true value. The bias is computed as \(\Delta\bar{F}-\Delta F\), where \(\Delta\bar{F}=\sum_{k=1}^{K}\Delta\hat{F}_{k}/K\), and the SD is computed as \(\sqrt{\sum_{k=1}^{K}(\Delta\hat{F}_{k}-\Delta\bar{F})^{2}/(K-1)}\). Because the uniform prior distribution is used, the MAP estimate of BayesBAR is identical to the BAR estimate. Besides the MAP estimate, BayesBAR also provides the posterior mean estimate, which is computed using numerical integration. Compared to the MAP estimate (the BAR estimate), the posterior mean estimate has a smaller RMSE. Decomposing the RMSE into bias and SD, we found that the posterior mean estimate has a larger bias but a smaller SD than the MAP estimate. The decrease in SD overcompensates the increase in bias for the posterior mean estimate, which leads to its smaller RMSE. Although the MAP estimate and the posterior mean estimate have different RMSEs, the difference is small and both estimates converge to the true value as the sample size increases. This suggests that both estimates can be used interchangeably in practice. Besides the MAP and the posterior mean estimate for \(\Delta F\), BayesBAR offers an estimate of the uncertainty (whose true value is in the SD column in Table 1) using the posterior standard deviation (the BayesBAR column in Table 1). For benchmarking, we also calculated the uncertainty estimate using asymptotic analysis, Bennett's method, and the bootstrap method. Because each repeat produces an uncertainty estimate, we used the average from all \(K\) repeats as the uncertainty estimate of each method, denoted as "Estimate of SD" in Table 1. When the number of configurations is small, the asymptotic analysis significantly overestimates the uncertainty, while both Bennett's method and the bootstrap method tend to underestimate the uncertainty.
Practically, overestimating uncertainty is favored over underestimating, as the former prompts further configuration collection, whereas the latter might \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \multirow{2}{*}{\(n_{1}(=n_{2})\)} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Bias} & \multicolumn{2}{c}{SD} & \multicolumn{4}{c}{Estimate of SD} \\ \cline{2-10} & MAP\({}^{a}\) & mean\({}^{b}\) & MAP & mean & MAP & mean & BayesBAR & asymptotic & Bennett’s & Bootstrap \\ \hline 10 & 2.45 & 2.40 & 0.85 & 0.90 & 2.29 & 2.23 & 4.08 & 39.24 & 1.10 & 1.53 \\ 13 & 2.53 & 2.47 & 0.87 & 0.92 & 2.38 & 2.29 & 3.55 & 19.69 & 1.11 & 1.47 \\ 18 & 1.92 & 1.84 & 0.16 & 0.22 & 1.91 & 1.83 & 3.09 & 11.58 & 1.09 & 1.31 \\ 28 & 1.72 & 1.65 & 0.44 & 0.48 & 1.66 & 1.58 & 2.58 & 7.06 & 1.07 & 1.14 \\ 48 & 1.27 & 1.19 & 0.16 & 0.19 & 1.26 & 1.17 & 1.90 & 2.92 & 1.07 & 1.11 \\ 99 & 1.34 & 1.23 & -0.19 & -0.11 & 1.33 & 1.22 & 1.38 & 1.64 & 0.98 & 0.96 \\ 304 & 0.83 & 0.79 & 0.00 & 0.03 & 0.83 & 0.79 & 0.80 & 0.81 & 0.74 & 0.70 \\ 5000 & 0.19 & 0.18 & -0.00 & -0.00 & 0.19 & 0.18 & 0.20 & 0.20 & 0.20 & 0.20 \\ \hline \multicolumn{10}{l}{\({}^{a}\) Maximum a posteriori probability (MAP) estimate of BayesBAR (equivalent to the BAR estimate); \({}^{b}\) Posterior mean estimate of BayesBAR.} \\ \end{tabular} \end{table} Table 1: The free energy difference between two harmonic oscillators (\(k_{1}=25,k_{2}=36\)). cause the user to stop sampling prematurely. Nevertheless, excessive overestimation isn't ideal either, as it might result in gathering an unnecessarily large number of configurations. Given these considerations, BayesBAR's uncertainty estimate overestimates the uncertainty modestly and thus is a better choice than the other methods. As the sample size increases, the uncertainty estimates from all methods converge to the true value. The asymptotic analysis tends to overestimate uncertainty much more than BayesBAR. This is because the asymptotic analysis approximates the posterior distribution of \(\Delta F\) with a Gaussian distribution centered around the MAP estimate. Such an approximation is generally accurate for a large number of configurations. However, with a smaller number of configurations, this approximation becomes imprecise, leading to considerable overestimation of uncertainty. Fig. 1 provides a visual comparison, contrasting the posterior distribution of \(\Delta F\) as determined by BayesBAR with the Gaussian approximation from the asymptotic analysis for an experiment where \(n_{1}=n_{2}=18\). ### Computing free energy differences among three harmonic oscillators. We next tested the performance of BayesMBAR on a multistate system. The system consists of three harmonic oscillators with the following unitless potential energy functions: \(U_{1}(x)=\frac{1}{2}k_{1}x^{2}\), \(U_{2}(x)=\frac{1}{2}k_{2}(x-1)^{2}\), and \(U_{3}(x)=\frac{1}{2}k_{3}(x-2)^{2}\), where \(k_{1}=16\), \(k_{2}=25\), and \(k_{3}=36\). The free energy differences among the three harmonic oscillators are analytically known. Similar to the two harmonic oscillator system, we first draw \(n\) samples form the Boltzmann distribution of each harmonic oscillator. We use BayesMBAR with the uniform prior to estimate the free energy differences by computing both the MAP estimate and the posterior mean estimate. The posterior mean estimate is computed by sampling from the posterior distribution using the NUTS sampler instead of numerical integration. 
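Since the multistate posterior is explored by sampling rather than quadrature, a self-contained sketch of this benchmark may be helpful. It is not the paper's code: the MAP under the uniform prior (i.e., the MBAR estimate) is obtained here by iterating Eq. 6 (the paper instead optimizes the log posterior directly with L-BFGS-B or Newton's method; the fixed point is the same), and the posterior mean and standard deviation come from a plain random-walk Metropolis sampler that merely stands in for the NUTS sampler used in the paper. The proposal step, chain length, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k = np.array([16.0, 25.0, 36.0])        # force constants of the three oscillators
mu = np.array([0.0, 1.0, 2.0])          # their centres
n = 18                                  # configurations drawn from each state

x = np.concatenate([rng.normal(mu[i], 1.0 / np.sqrt(k[i]), n) for i in range(3)])
u = 0.5 * k[:, None] * (x[None, :] - mu[:, None]) ** 2   # u[j, t] = u_j(x_t), shape (3, 3n)
counts = np.array([n, n, n])
log_pi = np.log(counts / counts.sum())
y = np.repeat(np.arange(3), n)          # state from which each configuration was drawn

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True)), axis=axis)

def log_posterior(F):
    """Log posterior (Eq. 9) with a uniform prior, i.e. the log likelihood of Eq. 5."""
    logits = -u + F[:, None] + log_pi[:, None]
    return np.sum(logits[y, np.arange(x.size)] - logsumexp(logits, axis=0))

def mbar_map(n_iter=2000):
    """MAP under the uniform prior (= MBAR estimate) via the self-consistent Eq. 6."""
    F = np.zeros(3)
    for _ in range(n_iter):
        log_den = logsumexp(np.log(counts)[:, None] - u + F[:, None], axis=0)
        F = -logsumexp(-u - log_den[None, :], axis=1)
        F -= F[0]                       # only free energy differences are defined
    return F

F_map = mbar_map()

# Posterior mean and SD by MCMC over (F2 - F1, F3 - F1); the paper uses NUTS instead.
cur = F_map[1:].copy()
cur_lp = log_posterior(np.concatenate([[0.0], cur]))
samples = []
for _ in range(20000):
    prop = cur + 0.25 * rng.normal(size=2)
    prop_lp = log_posterior(np.concatenate([[0.0], prop]))
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    samples.append(cur.copy())
samples = np.array(samples[2000:])      # discard burn-in

exact = 0.5 * np.log(k[1:] / k[0])      # analytic F2 - F1 and F3 - F1 for reference
print(F_map[1:], samples.mean(axis=0), samples.std(axis=0), exact)
```

In this toy setting the sampled mean and standard deviation play the roles of the posterior mean and BayesMBAR uncertainty columns reported in Table 2.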
Figure 2 shows the posterior distribution of the free energy differences (\(F_{2}-F_{1}\) and \(F_{3}-F_{1}\)) and a subset of samples drawn from the posterior distribution in one repeat of the calculation when \(n=18\). As shown in Figure 2(b) and 2(c), samples from the NUTS sampler decorrelate quickly and can efficiently traverse the posterior distribution. Figure 1: Probability densities of the posterior distribution (solid line) of \(\Delta F\) and the approximate Gaussian distribution (dashed line) used by the asymptotic analysis for the two harmonic oscillator system with \(n_{1}=n_{2}=18\). For benchmarking purposes, we conducted the calculation 100 times (\(K=100\)) for each sample size \(n\), and derived metrics including the RMSE, bias, and SD of the estimate (Table 2). Given the use of a uniform prior, BayesMBAR's MAP estimate is the same as the MBAR estimate. When contrasted with the MBAR estimate, the posterior mean estimate has lower SD but higher bias. When factoring in both SD and bias, the posterior mean estimate has a smaller RMSE compared to the MBAR estimate. The difference in RMSE between the two is minimal, and both estimates converge to the correct value as sample size grows. In terms of uncertainty, BayesMBAR offers a superior estimate compared to established techniques like asymptotic analysis or the Bootstrap method, especially with limited configuration sizes. Notably, BayesMBAR's uncertainty estimate avoids the underestimation seen with the Bootstrap method. Simultaneously, compared to the asymptotic analysis, BayesMBAR's uncertainty estimate has a more modest overestimation. Figure 2: Probability density and samples of the posterior distribution of \(F_{2}-F_{1}\) and \(F_{3}-F_{1}\) for the three harmonic oscillators with \(n=18\). (a) Contours are the logarithm of the posterior distribution density. Dots are a subset of samples drawn from the posterior distribution using the NUTS sampler. (b and c) The first 300 samples of \(F_{2}-F_{1}\) and \(F_{3}-F_{1}\) drawn from the posterior distribution using the NUTS sampler.
\begin{table} \begin{tabular}{c c c c c c c c c c} \multicolumn{8}{c}{\(F_{2}-F_{1}\)} \\ \hline \multirow{2}{*}{\(n\)} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Bias} & \multicolumn{2}{c}{SD} & \multicolumn{2}{c}{Estimate of SD} \\ \cline{2-9} & MAP & mean & MAP & mean & MAP & mean & BayesMBAR & asymptotic & Bootstrap \\ \hline 10 & 1.89 & 1.83 & 0.45 & 0.53 & 1.84 & 1.75 & 2.28 & 5.31 & 1.26 \\ 13 & 1.84 & 1.76 & 0.46 & 0.52 & 1.78 & 1.68 & 1.93 & 3.20 & 1.19 \\ 18 & 1.41 & 1.30 & -0.09 & 0.01 & 1.41 & 1.30 & 1.62 & 2.17 & 1.05 \\ 28 & 1.18 & 1.12 & 0.13 & 0.19 & 1.17 & 1.10 & 1.31 & 1.55 & 0.91 \\ 48 & 0.79 & 0.75 & -0.04 & 0.01 & 0.79 & 0.75 & 0.97 & 1.00 & 0.83 \\ 99 & 0.67 & 0.64 & -0.06 & -0.04 & 0.66 & 0.64 & 0.69 & 0.70 & 0.63 \\ 304 & 0.39 & 0.39 & -0.01 & -0.00 & 0.39 & 0.39 & 0.40 & 0.40 & 0.40 \\ 5000 & 0.09 & 0.09 & -0.00 & 0.00 & 0.09 & 0.09 & 0.10 & 0.10 & 0.10 \\ \hline \multicolumn{8}{c}{\(F_{3}-F_{1}\)} \\ \hline \multirow{2}{*}{\(n\)} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Bias} & \multicolumn{2}{c}{SD} & \multicolumn{2}{c}{Estimate of SD} \\ \cline{2-9} & MAP & mean & MAP & mean & MAP & mean & BayesMBAR & asymptotic & Bootstrap \\ \hline 10 & 2.93 & 2.85 & 0.89 & 1.02 & 2.79 & 2.66 & 4.63 & 28.27 & 2.02 \\ 13 & 3.26 & 3.18 & 1.25 & 1.35 & 3.01 & 2.87 & 4.16 & 20.74 & 1.86 \\ 18 & 2.53 & 2.40 & 0.12 & 0.27 & 2.53 & 2.39 & 3.39 & 10.12 & 1.85 \\ 28 & 2.28 & 2.20 & 0.62 & 0.73 & 2.20 & 2.08 & 2.87 & 6.35 & 1.56 \\ 48 & 1.73 & 1.64 & 0.16 & 0.26 & 1.72 & 1.62 & 2.26 & 3.91 & 1.37 \\ 99 & 1.53 & 1.43 & 0.36 & 0.42 & 1.49 & 1.37 & 1.58 & 1.84 & 1.21 \\ 304 & 0.99 & 0.95 & -0.00 & 0.03 & 0.99 & 0.95 & 0.89 & 0.91 & 0.81 \\ 5000 & 0.23 & 0.22 & 0.01 & 0.01 & 0.23 & 0.22 & 0.22 & 0.23 & 0.22 \\ \hline \end{tabular} \end{table} Table 2: Free energy differences among three harmonic oscillators (\(k_{1}=16,k_{2}=25,k_{3}=36\)). ### Computing the hydration free energy of phenol. We further tested the performance of BayesMBAR on a realistic system that involves collective variables. Specifically, we use BayesMBAR to compute the hydration free energy of phenol using an alchemical approach. In this approach, we modify the non-bonded interactions between phenol and water using an alchemical variable \(\lambda=(\lambda_{\text{elec}},\lambda_{\text{vdw}})\), where \(\lambda_{\text{elec}}\) and \(\lambda_{\text{vdw}}\) are alchemical variables for the electrostatic and the van der Waals interactions, respectively. The dependency of the non-bonded interactions on \(\lambda\) is defined as Eq. S1 in the Supporting Information. When \((\lambda_{\text{elec}},\lambda_{\text{vdw}})=(0,0)\), the non-bonded interactions between phenol and water are turned on and phenol is in the water phase. When \((\lambda_{\text{elec}},\lambda_{\text{vdw}})=(1,1)\), the non-bonded interactions are turned off and phenol is in the vacuum phase. The hydration free energy of phenol is equal to the free energy difference between the two states of \(\lambda=(0,0)\) and \(\lambda=(1,1)\). To compute the free energy difference, we introduce 7 intermediate states through which \(\lambda_{\text{elec}}\) and \(\lambda_{\text{vdw}}\) are gradually changed from \((0,0)\) to \((1,1)\). The values of \(\lambda_{\text{elec}}\) and \(\lambda_{\text{vdw}}\) for the intermediate states are included in the Supporting Information. We run _NPT_ molecular dynamics simulations for all states at 300 K and 1 atm. Each simulation is run for 2 ns with a time step of 2 fs and configurations are saved every 2 ps. 
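Before the estimates themselves, it may help to spell out what the estimator consumes here: the reduced potential of every one of the nine \(\lambda\) states evaluated on every saved frame. The sketch below is illustrative only; the zero arrays stand in for energies that would be re-evaluated at each \(\lambda\) with the simulation package (via Eq. S1) and for the saved box volumes, the unit conventions are our assumption, and including the \(pV\) term follows the usual constant-pressure convention for reduced potentials rather than an explicit statement in the text.

```python
import numpy as np

# Assumed unit conventions: energies in kJ/mol, volumes in nm^3, temperature in K.
kB = 0.008314462618                      # Boltzmann constant in kJ/(mol K)
beta = 1.0 / (kB * 300.0)                # 300 K, matching the simulations above
p = 101325.0 * 1e-27 * 6.02214076e23 / 1000.0   # 1 atm expressed in kJ/(mol nm^3)

def reduced_potential(U, V):
    """NPT reduced potential u = beta * (U + p * V); the pV term enters because the
    simulations are run at constant pressure."""
    return beta * (U + p * V)

# u[k, i, t]: reduced potential of lambda-state k evaluated on frame t saved from state i.
n_states, n_frames = 9, 1000             # 9 lambda states; 2 ns with one frame every 2 ps
U = np.zeros((n_states, n_states, n_frames))   # placeholder re-evaluated potential energies
V = np.zeros((n_states, n_frames))             # placeholder box volumes of the saved frames
u = reduced_potential(U, V[None, :, :])        # the array handed to (Bayes)MBAR
print(u.shape)                                 # (9, 9, 1000)
```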
We use BayesMBAR to compute the free energy differences with \(n\) configurations from each state. We repeated the calculation for \(K=100\) times. Because the ground truth hydration free energy is not known analytically, we use as the benchmark the MBAR estimate computed using all configurations sampled from all repeats, i.e., 100,000 configurations from each state. **Uniform prior.** We first tested the performance of BayesMBAR with the uniform prior using different numbers of configurations. Here the \(n\) configurations from each state are not randomly sampled from saved configurations during the 2 ns of simulation. Instead, we use the first \(n\) configurations to mimic the situation in production calculations where configurations are saved sequentially. The results are summarized in Table 3. Compared to the MAP estimate (the MBAR estimate), the posterior mean estimate has a smaller SD but larger bias, as observed in the previous harmonic oscillator systems. In terms of RMSE, the posterior mean estimate has a larger RMSE than the MAP estimate, which is different from that in the harmonic oscillator systems. We also compared the uncertainty estimate among BayesMBAR, asymptotic analysis, and the bootstrap method. The BayesMBAR estimate of the uncertainty is closer to the true value than the asymptotic analysis while not underestimating the uncertainty as the bootstrap method does when the number of configurations is small. In addition to the free energy difference between the two end states, we also compared the uncertainty estimates for free energies of all states. As shown in Figure 3, the uncertainty estimates from BayesMBAR are closer to the true uncertainty than the asymptotic analysis when the number of configurations is small (\(n=5\)). **Normal prior.** The free energy surface along the alchemical variable \(\lambda\) is expected to be smooth, so we can use a normal prior distribution in BayesMBAR to encode this prior knowledge. We use normal prior distributions with the squared exponential covariance function. The hyperparameters in the covariance functions and the mean parameter of the prior distribution are optimized by maximizing the Bayesian evidence. After optimizing the hyperparameters, we use the MAP estimator to estimate the free energy difference between the two end states and compare it to the MAP estimator with the uniform prior distribution (Table 4), which is identical to the MBAR estimator. By incorporating the prior knowledge of the smoothness of the free energy surface, the BayesMBAR estimator with a normal prior distribution has a smaller RMSE than the MBAR estimator, especially when the number of configurations is small. 
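The normal prior responsible for this improvement is inexpensive to construct. The sketch below is illustrative rather than the paper's implementation: it treats the nine states as points on a one-dimensional \(\lambda\) schedule (an assumption, since the calculation couples \(\lambda_{\text{elec}}\) and \(\lambda_{\text{vdw}}\)), and the hyperparameters \(c\), \(\sigma\), and \(l\) appear as placeholders that BayesMBAR would instead tune by maximizing the evidence lower bound of Eq. 16.

```python
import numpy as np

lam = np.linspace(0.0, 1.0, 9)   # assumed scalar progress coordinate for the nine states

def se_covariance(lam, sigma, length):
    """Squared-exponential covariance (Eq. 10) evaluated on the lambda schedule."""
    r = lam[:, None] - lam[None, :]
    return sigma**2 * np.exp(-0.5 * (r / length) ** 2)

def log_prior(F, c, sigma, length, jitter=1e-8):
    """Log density of the multivariate normal prior (Eq. 11)."""
    m = F.size
    cov = se_covariance(lam, sigma, length) + jitter * np.eye(m)   # jitter for stability
    chol = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(chol, F - c)
    return -0.5 * (alpha @ alpha) - np.log(np.diag(chol)).sum() - 0.5 * m * np.log(2 * np.pi)

# Adding this log prior to the log likelihood of Eq. 5 gives the log posterior of Eq. 9;
# the MAP and posterior samples are then obtained exactly as in the uniform-prior case.
print(log_prior(np.zeros(9), c=0.0, sigma=5.0, length=0.3))
```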
As the number of config \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \multirow{2}{*}{\(n_{1}(=n_{2})\)} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Bias} & \multicolumn{2}{c}{SD} & \multicolumn{2}{c}{Estimate of SD} \\ \cline{2-10} & MAP & mean & MAP & mean & MAP & mean & BayesMBAR & asymptotic & Bootstrap \\ \hline 5 & 3.07 & 3.19 & -1.52 & -1.82 & 2.67 & 2.61 & 3.05 & 5.16 & 2.33 \\ 7 & 2.46 & 2.52 & -0.67 & -0.96 & 2.37 & 2.33 & 2.46 & 3.08 & 2.13 \\ 12 & 1.95 & 1.96 & -0.35 & -0.56 & 1.91 & 1.87 & 1.89 & 2.09 & 1.69 \\ 25 & 1.34 & 1.32 & -0.06 & -0.21 & 1.34 & 1.30 & 1.28 & 1.30 & 1.22 \\ 75 & 0.79 & 0.79 & -0.05 & -0.10 & 0.79 & 0.78 & 0.73 & 0.73 & 0.72 \\ 1000 & 0.22 & 0.22 & -0.02 & -0.02 & 0.22 & 0.22 & 0.20 & 0.20 & 0.20 \\ \hline \end{tabular} \end{table} Table 3: Hydration free energy (in the unit of \(k_{b}T\)) of phenol computed using BayesMBAR with the uniform prior. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(n_{1}(=n_{2})\)} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Bias} & \multicolumn{2}{c}{SD} \\ \cline{2-7} & uniform & normal & uniform & normal & uniform & normal \\ \hline 5 & 3.07 & 2.00 & -1.52 & 0.25 & 2.67 & 1.98 \\ 7 & 2.46 & 1.95 & -0.67 & 0.46 & 2.37 & 1.89 \\ 12 & 1.95 & 1.49 & -0.35 & 0.45 & 1.91 & 1.42 \\ 25 & 1.34 & 1.26 & -0.06 & 0.43 & 1.34 & 1.19 \\ 75 & 0.79 & 0.80 & -0.05 & 0.08 & 0.79 & 0.80 \\ 1000 & 0.22 & 0.22 & -0.02 & -0.03 & 0.22 & 0.22 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of the performance of BayesMBAR with the uniform prior and the normal prior for computing the hydration free energy of phenol. Figure 3: Free energy estimates of all states for computing the hydration free energy of phenol. SD_F (Bayes) and SD_F (asymptotic) are the average of the uncertainty estimates using BayesMBAR and the asymptotic analysis, respectively, when \(n=5\). SD_F (true) is the true uncertainty when \(n=5\). \(F_{\text{ref}}\) is the MBAR estimate computed using all configurations sampled from all repeats. urations increases, the BayesMBAR estimator converges to the MBAR estimate. When the number of configurations is small, the information about free energy from data is limited and the prior knowledge of the free energy surface excludes unlikely results and helps improve the estimate. When the number of configurations is large, the inference is dominated by the data and the prior knowledge becomes less important, because the prior knowledge used here is a relatively weak prior. This behavior is desirable because the prior knowledge should be used when data alone are not sufficient to make a good inference and at the same time not bias the inference when data are sufficient. ## 4 Conclusion and Discussion In this study, we developed BayesMBAR, a Bayesian generalization of the MBAR method based on the reverse logistic regression formulation of MBAR. BayesMBAR provides a posterior distribution of free energy, which is used to estimate free energies and compute the estimation uncertainty. When uniform distributions are used as the prior, the MAP estimate of BayesMBAR recovers the MBAR estimate. Besides the MAP estimate, BayesMBAR provides the posterior mean estimate of free energy. Compared to the MAP estimate, the posterior mean estimate tends to have a larger bias but a smaller SD. The difference in accuracy between the MAP estimate and the posterior mean estimate is small and both estimates converge to the true value as the number of configurations increases. 
Therefore both estimates can be used interchangeably in practice. In BayesMBAR, the estimation uncertainty is computed using the posterior standard deviation. All benchmark systems in this study show that the uncertainty estimate from BayesMBAR is better than that from the asymptotic analysis, the bootstrap method, or Bennett's method, especially when the number of configurations is small. As a Bayesian method, BayesMBAR is able to incorporate prior knowledge about free energy into the estimation. We demonstrated this feature by using a normal prior distribution to encode the prior knowledge of the smoothness of free energy surfaces. All hyperparameters in the prior distribution are automatically optimized by maximizing the Bayesian evidence. By using such prior knowledge, BayesMBAR provides more accurate estimates than the MBAR method when the number of configurations is small, and converges to the MBAR estimate when the number of configurations is large. BayesMBAR can be easily extended to incorporate other types of prior knowledge about free energy, such as knowledge from previous calculations or experimental data. The author acknowledges the Tufts University High Performance Computing Cluster, which was utilized for the research reported in this paper. **Supporting Information.** Dependency of non-bonded interactions between phenol and water on the alchemical variable \(\lambda\) (Eq. S1), and values of \(\lambda_{\text{elec}}\) and \(\lambda_{\text{vdw}}\) for the intermediate states and the end states (Table S1) used in the alchemical calculation of the hydration free energy of phenol.
2309.04658
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
2023-09-09T01:56:40Z
http://arxiv.org/abs/2309.04658v2
# Exploring Large Language Models for Communication Games: ###### Abstract Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, "Werewolf", demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains. ## 1 Introduction Since incomplete information games such as Werewolf (Ri et al., 2022) and Poker (Brown and Sandholm, 2019) can be used as a good proxy to exploit various fundamental problems in economics and social science (Gibbons, 1992), research on playing such games with artificial intelligence (AI) agents has attracted widespread attention in recent years (Brown and Sandholm, 2019; FAIR et al., 2022; Toriumi et al., 2017). Among them, the communication games which heavily rely on natural language communication, e.g., Werewolf, present even greater practical values and challenges as agents must gather and infer information from the inherently ambiguous natural language utterances. Although substantial efforts have been devoted to such games (Toriumi et al., 2017; FAIR et al., 2022), most of them either impose strict restrictions on the language used in the game (Osawa et al., 2014; Hirata et al., 2016; Shibata et al., 2023) or require a significant amount of human-annotated data (FAIR et al., 2022; Kramar et al., 2022). Therefore, it is still challenging for AI agents to play communication games in a natural way. Fortunately, large language models (LLMs) like ChatGPT (OpenAI, 2022) have recently made significant advancements. These models have demonstrated impressive or even superhuman performance across a broad spectrum of academic and professional exams (OpenAI, 2023), showcasing sophisticated language comprehension, generation, and reasoning abilities. Furthermore, studies have shown that LLMs exhibit a certain degree of theory of mind capabilities (Bubeck et al., 2023; Shapira et al., 2023; Kosinski, 2023), as well as the potential to simulate believable human behaviors (Park et al., 2023). Recent research also suggests that LLMs can improve themselves (Fu et al., 2023) or align better with human values (Liu et al., 2023) through mutual communication. All these advancements make LLMs promising candidates for tackling the challenge of enabling AI agents to participate in communication games in a more natural and sophisticated manner. Nevertheless, it is not trivial to play communication games for LLMs. Firstly, the finite maximum input length of LLMs, also known as context length, limits the volume of information that can be conveyed at a single time. In communication games, historical information is important for decision-making, but it is often too massive to be processed by LLMs. 
Secondly, understanding the intentions of other players and making suitable decisions to win the game require complex reasoning, which is a demanding task for LLMs (Zhou et al., 2023). Thirdly, LLMs might learn from experience like human beings to upgrade their behaviors. Unfortunately, fine-tuning LLMs is not practical since it is both time-consuming and data-intensive. In this work, we aim to explore LLM-based agents for the Werewolf game, which is a representative and widely studied communication game. To address the issue of limited context length, we propose a method to retrieve and reflect necessary historical information, resulting in a compact context for each LLM-based agent. Moreover, the reflection process also serves the purpose of enhancing the reasoning ability of the agent, which functions in a manner akin to the chain-of-thought mechanism (Wei et al., 2022). To learn from experience without tuning model parameters on supervised data, we propose a mechanism that extracts suggestion from past experiences based on the current situation. Our goal is to prevent LLMs from making similar mistakes repeatedly across several matches. Experiments indicate that LLMs have great potential in playing communication games. Our contributions can be summarized as follows: * We propose a framework for playing communication games with frozen LLMs without human-annotated data. * Empirical studies on Werewolf demonstrate that our framework demonstrates the ability to learn from experiences without tuning the parameters of LLMs. * Strategic behaviors such as trust, confrontation, camouflage, and leadership begin to emerge in our experiments, which can serve as a catalyst for further research on LLMs for communication games. ## 2 Background: Werewolf There are various versions of the Werewolf game. Fig. 1 shows an example of the version that we adopt in this work. Specifically, there are seven players with five distinct roles: two werewolves, two villagers, a witch, a guard, and a seer. All the involved roles are divided into two sides, of which one side is the werewolves and the other side includes the villagers and the special roles (i.e., witch, guard, and seer). The objective of werewolves is to eliminate all villagers, while the villagers aim to work with special roles to eliminate all werewolves. There should be at least one alive villager at the end of the game if the villagers and special roles want to win. The game alternates between day and night phases. During each night, the werewolves can vote to eliminate one role. During the daytime, all alive players will organize an open discussion and then vote to eliminate one suspicious werewolf. As for the special roles, the witch can use a bottle of antidote and a bottle of poison, which can be used only once in a game, to either save or poison a role. The guard can protect one role to be not eliminated each night. And the seer can uncover the role of one player each night. Figure 1: A snapshot of our implemented Werewolf game. There are 5 roles and 7 players, and each of them is acted by an LLM autonomously. The number before each talking denotes the speaking order. Some social behaviors can be primarily observed in this figure, including \(\text{trust}\,,\) confrontation, camouflage, and \(\text{leadership}\,\). One important feature of the Werewolf game is that all the players only know their own roles at the beginning. They have to infer the roles of other players through natural language-based communication and reasoning. 
Therefore, to excel at Werewolf, an agent should not only be good at natural language understanding and generation but also possess advanced abilities, such as deciphering the intentions of others and understanding the theory of mind (Toriumi et al., 2017). This factor makes Werewolf a good testbed for research on communication games. ## 3 Playing Werewolf with LLMs ### Notations We refer to one full day-night cycle as one **day**, indexed by \(t\). A **round** consists of multiple days, from the beginning of the game to the day that one side wins or it reaches the predefined max number of days. We will index a round by \(r\). The agents are numbered by \(i\). In the following sections, a symbol in the form \(X_{i}^{(r,t)}\) means it is corresponding to agent \(i\) at round \(r\) and day \(t\). For brevity, \(r\) or \(t\) will be omitted when it is clear from the context. The words an agent says to others are called **responses** and the words an agent hears are called **observations**, denoted as \(G\) and \(O\). Moreover, the agent will also generate natural language summary of the current situation given the communication history, which is called **reflection** and denoted as \(R\) (see SS3.3 for more information). For brevity, we will refer to responses, observations, and reflections as **messages** if they need to be considered together. ### Overall Framework For each role in the game, we implement an individual LLM-based agent through prompting and the full prompt can be found in Appendix A.5. Fig. 2 shows the outline of the prompt for response generation, which consists of four major components: (1) the game rules, the assigned role, the abilities and objectives of each role, and some basic human priors on effective gameplay strategies (part 1); (2) the most recent \(K\) messages (part 2.1), a set of heuristically selected informative messages (part 2.2), and the reflection of the agent (part 2.3); (3) the suggestions extracted from past experiences (part 3); and (4) chain-of-thought prompt to elicit reasoning (part 4). The major challenge for the second component is the limited context length of LLMs, and its details will be discussed in SS3.3. The third component is responsible for learning from experiences without tuning the model parameters and will be introduced in SS3.4. For using experience, the most relevant works to ours are Shinn et al. (2023) and Fu et al. (2023). However, the former is limited to using experiences within a single round, and the latter is designed for a two-player game. In contrast, our approach is capable of leveraging cross-round experiences and able to be applied to multi-player scenarios. ### Historical Information Collecting Obviously, communication history plays a important role in Werewolf. However, due to the context length limitation of LLMs, it is unrealistic to feed all the history into LLMs via a prompt. To this end, we propose to collect historical information from three perspectives, namely, _freshness_, _informativeness_, and _completeness_, in consideration of both effectiveness and efficiency. Figure 2: Outline of prompt for response generation. _Italies_ are comments. Freshness.Intuitively, the most recent history should be included in the context. Therefore, we include the most recent \(K\) messages, denoted as \(O_{i}^{t}\), in the context (part 2.1 in Fig. 2). 
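As a concrete reading of the four-part prompt and the freshness rule, here is a minimal Python sketch (illustrative only; class and section names such as `AgentContext` and "Recent messages:" are ours, not the wording of the actual prompt, which is given in Appendix A.5). It keeps the most recent \(K\) messages \(O_{i}^{t}\) and assembles them with the other prompt components.

```python
from collections import deque

class AgentContext:
    """Minimal sketch of the four-part response prompt outlined in Fig. 2."""

    def __init__(self, game_rules: str, k: int = 15):      # K = 15 in the paper
        self.game_rules = game_rules                        # part 1
        self.recent = deque(maxlen=k)                       # part 2.1: O_i^t, the K freshest messages

    def observe(self, message: str) -> None:
        self.recent.append(message)                         # older messages fall out automatically

    def build_prompt(self, informative: list[str], reflection: str,
                     suggestion: str, cot_instruction: str) -> str:
        # Parts 2.2, 2.3, 3 and 4 are produced by the rule matcher, the
        # reflection step, the experience module, and a fixed CoT instruction.
        return "\n\n".join([
            self.game_rules,
            "Recent messages:\n" + "\n".join(self.recent),
            "Informative messages:\n" + "\n".join(informative),
            "Reflection:\n" + reflection,
            "Suggestion from experience:\n" + suggestion,
            cot_instruction,
        ])
```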
Informativeness.The messages carrying critical information for inferring the role of the agents should be included in the context, e.g., the messages disclose the role of an agent. For efficiency, we collect the easy-to-identify informative messages using rule matching and fill the top \(N\) of them ranked by a heuristic metric into the prompt, denoted as \(V_{i}^{t}\) (part 2.2 in Fig. 2). The rules and metric are provided in Appendix A.1. Completeness.The above two perspectives only cover a limited amount of historical information. Therefore, it is vital to extract more information from the entire history. However, it is not straightforward due to the context length limitation of LLMs. To this end, we propose to _reflect by answering questions_ method to achieve both effectiveness and efficiency. The resulting reflection is denoted as \(R_{i}^{t}\) (part 2.3 in Fig. 2). Suppose the current day is \(t\), we first build a short-term memory \(\mathcal{M}_{i}^{t}\) for each agent \(i\), which consists of all observations and reflections of agent \(i\) until the speaking time now 1. Then we prompt the LLM to select \(L\) questions from a predefined set (Appendix A.2) and ask \(M\) extra questions conditioned on \(O_{i}^{t}\), hoping that answers to these \(L+M\) questions \(Q_{i}^{t}=\{q_{i,j}^{t}\}_{j=1}^{L+M}\) can cover the historical information as much as possible. Then, for each question \(q_{i,j}^{t}\), we use a finetuned Sentence-BERT (Reimers and Gurevych, 2019) model 2 on the question answering task to retrieve top \(T\) messages \(U_{i,j}^{t}=\{u_{i,j,k}^{t}\}_{k=1}^{T}\) from \(\mathcal{M}_{i}^{t}\), and prompt the LLM to obtain the answer \(a_{i,j}^{t}\) for \(q_{i,j}^{t}\): Footnote 1: In practice, \(\mathcal{M}_{i}^{t}\) is incrementally updated. Footnote 2: Model name: multi-qa-mpnet-base-cos-v1 \[a_{i,j}^{t}=\mathrm{Answer}\left(q_{i,j}^{t},U_{i,j}^{t}\right). \tag{1}\] Finally, the reflection \(R_{i}^{t}\) is obtained using the LLM by reflecting on the most recent messages \(O_{i}^{t}\), the selected easy-to-identify informative messages \(V_{i}^{t}\), and the answers \(A_{i}^{t}=\{a_{i,j}^{t}\}_{j=1}^{L+M}\): \[R_{i}^{t}=\mathrm{Reflect}\left(O_{i}^{t},V_{i}^{t},A_{i}^{t}\right). \tag{2}\] The prompts used are shown in Appendix A.5. ### Learning from Experiences In practice, the strategy a player used when playing Werewolf maybe evolve as the player gains more experience. Moreover, the strategy of a player may also be influenced by the strategies of other players. Therefore, an ideal Werewolf AI agent should be able to borrow from its own experiences and the experiences of other players. To this end, we propose a non-parametric learning mechanism, enabling LLMs to take reference from experiences without parameter tuning. On one hand, we collect and score the pairs of response and reflection from all players at the end of each round to form an experience pool. On the other hand, in each day of a new round, we retrieve the most relevant experiences from the pool and extract a suggestion from them to guide the reasoning of the agent. Experience Pool.The experience pool is a collection of response, reflection and score tuples. Formally, suppose a round \(r\) ends at day \(T_{\max}\), the agents that win the game form a set \(\mathcal{W}\) and the others form a set \(\mathcal{L}\). 
For each agent \(i\), we define the experience \(E_{i}^{r}\) collected from it in round \(r\) as \[E_{i}^{r}=\left\{\left(R_{i}^{(r,t)},G_{i}^{(r,t)},s_{i}^{(r,t)}\right)\right\} _{t=1}^{T_{\max}}, \tag{3}\] where \(G_{i}^{t}\) and \(R_{i}^{t}\) are response and reflection as defined in last section respectively, and \(s_{i}^{t}\) is the score, which is defined as \[s_{i}^{t}=\begin{cases}1,000-T_{\max}&\text{if }i\in\mathcal{W}\\ T_{\max}&\text{if }i\in\mathcal{L}\end{cases}, \tag{4}\] The experience pool is defined as the union of experiences collected from all agents in all rounds: \[E=\bigcup_{i,r}E_{i}^{r}. \tag{5}\] The intuition behind the definition of \(s_{i}^{(r,t)}\) is to encourage an agent to win the game and try to win it fast, or at least lose it slowly if it cannot win. As preliminary experiments show that this definition can guide the LLMs to learn from experiences, we will leave the exploration of more sophisticated score functions to future work. Suggestion Extraction.As the experiences pool \(E\) can grow everlasting while the max context of LLMs is limited, we propose to retrieve a subset of experiences from \(E\) based on the reflection of the agent and then generate a suggestion from the subset to fill into the prompt (part 3 in Fig. 2). Specially, suppose we are at day \(t\) in a new round, and the reflection of the agent \(i\) is \(R_{i}^{t}\), we first retrieve a subset of experiences \(E_{\mathrm{sub}}\) from \(E\) based on the reflection \(R_{i}^{t}\) as following: \[E_{\mathrm{sub}}=\left\{(R_{l},G_{l},s_{l})\left|\cos\left(f(R_{i}^{t}),f(R_{l} )\right)>\epsilon\right\},\right. \tag{6}\] where \((R_{l},G_{l},s_{l})\in E\), \(f(\cdot)\) denotes one Sentence-BERT model 3, and \(\epsilon\) is a threshold. Preliminary experiments show that if the entire \(E_{\mathrm{sub}}\) is used, the performance may be harmed. The reason is that a strong assumption behind the definition of the score \(s_{l}\) is that all the experiences of the winners are good and those of the losers are not. However, this assumption may not hold in practice. Fortunately, we observe that the experience with the lowest score in \(E_{\mathrm{sub}}\) has a significantly high probability to be a bad one, and the experiences with a score around the median of the scores in \(E_{\mathrm{sub}}\) are more likely to be the good ones. Therefore, we only leverage these experiences from \(E\). Formally, denote the response with the lowest score as \(G_{0}\), the responses with scores around the median score as \(\{G_{1},G_{2},\cdots,G_{n}\}\), the suggestion is extracted with the LLM via prompting: Footnote 3: Model name: all-mpnet-base-v2 \[S_{i}^{t}=\mathrm{Extract}(G_{0},\{G_{1},G_{2},\cdots,G_{n}\}). \tag{7}\] Note that although \(G_{0}\) tends to be a bad experience, the agent can learn by refraining from them. The prompt implementing \(\mathrm{Extract}\) is as follows: _"There is one bad experience {G\({}_{0}\)} and also a set of experience {\(G_{1},\cdots,G_{n}\)} that may consist of good ones, find the difference between them and identify the good ones from the experience set."_ ## 4 Experiments ### Setup We employ a recent framework called Chatarena Wu et al. (2023) to implement our design, which allows for the connection of multiple LLMs. The gpt-3.5-turbo-0301 model 4 is served as our backend LLMs. The talking order is randomly determined. We set the window size \(K\), i.e. \(|O_{i}^{t}|\), to be \(15\). 
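For concreteness, the following is a minimal, illustrative Python sketch of the experience scoring and retrieval steps of §3.4 as configured in these experiments. It is not the authors' code: `embed` stands in for the all-mpnet-base-v2 Sentence-BERT encoder, and both the similarity-based truncation rule and the width of the "around the median" band are our assumptions, since the paper does not specify them.

```python
import numpy as np
from dataclasses import dataclass
from statistics import median

@dataclass
class Experience:
    reflection: str   # R_i^(r,t)
    response: str     # G_i^(r,t)
    score: float      # s_i^(r,t), Eq. (4)

def score(won: bool, t_max: int) -> int:
    """Eq. (4): reward winning fast, or at least losing slowly."""
    return 1000 - t_max if won else t_max

def select_for_suggestion(pool, reflection, embed, eps=0.85, max_keep=50):
    """Pick G_0 and the around-the-median responses fed to the Extract prompt.

    `embed` should map a string to a 1-D numpy vector (the paper uses the
    all-mpnet-base-v2 Sentence-BERT model for this step).
    """
    q = embed(reflection)
    q = q / np.linalg.norm(q)
    scored = []
    for e in pool:
        v = embed(e.reflection)
        sim = float(q @ (v / np.linalg.norm(v)))
        if sim > eps:                        # cosine-similarity threshold, Eq. (6)
            scored.append((sim, e))
    # Keep at most `max_keep` experiences; truncating by similarity is our guess,
    # the paper only states that at most 50 experiences are kept.
    scored.sort(key=lambda x: -x[0])
    subset = [e for _, e in scored[:max_keep]]
    if not subset:
        return None, []
    worst = min(subset, key=lambda e: e.score).response   # G_0: likely a bad experience
    med = median(e.score for e in subset)
    band = 0.1 * max(abs(med), 1.0)          # band width is our illustrative choice
    good = [e.response for e in subset if abs(e.score - med) <= band]
    return worst, good

# The suggestion S_i^t itself is then produced by prompting the LLM with the
# Extract prompt quoted in Sec. 3.4, given `worst` and `good`.
```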
The number of predefined questions that can be selected \(L\) is set to be \(5\) and the number of freely asked questions \(M\) is \(2\). The threshold of experience retrieval \(\epsilon\) is \(0.85\) and we keep at most \(50\) experiences when extracting suggestions. Besides, we set the temperature of the LLM to be \(0\) for CoT reasoning and \(0.3\) for generating other content. Footnote 4: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) ### Experience Pool Construction Intuitively, the size of the experience pool may have a significant impact on performance. Therefore, we construct experience pools using different numbers of game rounds, including \(10\), \(20\), \(30\), and \(40\) rounds. For each round, we randomly assign different roles to players 1 to 7 and the experience pools are updated at the end of the round. Note that the experience pool in these rounds is leveraged for evaluation purposes, i.e., part 3 in Fig. 2 is removed. To evaluate the effect of our proposed framework to borrow from experiences, we equip the villager, seer, guard, and witch with experience pools, while the werewolves are not allowed to leverage these pools. Through this approach, we can assume that the performance level of the agents playing as wrevolves remains constant, serving as a reference to gauge the performance levels of the other agents. Preliminary experiments indicate that the relatively simple basic human priors on effective game-play strategies, provided in the prompt shown in Fig. 2, serve as a bootstrapping mechanism during learning from experiences. This suggests that it is valuable to further investigate how to leverage data from human gameplay to build an experience pool, and we will leave this as future work. ### Analysis of Using Experience The agents leverage the experiences via the suggestions generated using the method described in Sec. 3.4. And the following is an example of extracted suggestion: _"The best way for you to do under such reflection is to vote to kill someone based on your observation and analysis."_ To investigate the effectiveness of the suggestions, we use winning rate to measure the performance of the agents following AIWolf 5. Moreover, we emphasize that if an agent is not strong enough to defeat a stronger one, persisting longer without being eliminated is also a stronger performance. Hence we use average duration as another metric to evaluate the capabilities of the agents. Footnote 5: [http://aiwolf.org/en/](http://aiwolf.org/en/) We run each experiment for 50 rounds and the results are shown in Fig. 3. In general, Fig. 2(a) shows that learning from experience may lead to an increase in winning rate of the villager side in most cases. This indicates that our method can benefit from using experience. Furthermore, when using the experience pool with \(10\) or \(20\) historical rounds, there is a notable positive effect on both the winning rate of the villager side and the game duration, which demonstrates the effectiveness of our method. When equipped with the experience of \(30\) rounds, the game duration is obviously longer (Fig. 2(b)), even though the winning rate of the villager side has not changed conspicuously. When learning from larger \(40\) rounds, the winning rate of the villager side exhibit slightly promising results, yet the average duration becomes shorter. In summary, on the one hand, our framework exhibits the ability to learn from experiences without the need for tuning the parameters of LLMs. 
On the other hand, the effectiveness of our method tends to be unstable when the volume of experience is relatively substantial. As the amount of historical experience increases, the winning rate of the villager side does not show a clear trend. We conjecture that this may partially be attributable to the manner in which we guide the learning process, namely through simple prompts and heuristic scores, resulting in sparse and indirect supervision signals. Consequently, there remains room for improvement. Additionally, a key assumption in our aforementioned experiments, where the werewolf side serves as a baseline, is that their capabilities remain constant. However, our analysis suggests that this assumption may not hold true. Fig. 2(c) and Fig. 2(d) show the trends in the average number of camouflage behaviors (see 5.3 for definition) taken by villager and werewolf sides, respectively. Although villagers can learn to deceive from historical experiences, the behavior of the werewolves also improves compared to when no experience is used and changes as the amount of experience accumulates. Therefore, when multi-LLMs engage in multi-party games, the capability of the LLMs might also change in response to variations of the capability of other LLMs. We believe this conclusion is important in multi-LLMs games, which can also explain the trend in Fig. 2(a) and Fig. 2(b). ### Ablation Study To validate the necessity of each component in our approach, we conducted a detailed ablation study with qualitative and quantitative analyses. #### 4.4.1 Qualitative Analysis For qualitative analysis, we remove each of the components in the pipeline of our method and empirically discuss how it will influence the model outputs. Figure 3: Effects of learning from experiences. Dashed lines in all charts indicate values without using experience. * Game rules and role descriptions \(Z\): Obviously, this element is necessary. If we remove the game rule prompt, the LLMs might not know what to do. * Recent messages \(O\): They are also necessary as well. LLMs make decisions mainly based on these recent messages. * Informative messages \(V\): We have listed some informative content in Section A.1 (Table 1). If these informative messages are removed, the agent output will degrade quickly. For example, the agent may consider a dead player alive, or forget other role who has been uncovered. * Selected and asked questions \(Q\): Here \(Q\) and the informative messages \(V\) are the only 2 sources of the information that exceed the context length of LLMs. Hence it is imperative for our method. * Reflection \(R\): Firstly, we preserve historical experience through reflection on the current situation. Hence from a methodological perspective, \(R\) is a necessary component. In addition, \(R\) helps agents clarify current situations, thereby improving decision-making effect. 
For example, if we remove making a reflection by the agent in Table 4 (as well as extracting suggestions), the CoT and final outputs will be as follows: _My step-by-step thought process:_ * _As the witch, I want to use my poison bottle to eliminate a player who I suspect is a werewolf._ * _Based on the discussion during the daytime, Player 4 voted to kill me, which makes me suspicious of them being a werewolf._ * _However, I also want to consider the possibility that Player 4 is a villager who made a mistake or was influenced by other players._ * _I will also consider the possibility that other players may be werewvolves and try to eliminate me or other important players._ _My concise talking content:_ * _I choose to use my poison bottle to eliminate Player 4._ There exist more and more similar examples. This ambivalent reasoning process might derive from a lack of summarization of the situation. Clearly, situational reflection is a necessary component. * Suggestion extracted from experience \(S\): Its usefulness is analyzed in Section 4.3. * Chain-of-thought prompt \(C\): CoT reasoning helps LLMs break down the complex reasoning process and make some inner thoughts. If CoT is removed (e.g. in Table 4), the final output of LLM will be: * _I choose to pass for now and save my bottle of poison for a later night when I have more concrete evidence of a player's werewolf identity._ In fact, removing CoT reasoning will lead to weaker decision-making. LLMs can not often perform better without the backend of CoT reasoning. Moreover, can the pre-defined question set be substantiated by directly asking questions by LLMs? Although LLMs can propose plausible questions, it is difficult for them to propose questions that are more helpful in subsequent reasoning and decision-making. We can certainly provide examples of direct questioning of LLMs, i.e. freely ask 5 questions without including the question set, and the LLMs will output questions such as: _Have any players revealed their roles yet? Have any players been acting suspiciously? Has the seer used their ability to verify any players yet? Has the guard used their ability to protect any players yet? Has the witch used their ability to save or poison any players yet?_ In fact, the questions posed by **agents playing different roles** are very similar to the above ones. Therefore, it is necessary to inject some humans prior to the decision-making process. In our experiment, we design more helpful and informative questions for different roles. They have at least the following influences on agent decision-making: * Recall important and critical information. Of course, they are role-related. * Relieve hallucinations and error generations. For example, prompt the current phase and the agent role. * Help LLMs simplify complex reasoning. For example, remind the agent to anticipate the consequences of revealing their roles. * Imitate the way that a human player thinks. For example, speculate on the roles of other agents. #### 4.4.2 Quantitative Analysis For quantitative analysis, we compare our whole approach with the variants that remove one certain component. We sample 50 responses from the variants model output and perform a human evaluation. The annotator needs to judge if the output is reasonable or not. Some unreasonable examples might be hallucinations, forgetting the roles of others, taking counter-intuitive actions, etc. Fig. 4 shows that our method can generate more reasonable and realistic responses than any other variant. 
This indicates that every part of our method is necessary. ## 5 Emergent Strategic Behaviors We observed that LLMs exhibit some strategic behaviors not explicitly preprogrammed in the game rules or prompts. These behaviors are grouped into four categories, including trust, confrontation, camouflage, and leadership. We will introduce them in the following four subsections respectively. It is worth noting that, in order to investigate whether the emergent strategic behaviors stem from the training data of the LLM, we attempted to modify the role names in the prompts to irrelevant ones (e.g., changing "werewolf" to "pretty girl") or even to those with opposite semantic meanings. Experiments indicate that similar strategic behaviors still emerge. For readability, we will only present results with the original role names. ### Trust "Trust" refers to the belief that other players share common goals with oneself and that they will act in line with these goals. For instance, players may proactively share information that is detrimental to themselves, or jointly accuse someone of being their enemy with other players at certain moments. The intriguing behavior exhibited by the LLMs is that they tend to trust others based on certain evidence rather than blindly following others. In other words, they decide whether to trust based on their own reasoning, demonstrating independent thinking abilities in group games. To investigate how the trust behaviors of players change throughout the game, we define a _Trust Relationship Table_ to visualize the establishment of trust among players at different stages. It is a table \(T\) containing 7 rows and 7 columns, and we have \(T\left(i,j\right)=1\) if the talking content of player \(i\) exhibits trust towards player \(j\). Some trust behaviors examples are provided in Appendix A.3. Fig. 5 displays two Trust Relationship Tables. The upper table corresponds to a round in which the experience pool is not utilized, while the lower table corresponds to a round that employs an experience pool constructed from 20 rounds of gameplay. Both rounds span a duration of 5 days. From Fig. 5, we can see that the trust behavior gradually increases as the game progresses regardless of whether experience is used. Moreover, this behavior is not a pre-programmed behavior, but rather spontaneously emerging from the LLMs in an environment where both cooperation and competition coexist. The LLMs will also dissolve unreasonable trust relationships based on its own analysis (represented as dished circles in the tables). When utilizing 20-rounds historical experiences, it seems that the LLMs are more inclined to establish trust relationships, especially bi-directional trusts. Indeed, establishing necessary trust relationships in time is vital for promoting game victories. This could be one of the reasons contributing to the improvement in winning rate when experience is employed (Sec. 4.3). Figure 4: Percentage of reasonable outputs. ### Confrontation "Confrontation" refers to actions taken by players for the opposing objectives of the two camps. For instance, explicit attacks on others taken by werewolves during the night, or accusing others of werewolves during the day are all confrontation behaviors. Actions taken by roles with special abilities to protect themselves also belong to confrontational behaviors. 
The following is a short clip of communication in the daytime 6: Footnote 6: Due to space limitations and ethical considerations, we shorten the original responses without changing key semantics in the cases. \begin{tabular}{|l|} \hline **P1** (Werewolf) : I vote to eliminate P5. \\ **P3** (Guard) : I choose to pass. \\ **P5** (Villager) : I choose to pass. \\ \hline \end{tabular} We can see the werewolf wants to lead other players to eliminate an innocent player. On the contrary, other players do not merely follow the werewolf but express disagreement based on their own judgment. This behavior, which makes it difficult for the werewolf to achieve their objective, represents a form of implicit confrontation. The following is another clip at night: \begin{tabular}{|l|} \hline **P1** (Werewolf) : I choose to eliminate P5 again. \\ **P3** (Guard) : I choose to protect P5. \\ \hline \end{tabular} As the uncooperative and aggressive behavior of Player 1 has drawn attention, it may be suspected of being a werewolf by some players now. Therefore, the guard, possessing strong defensive capabilities, chose to protect the previous target of Player 1 in the ensuing night. Since the target could potentially be its teammate, the guard chooses to assist the target in countering the attacks of the werewolf. The attack from the werewolves and the defense of other players can be seen as confrontational behaviors as well. ### Camouflage "Camouflage" refers to actions of concealing the identity or misleading others. In competitive environments with incomplete information, obscuring the identity and intentions can enhance survivability, thereby helping achieve the game objectives. Therefore, camouflage is an important skill. However, it is not merely about keeping its identity under wraps or not talking about their roles. \begin{tabular}{|l|} \hline **P1** (Werewolf) : I key everyone, good morning! I noticed that it is a peaceful night and no one was eliminated. As a villager, I have nothing to share now. I hope you tell me more. \\ \hline \end{tabular} In the above example, we can see the werewolf claiming to be a villager. This kind of action obscures its real identity, effectively deceiving the trust of others and increasing its own safety. In fact, not only do werewolves disguise themselves as villagers, but important roles such as seers and witches also often disguise themselves as villagers to ensure their safety. Figure 5: Trust Relationship Tables. The upper subtables do not use historical experience while the bottom ones use the 20-rounds historical experience. The yellow balls represent established trust relationships, and the yellow dashed circles signify the dissolution of previously existing trust relationships. Furthermore, LLMs may fabricate events that do not actually exist to achieve their goals, as demonstrated in the following daytime example. The seeer has verified Player 1 is a werewolf. \begin{tabular}{|p{284.5pt}|} \hline The seeer has verified Player 1 is a werewolf. \\ \hline **P2** (Sear): I have noticed that P1 was talking active, so P1 may be a werewolf. \\ \hline \end{tabular} In fact, the seer can not get the responses of others during the night. Hence what it says is fake. However, it can convey information about the werewolf to its teammates while not revealing its role in this manner. It may be posited that camouflage is merely hallucinations generated by LLMs. However, we maintain that the majority of such behaviors are not hallucinations but rational actions. 
We delve into which behaviors should be classified as hallucinations and which should not in Appendix A.4. ### Leadership "Leadership" refers to actions that influence other players, attempting to control the course of the game. For instance, a werewolf may suggest others to act towards the intention of werewolves. \begin{tabular}{|p{284.5pt}|} \hline **P1** (Werewolf): Good morning everyone! I know nothing about the peaceful night. Can the seeer tell us more about who is the werewolf? Then, P5 falsely accuses P3 of being a werewolf. \\ \hline **P4** (Werewolf): I agree with P5. Based on my observation, I also think P3 is a werewolf. Let's vote to eliminate him to protect the villagers! \\ \hline \end{tabular} Calling to actions and guidance are more likely to gain the support of others. As shown in the example above, the werewolf calls for the seer to uncover its identity, which may lead the other agents to be in solidarity with the camouflaged werewolf. Such efforts to influence the actions of others underscore a fascinating social attributes demonstrated by the LLMs. Such behaviors are similar to those of human beings. ## 6 Related Work Game Playing.Intensive efforts have been devoted to game-playing AI in recent years. Silver et al. (2017, 2018) demonstrated that two-player zero-sum games with complete information, such as Go and chess, can be addressed through self-play. And superhuman performance has been achieved in some incomplete information games, such as heads-up poker Bowling et al. (2015); Brown and Sandholm (2018). However, these methods lack the ability of processing language, which is relied on heavily in communication games such as Werewolf and Diplomacy. While various Werewolf agents have been developed, they primarily rely on rule-based systems or talking templates Osawa et al. (2014); Wang and Kaneko (2018); Shibata et al. (2023), which constrain the expressive capacity of language within the game. FAIR et al. (2022) and Kramar et al. (2022) achieve promising results on Diplomacy, but their approaches necessitate a substantial volume of human data and are specifically tailored to the game. In contrast, this work endeavors to explore the potential of large language models (LLMs) in playing communication games and observes the emergence of strategic behaviors. Through this exploration, we aspire to inspire novel approaches to tackling communication games. Learning with LLMs.As the computational cost and high requirement of training data, common ways to learn with LLMs like fine-tuning Dai and Le (2015) and parameter-efficient tuning Houlsby et al. (2019) are difficult to perform in practice. Moreover, many excellent LLMs do not make their checkpoints public, thus parameter-based learning is unfeasible. Guiding LLMs by prompt engineering attracts more attention recently. Some typical prompt-based works Yao et al. (2022); Wu et al. (2023) overlook the ability to learn from historical experience. Wang and Li (2023) possesses learning ability in simple tasks and requires dense supervising signals. Due to the very sparse supervised signal, it can not be directly used in Werewolf games. Shinn et al. (2023) and Fu et al. (2023) are the most similar works to ours. However, the former can not learn from cross-trajectory experiences. And the latter is only designed for two-player scenarios. ## 7 Conclusion and Future Work In this paper, we design a framework for communicative games, taking Werewolf as a representative case for exploring its feasibility. 
Further, we study how historical experiences influence the abilities of LLMs. Intriguingly, we observe non-preprogrammed emergent strategic behaviors in LLMs during gameplay such as trust, confrontation, camouflage, and leadership. We also point out that despite our early study on using LLMs to construct communication game agents, there are still many issues worth further research in this direction. Firstly, how to enable LLM to master advanced game techniques, such as teaching human players experience or autonomous exploration, is a very attractive direction. In addition, it is worth further exploring how to construct an invariant baseline (see 4.3) to evaluate the capabilities of multi-LLMs settings. Finally, minimizing the impact of hallucinations and promoting their application in real-world scenarios is the most practical and valuable work. For future work, we intend to apply our method to a broader range of games and further enhance its gaming capabilities. ## Limitations Although we have demonstrated that our method possesses the potential to play communication games, there are still some limitations. Firstly, hallucinations (Ji et al., 2023) affect the factuality of the generated content and may negatively impact the reasoning abilities. Then, there may be a larger space to leverage historical experience, such as mitigating the adverse effects of noise and utilizing cross-game general experiences. Moreover, we do not incorporate experience pools derived from human players in this study. In future research, we will explore more robust strategies for utilizing experience and enhance our method for comparison with human performance. ## Ethics Statement This study involves the discussion and analysis of a simulated game setting, and any references to "killing", "eliminating" or related actions are strictly confined within the context of this game. The authors do not condone violence, or illegal activities in any form in real-life scenarios. The game in this paper is designed for entertainment and research purposes only, and its main intent is to facilitate an understanding of game mechanics, player behavior, and artificial intelligence. Furthermore, this study adheres to all relevant ethical guidelines and maintains the highest standards of research integrity.
2309.04880
Holographic CFTs on $AdS_d\times S^n$ and conformal defects
We consider ($d+n+1$)-dimensional solutions of Einstein gravity with constant negative curvature. Regular solutions of this type are expected to be dual to the ground states of ($d+n$)-dimensional holographic CFTs on $AdS_d\times S^n$. Their only dimensionless parameter is the ratio of radii of curvatures of $AdS_d$ and $S^n$. The same solutions may also be dual to $(d-1)$-dimensional conformal defects in holographic QFT$_{d+n}$. We solve the gravity equations with an associated conifold ansatz, and we classify all solutions both singular and regular by a combination of analytical and numerical techniques. There are no solutions, regular or singular, with two boundaries along the holographic direction. Out of the infinite class of regular solutions, only one is diffeomorphic to $AdS_{d+n+1}$ and another to $AdS_d\times AdS_{n+1}$. For the regular solutions, we compute the on-shell action as a function of the relevant parameters.
Ahmad Ghodsi, Elias Kiritsis, Francesco Nitti
2023-09-09T21:53:29Z
http://arxiv.org/abs/2309.04880v2
# Holographic CFTs on \(AdS_{d}\times S^{n}\) and conformal defects ###### Abstract: We consider \((d+n+1)\)-dimensional solutions of Einstein gravity with constant negative curvature. Regular solutions of this type are expected to be dual to the ground states of \((d+n)\)-dimensional holographic CFTs on \(AdS_{d}\times S^{n}\). Their only dimensionless parameter is the ratio of radii of curvatures of \(AdS_{d}\) and \(S^{n}\). The same solutions may also be dual to \((d-1)\)-dimensional conformal defects in holographic QFT\({}_{d+n}\). We solve the gravity equations with an associated conifold ansatz, and we classify all solutions both singular and regular by a combination of analytical and numerical techniques. There are no solutions, regular or singular, with two boundaries along the holographic direction. Out of the infinite class of regular solutions, only one is diffeomorphic to \(AdS_{d+n+1}\) and another to \(AdS_{d}\times AdS_{n+1}\). For the regular solutions, we compute the on-shell action as a function of the relevant parameters. Holography, CFT, AdS, conformal defects + ###### Contents * 1 Introduction, results and outlook * 1.1 Results * 1.2 Conformal Defects * 1.3 Outlook * 2 Constant negative curvature solutions with slices * 2.1 The general conifold ansatz * 2.2 The slice slice * 3 Regular and singular asymptotic of the solutions * 3.1 Near-boundary expansions * 3.2 Regular and singular end-points * 3.2.1 Singular end-points * 3.2.2 Regular end-points * 3.3 Solutions with A-bounces and monotonic solutions * 3.3.1 \(AdS_{d}\) bounce * 3.3.2 \(S^{n}\) bounce * 4 Exact solutions * 5 Numerical solutions * 5.1 Solutions with one regular end-point * 5.2 Solutions with A-bounces * 5.3 \(A_{1}\)-bounce space of solutions * 5.4 \(A_{2}\)-bounce space of solutions * 5.5 Monotonic solutions * 6 The space of all solutions * 7 The boundary CFT data * 7.1 Boundary data of (R, B)-type * 8 The on-shell action and the free energy * 8.1 Regularization * 8.2 Fixing the scheme * 9 Solutions with \(AdS_{d}\times S^{1}\) slices * 9.1 Asymptotics * 9.2 Exact solutions * 9.3 The global \(AdS_{d+2}\) solution * 9.4 Relations between parameters in two coordinates * 10 On general Einstein manifold solutions with constant negative curvature. ## Acknowledgements Appendices * A Product space ansatz for the slice * A.1 The curvature invariants * B Various global coordinates on \(AdS_{d+n+1}\) and its Euclidean version * B.1 Standard global coordinates on \(AdS_{d+n+1}\) * B.2 Coordinates fibered over \(AdS_{d}\times S^{n}\) * B.3 The special case \(n=0\) * C Analytic solutions for other signatures * C.1 The uniform solution * C.2 The constant \(A_{2}\) solution * D The stress-energy tensor * E Perturbations around the product space solution * F Topological Black holes with a negative cosmological constant ## 1 Introduction, results and outlook Quantum field theories are usually considered in flat background space-time. They can be studied, however, in background space-times that have non-zero curvature. Space-time curvature is irrelevant in the UV, as at short distances any regular manifold is flat. However, the curvature is relevant in the IR and can affect the low-energy structure of the QFT. There are several reasons to consider QFT in curved backgrounds. * Partition functions of QFTs on compact manifolds (spheres), are important elements in the study of the monotonicity of the RG Flow and the definition of generalized C-functions, especially in odd dimensions, [1; 2; 3]. 
* Many observables in CFTs and other massless QFTs (supersymmetric indices are examples) are well-defined when a mass gap is introduced. This can be generated by putting the theory on a positive curvature manifold, like a sphere. Sphere compactifications have been used in calculating supersymmetric indices in CFTs, [4]. They have also been used as regulators of IR divergences of perturbation theory in QFT, [5; 6; 7] and string theory, [8]. * Curvature in QFT, although UV-irrelevant is IR-relevant and importantly affects the IR physics. It can drive (quantum) phase transitions in the QFT, [9; 10]. * The ground-states of holographic QFTs on curved manifolds lead to constant (negative) curvature metrics sliced by curved slices. The Fefferman-Graham theorem indicates that such regular metrics exist near the asymptotically \(AdS\) boundary, [11]. However, it is not known whether such solutions can be extended to globally regular solutions in the Euclidean case. If yes, then there may exist associated Minkowski signature solutions with horizons1. The few (mathematical) facts that are known can be found in [13; 14]. Footnote 1: Such metrics have been discussed in section 5 of [12]. Holography suggests that because we can put any holographic CFT on any manifold we choose, there should be dual regular saddle point solutions. This argument has, however, a loophole: it may be that for a regular solution to exist, more bulk fields need to be turned-on (spontaneously), via asymptotically vev solutions2. Footnote 2: A milder version of this phenomenon associated with spontaneous symmetry breaking of a parity-like \(Z_{2}\) symmetry has been observed in [15]. * Cosmology has always given a motivation to study QFT in curved space-time, [16; 17]. In particular, QFT in de Sitter or almost de Sitter space is expected to describe early universe inflation as well as the current acceleration of the universe. * The issue of quantum effects in approximate de Sitter backgrounds is a controversial issue even today, [18]-[23]. * Partition functions of holographic QFTs on curved manifolds are important building blocks in the no-boundary proposal of the wave-function of the universe, [24; 25]. They serve to determine probabilities for various universe geometries. Many examples of holographic QFTs living on non-trivial geometries have been already discussed in the past. The simplest case of \((S^{1})^{n}\) has already been systematically studied in the case where all circles have the same radius as well as when there are two different radii, [26; 27; 28]. The case of \(S^{1}\times S^{d-1}\) has been studied extensively but not systematically. It contains \(AdS_{d+1}\) in global coordinates, as well as (Euclidean) Schwarzschild-\(AdS\), and some RG flows have been analyzed in this case. A systematic analysis of curved space-time holographic RG flows in Einstein-dilaton theories has been initiated in [10], when the boundary field theory is defined on an Einstein space with positive or negative curvature. For positive curvature, the RG flow pattern is not very different from that of flat space field theories. The main difference is that curvature dominates in the IR and provides a gap to the theory before the deep IR regime is reached. On the other hand, many quantum phase transitions appear, driven by the positive curvature. The general problem where the boundary is a product of constant (positive) curvature manifolds and the QFT is a CFT has been addressed in [29]. 
Phase transitions were found, generalizing the Hawking-Page transition (which is relevant in the \(S^{1}\times S^{d-1}\) case), [30]. Efimov resonances were also found that were explored in [31] to generate a class of associated black hole solutions. The general case of QFTs on \(S^{2}\times S^{2}\) was addressed in [15]. Among other things, it was found that a \(Z_{2}\) parity-like symmetry that exists when the two spheres have the same size is always spontaneously broken by quantum effects. Therefore the vacuum is always doubly degenerate. In the case where the boundary has negative curvature, however, the holographic QFT interpretation of the solutions is _very_ different from that of a standard RG flow. The reason is that, when the bulk is foliated by constant negative curvature \(d\)-dimensional slices, the solution has _two_ asymptotically \(AdS_{d+1}\) boundaries. This corresponds to two UV CFTs that are interacting through the bulk. Solutions in string theory, with asymptotic boundary metrics being \(AdS\), have been studied for some time, [32]-[44]. They have two (apparently) distinct conformal boundaries at the two end-points of the holographic coordinate. However, as the slices involve a non-compact manifold, which has also a conformal boundary, the two boundaries are connected. This results in a single conformal boundary. If the bulk is \(d+1\) dimensional, and the slices are \(AdS_{d}\), the total boundary is conformal to two pieces of \(S^{d}\) separated by an overlap on the equator3\(S^{d-1}\). The two endpoints of the flow can have different sources, the two holographically-dual theories can have different couplings and they are separated by an interface, justifying the name "Janus solutions". A similar class of solutions contains a single boundary and is delimited in the bulk by a brane that ends on "the boundary of the boundary". They are also \(AdS\)-sliced and a prototypical example was discussed in [45]. They have been proposed as holographic duals of boundary CFTs, [46, 47]. Related holographic RG flows have been considered in [48, 49]. There is another incarnation of such solutions. In Euclidean cases, where the slice manifold is a constant negative curvature manifold with finite volume and no boundary, such a solution is an example of a Euclidean wormhole. This is an object that still holds mysteries for the holographic correspondence, [50, 51, 52, 53]. The holographic interpretation of such solutions is still debated and for this reason, their occurrence is also an interesting datum. \(AdS\)-sliced solutions were studied systematically in [44] with three purposes * The holographic construction of QFTs on \(AdS\) manifolds. * The exploration of the space of holographic interfaces. * The study of "proximity of QFTs" defined by which ones can be connected by wormholes. A specific potential landscape was fully analyzed by a combination of analytical and numerical methods. It was found that the solution space contained many exotic RG flow solutions that realized unusual asymptotics, as boundaries of different regions in the space of solutions. Phenomena like "walking" flows and the generation of extra boundaries via "flow fragmentation" were found. The purpose of the present paper is to pursue the research program started in [26] and [10], and to study a further example along similar lines: holographic CFTs on product manifolds of the type4\(AdS_{d}\times S^{n}\). 
Such manifolds are interesting as they combine a piece that has constant negative curvature and one that has constant positive curvature. Footnote 4: All our results are valid if we replace \(AdS_{d}\) with any \(d\)-dimensional negative constant curvature manifold, with or without finite volume. A similar statement holds for \(S^{n}\). Moreover, these geometries are also interesting since upon (generalized) dimensional reduction on \(S^{n}\) they give rise to the infrared region of confining field theories defined on \(AdS_{d}\)[54, 55]. This connection, and the space of solutions of the reduced theory, will be thoroughly analyzed in a forthcoming work. ### Results We consider an Einstein theory with a negative cosmological constant in \(d+n+1\) dimensions. The ansatz used is a conifold ansatz that contains a holographic (radial) coordinate and a product of a \(d\)-dimensional constant negative curvature manifold and an \(n\)-dimensional constant positive curvature manifold. \[ds^{2}=du^{2}+e^{2A_{1}(u)}ds^{2}_{AdS_{d}}+e^{2A_{2}(u)}ds^{2}_{S^{n}}\,. \tag{1}\] The solutions should have a \(d+n\)-dimensional conformal boundary, where a holographic CFT lives. In this context, we obtain and solve the equations of motion and compute the scalar curvature invariants, which are necessary ingredients to check the regularity/singularity of the solutions. * **Classification of the solutions:** We classify the solutions according to their "end-points," which we define as limiting values of the radial coordinate of the conifold. A detailed analysis shows that we have four classes of end-points: 1. An \(AdS\)-like boundary where the scale factors of \(AdS\) and the sphere diverge. We shall denote this end-point as **B**. 2. A regular end-point where the scale factor of the sphere shrinks to zero sizes while the \(AdS\) factor asymptotes to a constant value. We shall denote this end-point as **R**. 3. A singular end-point in which the size of \(AdS\) vanishes while the size of the sphere diverges. We shall denote this end-point as **A**. 4. A singular end-point in which the size of the sphere vanishes while the size of \(AdS\) diverges. We shall denote this end-point as **S**. Only the first two of the four end-points correspond to a regular geometry. A solution is characterized by its two end-points along the radial (holographic) direction. We denote the class of a solution by its two end points, ie. (**B, R**) or (**B, S**), etc. In addition to end-points, a generic solution may or may not have an _A-bounce:_ this is a stationary point of one or both of the scale factors which then displays a local minimum or maximum away from the end-points. By analyzing the behavior of the scale factors near the A-bounces we recognize that the \(AdS_{d}\) or \(S^{n}\) can have at most one A-bounce. This restricts the classes of solutions with the above-mentioned end-points. Our analytical and numerical analysis leads to the following results: 1. There is only one class of solutions that are everywhere regular: these are solutions that have one regular end-point and one \(AdS\)-like boundary, i.e. (**B, R**). 2. If a solution has an A-bounce, then it also has at least one singular end-point. Therefore, we do not find any regular wormhole-like solution. 3. There do not exist solutions in which at both end-points the scale factor of the sphere shrinks to zero sizes, ie. solutions of the type (**R, R**), (**R, S**) and (**S, S**) do not exist. 4. 
We find two exact solutions of the Einstein equations: one of them is the global \(AdS_{d+n+1}\) space-time; the other is the product solution \(AdS_{d}\times AdS_{n+1}\). * **Space of solutions:** The space of solutions is three-dimensional as three initial conditions are needed to solve the equations. From the holographic point of view, these correspond to the two curvature scales of \(AdS_{d}\) and \(S^{n}\) and one vev parameter of the dual stress-energy tensor. However, one of these parameters can be scaled out and the physics of such solutions depends on two dimensionless parameters. They can be taken as the ratio of curvatures of \(AdS_{d}\) and \(S^{n}\) and the associated ratio for the vev. We analyze the transition between the above-mentioned solutions in the parameter space. In this space, we can follow how different regular/singular solutions change to each other as we move inside this space. There is a codimension-one subspace (a two dimensional surface) for regular solutions which ends on one side to the product space solution. * **QFT data on the boundary:** The Fefferman-Graham expansion near the \(AdS\)-like boundary (UV boundary) contains three parameters: two of them are the \(AdS\) and sphere curvatures \((R^{UV}_{AdS},R^{UV}_{S})\). Since the dual CFT is conformally invariant, the physics only depends on the ratio of these curvatures. The last parameter (\(C\)), is related to the vacuum expectation value and corresponds to parts of the vev of the components of the stress tensor. We can construct another dimensionless parameter from \(C\) and one of the \(AdS\) or sphere curvatures. Overall, we have two dimensionless ratios that describe the holographic QFT on the conformal boundary of a bulk solution. The value of \(C\) depends on the data of the IR end-point. Here the IR is the location of the regular end-point and the only relevant parameter remaining is the curvature of the \(AdS\) slice at this point. The value of \(C\) for the product space solution, \(AdS_{d}\times AdS_{n+1}\), diverges and for the global \(AdS\) space solution, it is zero as expected. For other regular solutions, it can be a positive or a negative number. * **Free energy:** The computation of the free energy for regular solutions shows that among these solutions, the global \(AdS\) solution has the maximum value. This implies that if one constructs the no-boundary wave-function along the lines of [24] the global \(AdS\) solution is the least probable state. All the previous conclusions hold when \(n>1\), as in this case, the sphere has non-zero positive curvature. The case \(n=1\) needs a separate analysis that is performed in section 9. The \(S^{1}\) can be interpreted as a Euclidean time, and the structure of the solutions is that of a black hole with a hyperbolic horizon. Such black holes are known as topological back holes, [61; 62]. In this case, we only have the following classes of solutions: 1. The regular solutions of \(({\bf R},{\bf B})\) type. This describes the solution outside the horizon of the black hole i.e. stretched from the horizon to the asymptotic boundary. 2. The singular solutions of \(({\bf R},{\bf A})\) type. This describes the solution behind the horizon of the black hole i.e. stretched from horizon to singularity. 3. The singular solutions of \(({\bf A},{\bf B})\) type. This describes a solution that is stretched from singularity to boundary (solutions with a naked singularity). 
The regular solutions appear in two classes: \(\bullet\) Black holes with two horizons (one event and one Cauchy horizon). In the limit where the two horizons coincide, we have an extremal black hole solution. \(\bullet\) Solutions with a single horizon. At the boundary of these solutions is the global AdS\({}_{d+2}\) solution. Known facts about topological black holes are collected in appendix F. We finally remark, that the techniques of the conifold ansatz with constant curvature slices can be used to find solutions at higher dimensions while solving only ODEs. It is not clear whether this algorithm captures all negative constant curvature metrics. ### Conformal Defects There is another context where conifold geometries with \(AdS\times S\) slices are relevant, namely in the study of conformal defects, [56]-[60]. Consider a \(D\)-dimensional QFT\({}_{D}\), with a \(d\)-dimensional defect in it. If the QFT\({}_{D}\) is defined on flat space then its generic symmetry is \(ISO(D)\). If it is a CFT\({}_{D}\), the symmetry is enhanced to conformal symmetry, i.e. \(O(D+1,1)\). Consider now a \(d\)-dimensional flat space defect, in QFT\({}_{D}\), localized on a \(d\)-dimensional hyperplane in \(R^{D}\). The symmetries that remain unbroken by the defect that is assumed to be a flat \(d\)-dimensional hyperplane, are \(ISO(d)\times SO(D-d)\). If the defect is conformally invariant on the d-dimensional world-volume5 then \(ISO(d)\) is enhanced to \(O(d+1,1)\) and the total symmetry becomes \(O(d+1,1)\times SO(D-d)\). Footnote 5: The generic case is that the bulk theory is a QFT\({}_{D}\) without conformal invariance, but that the defect theory is tuned to be conformally invariant. Examples of such theories can be found in [60]. The most common case, however, studied in the literature is that where the theory in the bulk is a CFT\({}_{D}\). In a holographic theory such a symmetry will be geometrically realized by a \(AdS_{d+1}\times S^{D-d-1}\) manifold6. Footnote 6: Interestingly, the flat \(D\)-dimensional metric is conformal to the metric of \(AdS_{d+1}\times S^{D-d-1}\) with the \(d\)-dimensional defect being identified with the \(d\)-dimensional boundary of \(AdS_{d+1}\). A special case is a conformal interface that has \(d=D-1\). In that case, the symmetry becomes \(O(D,1)\) and is geometrically realized by \(AdS_{D}\). Moreover, \(SO(1)\) is realized by \(S^{0}\) which are two distinct points (and this explains why in this case we have two boundaries). The holographic dual of this is given by holographic solutions with the \((D+1)\)-dimensional metric to be a conifold with \(AdS_{D}\) slices realizing the aforementioned symmetry. Similarly, in the case of general \(d\), we expect that the holographic ansatz will be a \((D+1)\)-dimensional conifold with \(AdS_{d+1}\times S^{D-d-1}\) slices. Therefore, the holographic ansatz we study in this paper is expected to also describe conformal \(d\)-dimensional defects in a holographic QFT\({}_{D}\). In particular, the structure of the generic solutions is such that their boundary has two components. One is the boundary of the total space, and this is conformal to \(AdS_{d+1}\times S^{D-d-1}\), which is also conformal to flat space7. There is another boundary, namely the union of the boundaries of the \(AdS_{d+1}\) slices. Insertions on that boundary correspond to defect operators. Footnote 7: There is a conical singularity around the defect if the curvatures of \(AdS_{d+1}\) and \(S^{D-d-1}\) are not the same. 
The bulk operators are in one-to-one correspondence with the gravitational fields, and their correlators are calculated by putting Dirichlet boundary conditions at the \(AdS_{d+1}\times S^{D-d-1}\) boundary8. The defect operators are in one-to-one correspondence again with the bulk gravitational fields but their correlators are now determined by putting boundary conditions at the boundary of \(AdS_{d+1}\). Clearly, this picture describes defects that do not carry additional degrees of freedom. Footnote 8: When there are non-trivial dynamical degrees of freedom on the defect this ceases to be true. The special analytic solutions found in this paper are interesting from this point of view. We consider the case \(n>1\) that corresponds to defects with codimension \(D-d\geq 3\). The global \(AdS_{D+1}\) solution seems to imply that the defect does not back-react in the induced CFT geometry as the total space is the same as the holographic dual of a CFT without the defect. Therefore this seems to correspond to trivial conformal defects associated with the identity operator of the CFT. The \(AdS_{d+1}\times AdS_{D-d}\) solution, on the other hand, seems to imply a complete decoupling between the defect and its transverse space. The boundary structure of this solution is different and it has two independent boundaries that in Poincare coordinates are \(R^{d}\) and \(S^{D-d-1}\). Insertions on these boundaries provide correlators for the defect and its transverse theory. Obviously, these correlators are completely independent. In particular, all one-point functions vanish. The study of small graviton fluctuations around this geometry indicates that there is no flow of energy between defect and bulk. In the case of \(n=1\) or \(D-d=2\), again the global \(AdS_{D+1}\) solution should correspond to trivial defects. On the other hand, the product solution is now \(AdS_{d+1}\times{\cal M}_{2}\) where \({\cal M}_{2}\) are the three spaces \(EAdS_{2}^{\pm,0}\) described in section 9.2. They have one or two \(AdS_{2}\) boundaries. In analogy with extremal black holes whose horizon contains \(AdS_{2}\) factors, we would expect also here similar phenomena: a one-dimensional scale invariance as well as a quantum mode that does not decouple at low temperatures. Further analysis is needed in order to substantiate such claims. ### Outlook There is one more case of constant negative curvature manifolds that can be written as conifolds that remains to be systematically studied: that where the slices are products of negative curvature manifolds. The regular solutions found here descend via dimensional reduction on \(S^{n}\) to solutions of Einstein dilaton gravity with a dilaton potential that has confining asymptotics, [54]. They imply the correct way of desingularizing the asymptotic singular solutions of the Einstein-dilaton theory. This is an interesting domain as it will teach us about confining theories on \(AdS\). Finally, the implications of our solutions for conformal defects need to be examined. There are several questions in this direction that involve quantitative questions like correlation functions both in the bulk and the defect as well as the dynamics of symmetries broken by the defect. In particular, an interesting question involves the construction of non-trivial defect flows in the holographic context. This is in principle straightforward in the holographic context, as such flows will involve solutions that will depend on two radial coordinates, \(u\) and the radial coordinate of the AdS slice. 
The relevant boundary conditions are that the solutions are vev only at the \(u\)-boundary while they have sources on the slice AdS boundary. Special solutions of this type have been considered in [41]. We plan to study this further in the near future. The structure of this paper is as follows: In section 2 we derive the equations of motion for a metric with a domain wall holographic coordinate and slices which in general are the product of Einstein manifolds. In section 3 we compute the asymptotic expansions near the boundary, singular and regular end-points for \(AdS_{d}\times S^{n}\) slices. We also explore the possibility of having A-bounces in the scale factors of \(AdS_{d}\) or \(S^{n}\). In section 4, we present two exact solutions of the theory, the global \(AdS_{d+n+1}\) and product space solution \(AdS_{d}\times AdS_{n+1}\). In sections 5 and 6, we show all the numerical solutions that we found and how they are related to each other through a three-dimensional space of solutions. In section 7, we extract the boundary CFT data of the regular solutions and identify the dimensionless parameters that characterize the CFT. Using this data, we calculate the on-shell action and the renormalized free energy in section 8. In section 9, we focus on the special case of \(AdS_{d}\times S^{1}\) and use a suitable coordinate transformation to obtain exact solutions of the equations of motion. Then we discuss their properties. In section 10, we comment on how to generalize our solutions to conifolds of conifolds. ## 2 Constant negative curvature solutions with \(AdS_{d}\times S^{n}\) slices ### The general conifold ansatz We consider an Einstein theory in a \(d+1\) dimensional bulk space-time parametrized by coordinates \(x^{a}\equiv(u,x^{\mu})\) where \(u\) is the holographic coordinate. The most general two-derivative action is \[S=M_{P}^{d-1}\int d^{d+1}x\sqrt{-g}\big{(}R-\Lambda\big{)}+S_{GHY}\,, \tag{1}\] where \(M_{P}\) is the \(d+1\) dimensional Plank mass. In this action \(g_{ab}\) is the bulk metric, \(R\) is its associated Ricci scalar and \(\Lambda\) is a cosmological constant. The surface term \(S_{GHY}\) is the Gibbons-Hawking-York term at the space-time boundary (e.g. the UV boundary if the bulk is asymptotically \(AdS\)). The bulk field equations of motion are given by \[R_{ab}-\frac{1}{2}g_{ab}(R-\Lambda)=0\,. \tag{2}\] We shall consider a (holographic) boundary QFT defined on a space that is a product of Einstein manifolds. The natural bulk metric ansatz that preserves all the original symmetries of the boundary metric, is given in terms of a domain wall holographic coordinate \(u\) and a conifold ansatz (for both Euclidean and Lorentzian signatures) \[ds^{2}=g_{ab}dx^{a}dx^{b}=du^{2}+\sum_{i=1}^{n}\mathrm{e}^{2A_{i}(u)}\zeta^{i} _{\alpha_{i},\beta_{i}}dx^{\alpha_{i}}dx^{\beta_{i}}\,. \tag{3}\] Here the geometry of the constant \(u\) slices are products of \(n\) Einstein manifolds, each with metric \(\zeta^{i}_{\alpha_{i},\beta_{i}}\), dimension \(d_{i}\) and coordinates \(x^{\alpha_{i}}\), \(\alpha_{i}=1,2,...,d_{i}\). Each Einstein manifold is associated with a different scale factor \(A_{i}(u)\), which depends on the coordinate \(u\) only. Therefore, every \(d\)-dimensional slice at constant \(u\) is given by the product of \(n\) Einstein manifolds of dimension \(d_{1},...,d_{n}\). This is the conifold ansatz. 
Since \(\zeta^{i}_{\mu\nu}\) are Einstein manifolds, the following relations hold \[R^{(\zeta^{i})}_{\mu\nu}=\kappa_{i}\zeta^{i}_{\mu\nu}\,\,\,\,\,,\,\,\,\,\,\,R^ {(\zeta^{i})}=d_{i}\kappa_{i}\,, \tag{4}\] where \(\kappa_{i}\) is the (constant) scalar curvature scale of the \(i\)th manifold and no sum on \(i\) is implied. We have the identity \[\sum_{i=1}^{n}\,\,d_{i}=d\,. \tag{5}\] In the case of maximal symmetry, the scalar curvatures are \[\kappa_{i}=\left\{\begin{array}{ccc}\frac{(d_{i}-1)}{\alpha_{ i}^{2}}&dS_{d_{i}}\,\,\,\,\mathrm{or}\,\,\,S^{d_{i}}\\ 0&\mathcal{M}_{d_{i}}\\ -\frac{(d_{i}-1)}{\alpha_{i}^{2}}&AdS_{d_{i}}\end{array}\right.\,, \tag{6}\] where \(\alpha_{i}\) are associate radii and \({\cal M}_{d_{i}}\) denotes \(d_{i}\)-dimensional Minkowski space. The non-trivial components of Einstein's equation from (2) are \[\Big{(}\sum_{k=1}^{n}d_{k}\dot{A}_{k}\Big{)}^{2}-\sum_{k=1}^{n}d_{k }\dot{A}_{k}^{2}-\sum_{k=1}^{n}\mathrm{e}^{-2A_{k}}R^{\zeta^{k}}+\Lambda=0\quad,\quad uu \tag{7}\] \[2(1-\frac{1}{d})\sum_{k=1}^{n}d_{k}\ddot{A}_{k}+\frac{1}{d}\sum_ {i,j=1}^{n}d_{i}d_{j}(\dot{A}_{i}-\dot{A}_{j})^{2}+\frac{2}{d}\sum_{k=1}^{n} \mathrm{e}^{-2A_{k}}R^{\zeta^{k}}=0\quad,\quad ii\] (8) \[\ddot{A}_{i}+\dot{A}_{i}\sum_{k=1}^{n}d_{k}\dot{A}_{k}-\frac{1}{d _{i}}\mathrm{e}^{-2A_{i}}R^{\zeta^{i}}=\ddot{A}_{j}+\dot{A}_{j}\sum_{k=1}^{n} d_{k}\dot{A}_{k}-\frac{1}{d_{j}}\mathrm{e}^{-2A_{j}}R^{\zeta^{j}}\quad,\quad i\neq j \tag{9}\] where the derivatives with respect to \(u\) are denoted by a dot. The details of computations are found in appendix A. The above equations are the same for both Lorentzian and Euclidean signatures of the slices, so all our results hold for both cases. Holographic saddle points are in one-to-one correspondence with the regular solutions to the equations (7)-(9). Hence, in the following, we shall be interested in the structure and properties of solutions to these equations, specifically for a negative cosmological constant \(\Lambda\). To check the regularity of the solutions, we analyze scalar invariants of curvatures. For example (see appendix A.1 for more details) the Ricci scalar is given by: \[R=-2\sum_{i=1}^{n}d_{i}\ddot{A}_{i}-\big{(}\sum_{i=1}^{n}d_{i}\dot{A}_{i} \big{)}^{2}-\sum_{i=1}^{n}d_{i}\dot{A}_{i}^{2}+\sum_{i=1}^{n}\mathrm{e}^{-2A_ {i}}R^{\zeta^{i}}\,, \tag{10}\] while the Ricci squared scalar reads \[R_{ab}R^{ab}=\Big{(}\sum_{i=1}^{n}d_{i}(\ddot{A}_{i}+\dot{A}_{i}^{2})\Big{)}^ {2}+\sum_{i=1}^{n}d_{i}\Big{(}\mathrm{e}^{-2A_{i}}\kappa-\big{(}\ddot{A}_{i}+ \dot{A}_{i}\sum_{j=1}^{n}d_{j}\dot{A}_{j}\big{)}\Big{)}^{2}\,. \tag{11}\] Moreover, the Kretschmann scalar, \({\cal K}=R_{abcd}R^{abcd}\) is given by \[{\cal K} =\sum_{i=1}^{n}\Big{(}e^{-4A_{i}}{\cal K}^{\zeta^{i}}-4e^{-2A_{i }}\dot{A}_{i}^{2}R^{\zeta^{i}}-2d_{i}\dot{A}_{i}^{4}\] \[+4d_{i}(\ddot{A}_{i}+\dot{A}_{i}^{2})^{2}\Big{)}+\sum_{i,j=1}^{n} 2d_{i}d_{j}\big{(}\dot{A}_{i}\dot{A}_{j}\big{)}^{2}\,, \tag{12}\] where \({\cal K}^{\zeta^{i}}\) is the Kretschmann scalar of the \(\zeta^{i}\) metric. ### The \(AdS_{d}\times S^{n}\) slice We now specialize the general conifold ansatz to the main subject of investigation of this paper, namely the bulk holographic description of QFTs living on \(AdS_{d}\times S^{n}\) space-time. The metric (3) in this case is \[ds^{2}=du^{2}+e^{2A_{1}(u)}\zeta_{\alpha\beta}^{1}dx^{\alpha}dx^{\beta}+e^{2A_ {2}(u)}\zeta_{\mu\nu}^{2}dx^{\mu}dx^{\nu}\,, \tag{13}\] where \(\zeta^{1}\) and \(\zeta^{2}\) are the \(AdS_{d}\) and \(S^{n}\) metrics respectively. 
We have set the dimensions of the Einstein manifolds to \(d_{1}=d\) and \(d_{2}=n\). The non-trivial components of Einstein's equation are \[\big{(}d\dot{A}_{1}+n\dot{A}_{2}\big{)}^{2}-d\dot{A}_{1}^{2}-n\dot{A}_{2}^{2}-e^ {-2A_{1}}R_{1}-e^{-2A_{2}}R_{2}+\Lambda=0\,, \tag{14}\] \[(d+n-1)\big{(}d\ddot{A}_{1}+n\ddot{A}_{2}\big{)}+dn(\dot{A}_{1}-\dot{A}_{2})^{2 }+e^{-2A_{1}}R_{1}+e^{-2A_{2}}R_{2}=0\,, \tag{15}\] \[\ddot{A}_{1}+\dot{A}_{1}(d\dot{A}_{1}+n\dot{A}_{2})-\frac{1}{d}e^{-2A_{1}}R_{1 }=\ddot{A}_{2}+\dot{A}_{2}(d\dot{A}_{1}+n\dot{A}_{2})-\frac{1}{n}e^{-2A_{2}}R_ {2}\,, \tag{16}\] where we have defined \[R_{1}\equiv R^{\zeta^{1}}\quad,\quad R_{2}\equiv R^{\zeta^{2}}\.\] To check the regularity of the solutions we need to know the Kretschmann scalar from (12). In the geometry (13), it is given by \[\mathcal{K} =e^{-4A_{1}}\mathcal{K}_{1}+e^{-4A_{2}}\mathcal{K}_{2}-4e^{-2A_{ 1}}R_{1}\dot{A}_{1}^{2}-4e^{-2A_{2}}R_{2}\dot{A}_{2}^{2}+2d(d-1)\dot{A}_{1}^{4}\] \[+2n(n-1)\dot{A}_{2}^{4}+4nd\dot{A}_{1}^{2}\dot{A}_{2}^{2}+4d( \ddot{A}_{1}+\dot{A}_{1}^{2})^{2}+4n(\ddot{A}_{2}+\dot{A}_{2}^{2})^{2}\,, \tag{17}\] where \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) are the Kretschmann scalars for \(AdS_{d}\) and \(S^{n}\) respectively \[\mathcal{K}_{1}=\frac{2}{d(d-1)}R_{1}^{2}\quad,\quad\mathcal{K}_{2}=\frac{2}{ n(n-1)}R_{2}^{2}\,. \tag{18}\] ## 3 Regular and singular asymptotic of the solutions In the rest of this paper we parametrize the value of the cosmological constant as \[\Lambda=-\frac{1}{\ell^{2}}(d+n)(d+n-1)\,. \tag{19}\] The gravitational equations demand a constant negative curvature Einstein manifold and read \[\big{(}d\dot{A}_{1}+n\dot{A}_{2}\big{)}^{2}-d\dot{A}_{1}^{2}-n \dot{A}_{2}^{2}-e^{-2A_{1}}R_{1}-e^{-2A_{2}}R_{2}=\frac{1}{\ell^{2}}(d+n)(d+n- 1)\,, \tag{20}\] \[(d+n-1)\big{(}d\ddot{A}_{1}+n\ddot{A}_{2}\big{)}+dn(\dot{A}_{1}- \dot{A}_{2})^{2}+e^{-2A_{1}}R_{1}+e^{-2A_{2}}R_{2}=0\,,\] (21) \[\ddot{A}_{1}+\dot{A}_{1}(d\dot{A}_{1}+n\dot{A}_{2})-\frac{1}{d}e^ {-2A_{1}}R_{1}=\ddot{A}_{2}+\dot{A}_{2}(d\dot{A}_{1}+n\dot{A}_{2})-\frac{1}{n }e^{-2A_{2}}R_{2}\,. \tag{22}\] In this section, we find the expansions of the \(AdS_{d}\) and \(S^{n}\) scale factors (\(A_{1}\) and \(A_{2}\)) near the \(AdS\) (UV) boundary, the end-points, and the A-bounces9. Using these expansions we can search and classify various regular and singular bulk solutions and extract the values of sources and vevs of the dual boundary CFTs. Footnote 9: A-bounces are points where any scale factor \(A\) changes direction, i.e. \(\dot{A}=0\). ### Near-boundary expansions The Fefferman-Graham expansion of the (13) metric near a UV boundary, which can be reached either as \(u\to+\infty\) or at \(u\to-\infty\), is \[ds^{2} =du^{2}+e^{\pm\frac{2u}{\bar{A}_{1}}}(ds^{2}_{QFT}+\cdots)\] \[=du^{2}+e^{\pm\frac{2u}{\bar{A}_{1}}}\Big{[}e^{2\bar{A}_{1}} \zeta^{1}_{\alpha\beta}dx^{\alpha}dx^{\beta}+e^{2\bar{A}_{2}}\zeta^{2}_{\mu\nu} dx^{\mu}dx^{\nu}\Big{]}+\text{sub-leading}\,, \tag{15}\] where \(\bar{A}_{1},\bar{A}_{2}\) are arbitrary constants. Therefore, the holographic CFT will be living on a boundary with geometry \(AdS_{d}\times S^{n}\), with metric given by the square bracket in equation (15) and with the corresponding curvatures given by \[R_{1}^{UV}=e^{-2\bar{A}_{1}}R_{1}\;\;\;,\;\;\;R_{2}^{UV}=e^{-2\bar{A}_{2}}R_{2 }\,. \tag{16}\] In the expression above, \(R_{1}\) and \(R_{2}\) are the scalar curvatures of the metrics \(\zeta_{1}\) and \(\zeta_{2}\) of \(AdS_{d}\) and \(S^{n}\), respectively. 
We parametrize them by introducing the corresponding curvature radii \[R_{1}=-\frac{d(d-1)}{\alpha_{1}^{2}}\;\;\;,\;\;\;R_{2}=\frac{n(n-1)}{\alpha_{2 }^{2}}\,, \tag{17}\] where \(\alpha_{1}\) and \(\alpha_{2}\) are the associated radii of the \(AdS\) and \(S\) spaces. As equations (12)-(14) show, we have two second-order equations plus one first-order constraint for the two scale factors \(A_{1}(u)\) and \(A_{2}(u)\). This system has three integration constants. Two of them are shifts of \(\bar{A}_{1}\) and \(\bar{A}_{2}\) which can be fixed by demanding that \(R_{1}\) and \(R_{2}\) coincide with the actual curvatures of the manifold on which the UV boundary theory is defined according to the holographic dictionary, i.e. the relations in (16). The last integration constant enters at sub-leading order in the asymptotic (UV) expansion in (15), and therefore it corresponds to a vacuum expectation value. Since in our model, the only non-trivial bulk field is the metric, it must correspond to parts of the vev of the components of the stress tensor. As we have shown in appendix D for the specific cases of \(d=n=2\), this constant, called \(C\), appears in the expectation values of the stress-energy tensor of both \(AdS_{2}\) and \(S^{2}\), as seen in equations (14a) and (14b). The value of the third constant will be fixed once we impose the regularity in the interior. Since the dual CFT is conformally invariant, the physics depends only on the ratio of the curvature scales of \(AdS_{d}\) and \(S^{n}\) which is the only dimensionless source parameter of our problem. Solving the equations of motion (12)-(14), near the putative boundary either at \(u\to+\infty\) or \(u\to-\infty\) gives expansions for scale factors of \(AdS_{d}\) and \(S^{n}\) spaces. For \(d=n=4\) we find the following expansions,10 \[A_{1}(u)\!=\!\bar{A}_{1}\pm\frac{u}{\ell}-\frac{1}{2^{4}3^{1}7^{1}}( 5\mathcal{R}_{1}-2\mathcal{R}_{2})e^{\mp\frac{3u}{\ell}}-\frac{1}{2^{9}3^{2}7^{2 }}(46\mathcal{R}_{1}^{2}-20\mathcal{R}_{1}\mathcal{R}_{2}-17\mathcal{R}_{2}^{2} )e^{\mp\frac{4u}{\ell}}\] \[-\big{(}\frac{1}{2^{12}3^{4}7^{3}}(356\mathcal{R}_{1}^{3}-66 \mathcal{R}_{1}^{2}\mathcal{R}_{2}-171\mathcal{R}_{1}\mathcal{R}_{2}^{2}-92 \mathcal{R}_{2}^{3})\big{)}e^{\mp\frac{6u}{\ell}}\] \[-\big{(}\frac{1}{2^{19}3^{4}7^{4}}(2111\mathcal{R}_{1}^{4}-1160 \mathcal{R}_{1}^{3}\mathcal{R}_{2}-1740\mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2} -1160\mathcal{R}_{1}\mathcal{R}_{2}^{3}+2111\mathcal{R}_{2}^{4})+C\big{)}e^{ \mp\frac{8u}{\ell}}\] \[\pm\frac{1}{2^{16}3^{3}7^{3}}\big{(}23\mathcal{R}_{1}^{4}-52 \mathcal{R}_{1}^{3}\mathcal{R}_{2}+52\mathcal{R}_{1}\mathcal{R}_{2}^{3}-23 \mathcal{R}_{2}^{4}\big{)}\frac{u}{\ell}e^{\mp\frac{8u}{\ell}}+\mathcal{O}(e ^{\mp\frac{10u}{\ell}})\,, \tag{10a}\] \[A_{2}(u)\!=\!\bar{A}_{2}\pm\frac{u}{\ell}+\frac{1}{2^{4}3^{1}7^{ 1}}(2\mathcal{R}_{1}-5\mathcal{R}_{2})e^{\mp\frac{2u}{\ell}}+\frac{1}{2^{9}3^ {2}7^{2}}(17\mathcal{R}_{1}^{2}+20\mathcal{R}_{1}\mathcal{R}_{2}-46\mathcal{ R}_{2}^{2})e^{\mp\frac{4u}{\ell}}\] \[+\big{(}\frac{1}{2^{12}3^{4}7^{3}}(92\mathcal{R}_{1}^{3}+171 \mathcal{R}_{1}^{2}\mathcal{R}_{2}+66\mathcal{R}_{1}\mathcal{R}_{2}^{2}-36 \mathcal{R}_{2}^{3})\big{)}e^{\mp\frac{6u}{\ell}}\] \[-\big{(}\frac{1}{2^{19}3^{4}7^{4}}(2111\mathcal{R}_{1}^{4}-1160 \mathcal{R}_{1}^{3}\mathcal{R}_{2}-1740\mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2} -1160\mathcal{R}_{1}\mathcal{R}_{2}^{3}+2111\mathcal{R}_{2}^{4})-C\big{)}e^{ \mp\frac{8u}{\ell}}\] \[\mp\frac{1}{2^{16}3^{3}7^{3}}\big{(}23\mathcal{R}_{1}^{4}-52 
\mathcal{R}_{1}^{3}\mathcal{R}_{2}+52\mathcal{R}_{1}\mathcal{R}_{2}^{3}-23 \mathcal{R}_{2}^{4}\big{)}\frac{u}{\ell}e^{\mp\frac{8u}{\ell}}+\mathcal{O}(e^ {\mp\frac{10u}{\ell}})\,. \tag{10b}\] Here \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) are dimensionless curvature parameters, defined as \[\mathcal{R}_{1}\equiv\ell^{2}R_{1}e^{-2\bar{A}_{1}}=\ell^{2}R_{1}^{UV}\quad, \quad\mathcal{R}_{2}\equiv\ell^{2}R_{2}e^{-2\bar{A}_{2}}=\ell^{2}R_{2}^{UV}\,. \tag{11}\] The constant \(C\) that appeared in the above equations is proportional to the vev of the stress-energy tensor of the boundary CFT that we already discussed above, see also appendix D for more details. A similar argument can be found in [15] for holographic CFTs on \(S^{2}\times S^{2}\). We also note that the coefficients of \(\frac{u}{\ell}e^{\mp\frac{8u}{\ell}}\) in (10a) and (10b) reflect the conformal anomaly in \(d+n=8\) dimensions. To see more details in \(d+n=4\) see appendix D or [63; 64]. ### Regular and singular end-points We now study the geometry close to an (IR) end-point, i.e. a point \(u=u_{0}\) where one or both scale factors of the \(AdS\) and \(S\) shrink to zero. At this point, the \(u\) direction terminates11. Such an end-point may be regular, or it may be a curvature singularity. In the latter case, from the point of view of holography, the associated solution has to be rejected. Footnote 11: If it is the \(AdS_{d}\) scale factor that shrinks to zero, and the \(AdS\) has Minkowski signature, this point is a horizon, [12]. However, as we shall see, this kind of end-point is always singular. Given such an endpoint, we now work out an expansion of the solution near it and compute the Kretschmann scalar. This will determine if this end-point is regular or singular. To solve equations of motion near \(u=u_{0}\) (\(u\to u_{0}^{+}\)), we consider the following expansions for scale factors12 Footnote 12: A power-law leading behavior, \(A_{1}=\kappa_{1}(u-u_{0})^{a}+\cdots\) and \(A_{2}=\kappa_{2}(u-u_{0})^{b}+\cdots\) in which \(a,b<0\) cannot solve Einstein’s equations near \(u=u_{0}\), as one can show along the lines of [15]. Other non-power-law behaviors do not produce solutions. \[A_{1}(u)=\lambda_{1}\log\frac{u-u_{0}}{\ell}+\frac{1}{2}\log a_{0 }+a_{1}\frac{u-u_{0}}{\ell}+a_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u- u_{0})^{3}\,, \tag{3.10a}\] \[A_{2}(u)=\lambda_{2}\log\frac{u-u_{0}}{\ell}+\frac{1}{2}\log s_{0 }+s_{1}\frac{u-u_{0}}{\ell}+s_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u- u_{0})^{3}\,. \tag{3.10b}\] The constants appearing in the above expansions determine the behavior (regularity or singularity) of the end-point at \(u=u_{0}\). 
Inserting the first two leading terms in the above expansions into the equations of motion (3.2)-(3.4) we obtain \[\frac{(d+n-1)(d+n)}{\ell^{2}}+\frac{\frac{r_{1}}{\ell^{2}}}{(u-u_{0})^{2 \lambda_{1}}}+\frac{\frac{r_{2}}{\ell^{2}}}{(u-u_{0})^{2\lambda_{2}}}-\frac{ (d\lambda_{1}+n\lambda_{2})^{2}-d\lambda_{1}^{2}-n\lambda_{2}^{2}}{(u-u_{0})^{ 2}}+\cdots=0\,, \tag{3.11}\] \[dn(\lambda_{1}-\lambda_{2})^{2}-(d+n-1)(d\lambda_{1}+\lambda_{2}n)+\frac{ \frac{r_{1}}{\ell^{2}}}{(u-u_{0})^{2\lambda_{1}-2}}+\frac{\frac{r_{2}}{\ell^{ 2}}}{(u-u_{0})^{2\lambda_{2}-2}}+\cdots=0\,, \tag{3.12}\] \[\frac{n\frac{r_{1}}{\ell^{2}}}{(u-u_{0})^{2\lambda_{1}-2}}-\frac{d\frac{r_{2} }{\ell^{2}}}{(u-u_{0})^{2\lambda_{2}-2}}+dn(\lambda_{2}-\lambda_{1})(d\lambda_ {1}+n\lambda_{2}-1)+\cdots=0\,, \tag{3.13}\] where we have defined \[r_{1}\equiv\frac{\ell^{2}R_{1}}{a_{0}}\quad,\quad r_{2}\equiv\frac{\ell^{2}R _{2}}{s_{0}}\,. \tag{3.14}\] By an exhaustive analysis of the above equations for various regions of \(\lambda_{1}\) and \(\lambda_{2}\), we find the following possibilities for \(\lambda_{1}\) and \(\lambda_{2}\): * Singular end-point: (\(1>\lambda_{1}>0\) and \(0>\lambda_{2}>-1\)) or (\(0>\lambda_{1}>-1\) and \(1>\lambda_{2}>0\)). * Regular end-point (sphere shrinking): \(\lambda_{1}=0,\lambda_{2}=1\). For other values of \(\lambda_{1}\) and \(\lambda_{2}\), for example, \(\lambda_{1}>1\) or \(\lambda_{2}>1\) or both, or for example when \(\lambda_{1}=1,\lambda_{2}=0\) where the \(AdS\) is shrinking to zero sizes, we find no solution for equations (3.11)-(3.13). When solving equations (3.11)-(3.13), the values of \(\lambda_{1}\) and \(\lambda_{2}\) are fixed. Moreover, we find \(a_{0}\) and \(s_{0}\) (and \(u_{0}\)) as free parameters and \[a_{1}=s_{1}=0\,, \tag{3.15a}\] \[a_{2}=\frac{(d+n)\big{(}d(2\lambda_{1}-2\lambda_{2}-1)-n+1\big{)} }{4d(\lambda_{1}-\lambda_{2})(2d\lambda_{1}+2n\lambda_{2}+1)}\,,\] (3.15b) \[s_{2}=\frac{(d+n)\big{(}n(2\lambda_{1}-2\lambda_{2}+1)+d-1\big{)} }{4n(\lambda_{1}-\lambda_{2})(2d\lambda_{1}+2n\lambda_{2}+1)}\,, \tag{3.15c}\] and all higher coefficients of the expansion can be similarly determined. #### 3.2.1 Singular end-points We may have solutions that one of the scale factors shrinks but the other one blows up when \(u\to u_{0}^{+}\). Here we find only two possible cases: * \(1>\lambda_{1}>0\,,\quad 0>\lambda_{2}>-1\): In this case, the \(AdS_{d}\) scale factor vanishes and the \(S^{n}\) scale factor diverges. We have named this asymptotic \(A_{0}S_{\infty}\). We obtain \[\lambda_{1}=\frac{\sqrt{dn(d+n-1)}+d}{d(d+n)}>0\quad,\quad\lambda_{2}=\frac{n- \sqrt{dn(d+n-1)}}{n(d+n)}<0\,.\] (3.16) * \(1>\lambda_{2}>0\,,\quad 0>\lambda_{1}>-1\): In this class of solutions, the \(AdS_{d}\) size is growing and the \(S^{n}\) size is shrinking. We have named this asymptotic \(A_{\infty}S_{0}\). 
We have the following solution \[\lambda_{1}=\frac{d-\sqrt{dn(d+n-1)}}{d(d+n)}\quad,\quad\lambda_{2}=\frac{n+ \sqrt{dn(d+n-1)}}{n(d+n)}\,.\] (3.17) For both cases above, the Kretschmann scalar is singular as \(u\to u_{0}\) \[\mathcal{K} =-\frac{4\lambda_{1}^{2}r_{1}}{\ell^{4}}\left(\frac{u-u_{0}}{ \ell}\right)^{-2\lambda_{1}-2}-\frac{4\lambda_{2}^{2}r_{2}}{\ell^{4}}\left( \frac{u-u_{0}}{\ell}\right)^{-2\lambda_{2}-2}\] \[+\mathcal{O}\Big{(}\frac{u-u_{0}}{\ell}\Big{)}^{-4\lambda_{1}}+ \mathcal{O}\Big{(}\frac{u-u_{0}}{\ell}\Big{)}^{-4\lambda_{2}}\,.\] (3.18) #### 3.2.2 Regular end-points Consider the case when the scale factor of \(S^{n}\) shrinks to zero sizes as \(u\to u_{0}^{+}\), but the \(AdS_{d}\) has a finite size at this point, corresponding to (\(\lambda_{1}=0,\lambda_{2}=1\)). The position \(u_{0}\) is arbitrary, as it can be changed by a shift in \(u\) (which however may change the value of the near-boundary parameters). Solving the equations of motion using the expansion (3.10a) and (3.10b), we find the following expansions for the scale factors (\(\lambda_{1}=0,\lambda_{2}=1\)) \[e^{2A_{1}(u)} =a_{0}+\frac{a_{0}d(d+n)+\ell^{2}R_{1}}{d\ell^{2}(1+n)}(u-u_{0})^ {2}\] \[-\frac{(a_{0}d(d+n)+\ell^{2}R_{1})(a_{0}d(d-n-4)(d+n)+(d-3)\ell^{2 }R_{1})}{3a_{0}d^{2}\ell^{4}(1+n)^{2}(3+n)}(u-u_{0})^{4}\] \[+\mathcal{O}(u-u_{0})^{6}\,, \tag{3.19a}\] \[e^{2A_{2}(u)} =\frac{R_{2}}{n(n-1)}(u-u_{0})^{2}+\frac{(a_{0}(d-d^{2}+n+n^{2})- \ell^{2}R_{1})R_{2}}{3a_{0}\ell^{2}n^{2}(n^{2}-1)}(u-u_{0})^{4}\] \[+\mathcal{O}(u-u_{0})^{6}\,, \tag{3.19b}\] which is valid for all values of \(d,n>1\). The quantity \(a_{0}\) is a non-zero positive (but otherwise arbitrary) constant. Computing the Kretschmann scalar (17) at \(u=u_{0}\) we shows that \[\mathcal{K}=\frac{2(d+n)^{2}}{\ell^{4}n(n+1)}\Big{[}(d-2)d+(n+1)^{2}+\frac{(d+n -1)\,(2\bar{a}_{0}(d-1)d+1)}{\bar{a}_{0}^{2}(d-1)d(d+n)}\Big{]}+\mathcal{O}(u- u_{0})\,, \tag{20}\] where \[\bar{a}_{0}\equiv\frac{a_{0}}{\ell^{2}R_{1}}\,. \tag{21}\] Equation (20) implies that at this end-point the geometry is regular. For comparison, the Kretschmann scalar of an \(AdS_{d+n+1}\) space with length scale \(\ell\) is constant everywhere and is given by \[\mathcal{K}_{AdS}=\frac{2(d+n)(d+n+1)}{\ell^{4}}\,. \tag{22}\] We obtain \[\mathcal{K}-\mathcal{K}_{AdS}=\frac{2(d+n)(d+n-1)}{\ell^{4}\;\bar{a}_{0}^{2}d (d-1)n(n+1)}\,(d(d-1)\bar{a}_{0}+1)^{2}+\mathcal{O}(u-u_{0})\,, \tag{23}\] which suggests that at \(\bar{a}_{0}=-\frac{1}{d(d-1)}\) we obtain \(AdS_{n+d+1}\). We shall verify this in section 4. This class of solutions has only two arbitrary parameters, \(a_{0},u_{0}\), and is, therefore, a "tuned" solution as we implemented regularity. We should note that at this regular end-point, we always have \[\dot{A}_{1}\sim(u-u_{0})\;\;\;,\;\;\;\dot{A}_{2}\sim\frac{1}{u-u_{0}}\,, \tag{24}\] and \[\left\{\begin{array}{ll}\ddot{A}_{1}\geq 0\,,&a_{0}\geq-\frac{\ell^{2}R_{1}}{ d(d+n)}\,,\\ \\ \ddot{A}_{1}<0\,,&\mbox{otherwise}\,.\end{array}\right. \tag{25}\] When \(\ddot{A}_{1}<0\) it might be expected that the \(AdS\) space shrinks to zero at some point \(u>u_{0}\). We shall find such solutions in the next sections. On the other hand, we may also consider that \(AdS_{d}\) shrinks to zero sizes while the size of \(S^{n}\) is finite. 
This corresponds to \(\lambda_{1}=1,\lambda_{2}=0\) in the expansions (20a) and (20b), and from them, the expansions of the scale factors can be written as \[e^{2A_{1}(u)}=a_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\; \;\;,\;\;\;e^{2A_{2}(u)}=s_{0}+2s_{0}s_{1}\frac{u-u_{0}}{\ell}+\mathcal{O}(u- u_{0})^{2}\,, \tag{26}\] which by inserting into the equations of motion we find that \[a_{2}=\frac{R_{1}}{d(d-1)}<0\,. \tag{27}\] With our initial signature, \(e^{A_{1}}>0\), and therefore this case is not possible. _We can not have a solution that the \(AdS\) scale factor vanishes while the sphere scale is finite_. ### Solutions with A-bounces and monotonic solutions Except for the shrinking of \(AdS_{d}\) and \(S^{n}\) factors, we can also have places where \(\dot{A}_{1,2}=0\) and then the evolution of scale factors is not monotonic. We call points where \(\dot{A}_{1,2}=0\) "A-bounces". We shall investigate such a regime in this section. Consider the case in which the arbitrary point \(u=u_{0}\) is an A-bounce. In general, the expansions of the scale factors around such a point (that is a regular point of the equations) can be written as 13 Footnote 13: These expansions are the expansion in (3.10a) and (3.10b) for \(\lambda_{1}=\lambda_{2}=0\). The constant parameters are denoted by a hat to distinguish them from the end-point parameters. \[A_{1}(u) =\frac{1}{2}\log\hat{a}_{0}+\hat{a}_{1}\frac{u-u_{0}}{\ell}+\hat{ a}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,, \tag{3.28a}\] \[A_{2}(u) =\frac{1}{2}\log\hat{s}_{0}+\hat{s}_{1}\frac{u-u_{0}}{\ell}+\hat {s}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,. \tag{3.28b}\] From equations of motion (3.2)-(3.4) we know that all unknown coefficients above can be written as functions of three arbitrary constants. We choose these constants to be \(\hat{a}_{0},\hat{s}_{0}\) and \(\hat{s}_{1}\). From the equations, we obtain \[\hat{a}_{1}=\frac{-nd\hat{a}_{0}\hat{s}_{0}\hat{s}_{1}\pm\chi}{d( d-1)\hat{a}_{0}\hat{s}_{0}}\,, \tag{3.29a}\] \[\hat{a}_{2}=\frac{-\left(d\left(n(d+n)\hat{s}_{0}+\ell^{2}R_{2} \right)\hat{a}_{0}+\ell^{2}R_{1}\hat{s}_{0}\right)}{2(d-1)d\hat{a}_{0}\hat{s}_ {0}}\] \[\qquad+\frac{dn(1-d-2n)\hat{a}_{0}\hat{s}_{0}\hat{s}_{1}^{2}\pm(d +1)n\hat{s}_{1}\chi}{2\hat{a}_{0}(d-1)^{2}d\hat{s}_{0}}\,,\] (3.29b) \[\hat{s}_{2}=\frac{(d-1)\hat{a}_{0}\left(n(d+n)\hat{s}_{0}+\ell^{ 2}R_{2}\right)+n^{2}\hat{a}_{0}\hat{s}_{0}\hat{s}_{1}^{2}\mp n\hat{s}_{1}\chi} {2(d-1)n\hat{a}_{0}\hat{s}_{0}}\,, \tag{3.29c}\] where \[\chi\equiv\left(d\hat{a}_{0}\hat{s}_{0}\big{[}(d-1)\big{(}(d+n-1 )(d+n)\hat{a}_{0}\hat{s}_{0}+\ell^{2}(\hat{a}_{0}R_{2}+\hat{s}_{0}R_{1})\big{)}\right.\] \[\qquad\qquad\qquad\qquad\left.+n(d+n-1)\hat{a}_{0}\hat{s}_{0}\hat {s}_{1}^{2}\big{]}\right)^{\frac{1}{2}}. \tag{3.30}\] The reality of (3.30) restricts the parameters to \[|\hat{s}_{1}|\geq\sqrt{-\frac{(d-1)\left((d+n)(d+n-1)\hat{a}_{0} \hat{s}_{0}+\ell^{2}R_{2}\hat{a}_{0}+\ell^{2}R_{1}\hat{s}_{0}\right)}{n(d+n-1 )\hat{a}_{0}\hat{s}_{0}}}\,, \tag{3.31a}\] \[R_{1}+\frac{\hat{a}_{0}(\ell^{2}R_{2}+(d+n)^{2}\hat{s}_{0})}{ \ell^{2}\hat{s}_{0}}<\frac{\hat{a}_{0}(d+n)}{\ell^{2}}\,. \tag{3.31b}\] According to the above expansions, we can divide the solutions of Einstein's equations into two sets of solutions: * Solutions with A-bounce: There is at least one point where either \(\hat{a}_{1}\) or \(\hat{s}_{1}\) or both are zero * Monotonic solutions: There is no point where \(\hat{a}_{1}\) or \(\hat{s}_{1}\) are zero. 
As we already mentioned, we may have solutions that one or both of the scale factors have an \(A\)-bounce. At this point, the scale factor reaches a non-zero minimum or a finite maximum. Similar bounces were found in flat RG flows in [26], in which what changed direction (bounced) was the scalar field. Scale factor bounces, or \(A\)-bounces in short, were instead found to be ubiquitous in curved RG-flows with \(AdS\) slices, [32; 10; 44]. In the subsequent sections, we shall study the properties of the solutions with A-bounces. A subset of monotonic solutions was studied in section 3.2.2. Other monotonic solutions will be studied numerically in section 5.5. #### 3.3.1 \(AdS_{d}\) bounce Consider the case when the scale factor of \(AdS_{d}\) displays a bounce at some radial position \(u=u_{0}\). We call this an _\(A_{1}\)-bounce_ (\(\dot{A}_{1}=0,\dot{A}_{2}\neq 0\)). This corresponds to consider \(\hat{a}_{1}=0\) in (3.28a), i.e. \[A_{1}(u)=\frac{1}{2}\log(\hat{a}_{0})+\hat{a}_{2}\frac{(u-u_{0}) ^{2}}{\ell^{2}}+\mathcal{O}((u-u_{0})^{3})\,, \tag{3.32a}\] \[A_{2}(u)=\frac{1}{2}\log(\hat{s}_{0})+\hat{s}_{1}\frac{(u-u_{0}) }{\ell}+\hat{s}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}((u-u_{0})^{3})\,, \tag{3.32b}\] where \(\hat{a}_{0}\) and \(\hat{s}_{0}\) are the sizes of \(AdS\) and the sphere at the bounce. Moreover, one finds \[\hat{s}_{1}=\pm\frac{\sqrt{(d+n)(d+n-1)+(\hat{r}_{1}+\hat{r}_{2}) }}{\sqrt{(n-1)n}}\,, \tag{3.33a}\] \[\hat{s}_{2}=-\frac{d^{2}n+dn^{2}+(n\hat{r}_{1}+\hat{r}_{2})}{2(n-1)n}\,,\] (3.33b) \[\hat{a}_{2}=\frac{d^{2}+dn+\hat{r}_{1}}{2d}\,, \tag{3.33c}\] and \[\hat{r}_{1}\equiv\frac{\ell^{2}R_{1}}{\hat{a}_{0}}<0\;\;\;,\;\;\;\hat{r}_{2} \equiv\frac{\ell^{2}R_{2}}{\hat{s}_{0}}>0\,. \tag{3.34}\] The expansions (3.32a) and (3.32b) show the following properties for solutions of equations of motion with an \(A_{1}\)-bounce: * Since \(R_{1}<0\), then \[\begin{cases}\hat{a}_{2}\geq 0,&\text{for}\quad\hat{a}_{0}\geq-\frac{\ell^{2}R_{ 1}}{d(d+n)}\,,\\ \hat{a}_{2}<0,&\text{for}\quad-\frac{\ell^{2}R_{1}}{d(d+n)}>\hat{a}_{0}>\hat{a }_{0}^{min}\,,\end{cases}\] (3.35) where the reality of the value of \(\hat{s}_{1}\) in (3.33a) puts a lower bound on \(\hat{a}_{0}\) for any positive value of \(\hat{s}_{0}\) \[\hat{a}_{0}^{min}=-\frac{\ell^{2}R_{1}\hat{s}_{0}}{\hat{s}_{0}(d+n)(d+n-1)+\ell^{ 2}R_{2}}\,.\] (3.36) * \(\hat{s}_{1}\in\mathbb{R}\) also indicates that at \(A_{1}\)-bounce always \(\hat{s}_{2}<0\). * At a specific value \[\hat{a}_{0}=a_{0}^{c}\equiv-\frac{\ell^{2}R_{1}}{d(d+n)}\,,\] (3.37) the bounce disappears and we can find an exact solution \[e^{2A_{1}(u)} =-\frac{\ell^{2}R_{1}}{d(d+n)}\,,\] (3.38a) \[e^{2A_{2}(u)} =\hat{s}_{0}\Big{(}\lambda_{0}\sinh\big{[}k\frac{u-u_{0}}{\ell} \big{]}+\cosh\big{[}k\frac{u-u_{0}}{\ell}\big{]}\Big{)}^{2}\,,\] \[=\hat{s}_{0}(\lambda_{0}^{2}-1)\sinh^{2}\Big{[}k\frac{u-u_{0}}{ \ell}-\frac{1}{2}\log\big{(}\frac{\lambda_{0}-1}{\lambda_{0}+1}\big{)}\Big{]}\,,\] (3.38b) where \[\lambda_{0}=\sqrt{\frac{\ell^{2}R_{2}}{(n-1)(d+n)\hat{s}_{0}}+1}\,,\qquad k= \sqrt{\frac{d}{n}+1}\,.\] (3.39) This solution will be discussed in more detail in section 4. * If we have two A-bounces, both for \(AdS_{d}\) and \(S^{n}\) at the same point \(u=u_{0}\) (equivalently, when \(\hat{s}_{1}=0\)) then \[\hat{a}_{0}=a_{0}^{b}\equiv\frac{-\ell^{2}R_{1}}{(d+n)(d+n-1)}>0\,,\] (3.40) and \[\hat{a}_{2}=-\frac{1}{2}\big{(}\frac{\ell^{2}R_{2}}{\hat{s}_{0}}+(d+n)(d+n-2 )\big{)}<0\,,\] (3.41) which implies that the \(AdS\) scale factor has a finite _maximum_ at this point. 
As we shall see, the solutions with this property have two end-points where the \(AdS\) space shrinks to zero sizes. To summarize, the space of solutions with an \(A_{1}\)-bounce is parametrized by three free parameters \(\hat{a}_{0}\) and \(\hat{s}_{0}\) and \(u_{0}\) which represent the size of \(AdS\) and sphere at the bounce as well as the position of the bounce. The analysis above shows that: 1) From equation (3.35) we deduce that the \(AdS\) scale factor has at most one A-bounce. To see this, consider two neighboring bounces that one of them is a local maximum (\(\hat{a}_{2}<0\)) and the other is a local minimum (\(\hat{a}_{2}>0\)). According to (3.35) the value of the \(AdS\) scale factor (\(\hat{a}_{0}\)) for the local minimum should be greater than its local maximum neighbor, which is impossible, therefore we can not have more than one \(A_{1}\)-bounce in a solution. 2) There is a lower bound (3.36) on \(\hat{a}_{0}\) for an \(A_{1}\)-bounce to exist. \(\hat{a}_{0}\) controls the minimum size of \(AdS\) at the \(A_{1}\)-bounce. 3) We have a special solution with a constant scale factor of \(AdS\). This is an exact solution that will be discussed later in section 4. 4) If both \(AdS\) and the sphere have A-bunces at the same point, at this point, \(AdS\) has a finite maximum size but the sphere reaches a nonzero minimum. #### 3.3.2 \(S^{n}\) bounce An alternative possibility occurs when \(S^{n}\) has a bounce (\(A_{2}\)-bounce). This is the case with \(\hat{s}_{1}=0\) in (3.28b). Then we obtain \[A_{1}(u)=\frac{1}{2}\log(\hat{a}_{0})+\hat{a}_{1}\frac{(u-u_{0}) }{\ell}+\hat{a}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,, \tag{3.42a}\] \[A_{2}(u)=\frac{1}{2}\log(\hat{s}_{0})+\hat{s}_{2}\frac{(u-u_{0} )^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,, \tag{3.42b}\] with the following values for the above coefficients \[\hat{a}_{1}=\pm\frac{\sqrt{(d+n)(d+n-1)+(\hat{r}_{1}+\hat{r}_{2} )}}{\sqrt{(d-1)d}}\,, \tag{3.43a}\] \[\hat{a}_{2}=-\frac{d^{2}n+dn^{2}+(d\hat{r}_{2}+\hat{r}_{1})}{2(d- 1)d}\,,\] (3.43b) \[\hat{s}_{2}=\frac{n^{2}+dn+\hat{r}_{2}}{2n}\,. \tag{3.43c}\] Here unlike the \(A_{1}\)-bounce case, we always have \(\hat{s}_{2}>0\), therefore, we do not expect to have a solution with two shrinking end-points for \(S^{n}\). However, there is a constraint for the reality of \(\hat{a}_{1}\) in equation (3.43a) \[\begin{cases}\hat{a}_{0}\leq a_{0}^{b}&\Rightarrow\ \ 0<\hat{s}_{0}\leq\frac{-\hat{a}_{0}\ell^{2}R_{2}}{\ell^{2}R_{1}+\hat{a}_{0}(d+ n)(d+n-1)}\,,\\ \\ \hat{a}_{0}>a_{0}^{b}&\Rightarrow\ \ 0<\hat{s}_{0}\,,\end{cases} \tag{3.44}\] where \(a_{0}^{b}\) is given in equation (3.40). Moreover, the reality of \(\hat{a}_{1}\) in (3.43a) shows that \(\hat{a}_{2}<0\) at the \(A_{2}\)-bounce. To summarize: 1) Since at an \(A_{2}\)-bounce we always have a minimum size for the sphere scale factor (\(\hat{s}_{2}>0\)) we can not have a solution with more than one \(A_{2}\)-bounce. 2) If there is an \(A_{2}\)-bounce, we do not expect to find a solution that has two end-points with a shrinking sphere. 3) According to (3.44) in the space of solutions described by the two parameters \((\hat{a}_{0},\hat{s}_{0})\), there is an upper bound on the size of the sphere at the bounce, as far as \(\hat{a}_{0}\leq a_{0}^{b}\). ## 4 Exact solutions In this section, we shall find some exact solutions for the equations of motion (3.2)-(3.4). There are several special cases in which we can solve equations of motion exactly. 
The expansions in (3.19a) and (3.19b) also can help us to find these solutions. * \(AdS_{d}\times AdS_{n+1}\) **(product space) solution** As we can observe from equation (3.19a), in the special case where \[a_{0}=a_{0}^{c}=-\frac{\ell^{2}R_{1}}{d(d+n)}\,,\] (4.1) the scale factor of \(AdS_{d}\) is fixed and is independent of the \(u\) coordinate. In this situation, the equations of motion are exactly solvable and we find for \(n>1\) \[e^{2A_{1}(u)}=-\frac{\ell^{2}R_{1}}{d(d+n)}\quad,\quad e^{2A_{2}(u)}=\left(ce^ {\sqrt{\frac{d+n}{n}}\frac{u}{\ell}}-\frac{\ell^{2}R_{2}}{4c(n-1)(d+n)}e^{- \sqrt{\frac{d+n}{n}}\frac{u}{\ell}}\right)^{2},\] (4.2) where \(c\) is the constant of integration. This solution in general has an end-point for the sphere at \[u_{0}=-\frac{1}{2}\ell\sqrt{\frac{n}{d+n}}\log\left(\frac{4c^{2}(n-1)(d+n)}{ \ell^{2}R_{2}}\right).\] (4.3) Therefore, we can rewrite the scale factors as \[e^{2A_{1}(u)}=-\frac{\ell^{2}R_{1}}{d(d+n)}\quad,\quad e^{2A_{2}(u)}=\frac{ \ell^{2}R_{2}}{(n-1)(d+n)}\sinh^{2}\left(\sqrt{\frac{d+n}{n}}\frac{u-u_{0}}{ \ell}\right),\] (4.4) which means that the metric describes a product space \(AdS_{d}\times AdS_{n+1}\). The Kretschmann scalar for this solution is a constant \[\mathcal{K}=\frac{2(d+n)^{2}(2dn+d-n-1)}{n(d-1)\ell^{4}}\,,\] (4.5) and differs from the \(AdS_{d+n+1}\) value which is \[\mathcal{K}=\frac{2(d+n)(d+n+1)}{\ell^{4}}\,.\] (4.6) We should remind the reader that we already encountered this solution when we studied the \(A_{1}\)-bounces, where at the critical value of \(a_{0}^{c}\), the \(AdS\) bounce disappeared and we found a similar exact solution in (3.38b). In appendix E we have considered the fluctuations around this solution to show that the asymptotic behavior changes completely near the boundary of this solution. In particular, the boundary of this solution (in Euclidean signature) has two components: \(AdS_{d}\times S^{n}\cup S^{d-1}\times AdS_{n+1}\). * **Global \(AdS_{d+n+1}\) solution** Another exact solution for equations of motion is the global \(AdS\) solution \[ds^{2}=du^{2}+e^{2\bar{A}_{1}}\cosh^{2}\frac{u-u_{0}}{\ell}ds^{2}_{AdS_{d}}+e^{2 \bar{A}_{2}}\sinh^{2}\frac{u-u_{0}}{\ell}d\Omega_{n}^{2}\,,\] (4.7) where equations of motion fix the coefficients to \[e^{2\bar{A}_{1}}=-\frac{\ell^{2}R_{1}}{d(d-1)}\quad,\quad e^{2\bar{A}_{2}}= \frac{\ell^{2}R_{2}}{n(n-1)}\,.\] (4.8) We obtain \[\bar{a}_{0}=-\frac{1}{d(d-1)}\,,\] (4.9) for this solution, verifying the claim below (3.23). We also obtain \[R_{1}^{UV}=4e^{-2\bar{A}_{1}}R_{1}=-\frac{4d(d-1)}{\ell^{2}}\quad,\quad R_{2} ^{UV}=4e^{-2\bar{A}_{2}}R_{2}=\frac{4n(n-1)}{\ell^{2}}\,,\] (4.10) or the ratio of dimensionless curvatures are fixed by dimensions of \(AdS_{d}\) and \(S^{n}\) spaces \[\frac{\mathcal{R}_{1}}{\mathcal{R}_{2}}=-\frac{d(d-1)}{n(n-1)}\,.\] (4.11) This can be confirmed also in the specific case of \(d=n=4\) which we have the UV expansions in (3.8a) and (3.8b). Here we realize that the global solution is equivalent to considering the vev \(C=0\) and \(\mathcal{R}_{1}=-\mathcal{R}_{2}=-48\). For a discussion on \(AdS\) space in various coordinates including the ones discussed here, see appendix B. ## 5 Numerical solutions We employ numerical techniques to uncover every potential solution to equations of motion and verify our analytical results obtained from the asymptotics. The independent equations we solve are (2.14) and (2.15) and they require three constants of integration. 
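For concreteness, a minimal sketch of such an integration is given below. It is only illustrative and is not the code used to produce the figures of this paper: it recasts the two second-order equations as a first-order system, places the initial data at an \(A_{1}\)-bounce so that the first-order (uu) constraint (3.2) is satisfied exactly (the bounce value of \(\dot{A}_{2}\), cf. (3.33a)), and uses the fiducial values \(d=n=4\), \(R_{1}=-1\), \(R_{2}=2\), \(\ell=1\) adopted for the numerical solutions in the text. The bounce sizes \(\hat{a}_{0}=0.05\) and \(\hat{s}_{0}=0.1\) are arbitrary illustrative choices, and the availability of scipy is assumed.

```python
# Minimal illustrative sketch (not the authors' code).
import numpy as np
from scipy.integrate import solve_ivp

# Fiducial parameters used for the numerical solutions in the text.
d, n, ell = 4, 4, 1.0
R1, R2 = -1.0, 2.0                       # curvatures of the AdS_d and S^n slice metrics
Lam = -(d + n) * (d + n - 1) / ell**2    # cosmological constant as parametrized in section 3

def rhs(u, y):
    """y = (A1, A2, A1', A2'); second derivatives from the ii and i!=j equations."""
    A1, A2, dA1, dA2 = y
    S = d * dA1 + n * dA2
    # d*A1'' + n*A2'' = P  (ii equation),   A1'' - A2'' = Q  (i != j equation)
    P = -(d * n * (dA1 - dA2) ** 2 + np.exp(-2 * A1) * R1 + np.exp(-2 * A2) * R2) / (d + n - 1)
    Q = (dA2 - dA1) * S + np.exp(-2 * A1) * R1 / d - np.exp(-2 * A2) * R2 / n
    return [dA1, dA2, (P + n * Q) / (d + n), (P - d * Q) / (d + n)]

def uu_constraint(y):
    """First-order (uu) equation; should remain ~0 along a solution (accuracy check)."""
    A1, A2, dA1, dA2 = y
    S = d * dA1 + n * dA2
    return S**2 - d * dA1**2 - n * dA2**2 - np.exp(-2 * A1) * R1 - np.exp(-2 * A2) * R2 + Lam

# Initial data at an A1-bounce: A1'(u0) = 0, with A2'(u0) fixed by the uu constraint.
a0_hat, s0_hat = 0.05, 0.1               # illustrative sizes of AdS_d and S^n at the bounce
dA2_0 = np.sqrt((d + n) * (d + n - 1) + ell**2 * R1 / a0_hat + ell**2 * R2 / s0_hat) \
        / (ell * np.sqrt(n * (n - 1)))
y0 = [0.5 * np.log(a0_hat), 0.5 * np.log(s0_hat), 0.0, dA2_0]

# Stop the inward integration once a scale factor has effectively collapsed.
def collapse(u, y):
    return min(y[0], y[1]) + 4.0
collapse.terminal = True

out = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-11)                    # towards the boundary
inw = solve_ivp(rhs, (0.0, -10.0), y0, rtol=1e-9, atol=1e-11, events=collapse)  # towards the interior

print("constraint drift at the last point:", uu_constraint(out.y[:, -1]))
```

Integrating towards \(u\to+\infty\) approaches the \(AdS\)-like boundary, while the inward integration terminates where one of the scale factors collapses (an end-point); monitoring the uu constraint along the flow provides a simple numerical accuracy check.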
We assume that at a generic point \(u=u_{0}\), the following expansions of the scale factors satisfy the equations of motion
\[A_{1}(u)=\frac{1}{2}\log\hat{a}_{0}+\hat{a}_{1}\frac{u-u_{0}}{\ell}+\hat{a}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,, \tag{5.1a}\]
\[A_{2}(u)=\frac{1}{2}\log\hat{s}_{0}+\hat{s}_{1}\frac{u-u_{0}}{\ell}+\hat{s}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,. \tag{5.1b}\]
Among the constant coefficients in these expansions, we select \(\hat{a}_{0},\hat{s}_{0}\) and \(\hat{s}_{1}\) as our free parameters. Using the aforementioned expansions we can read off the initial conditions required for solving the equations of motion on both sides of \(u=u_{0}\). This approach leads us to four classes of end-points for the solutions:
* **B:** An \(AdS\)-like boundary14 where both the sphere and \(AdS\) sizes diverge. The behavior of the scale factors close to this boundary is given in section 3.1. Footnote 14: In our solutions we have only one boundary, which we consider to be located at \(u=+\infty\).
* **R:** A regular end-point where the sphere shrinks to zero size and the \(AdS\) scale factor asymptotes to a constant value. Properties of this end-point are discussed in section 3.2.2.
* **A:** A singular end-point where the sphere size diverges while the \(AdS\) size vanishes, as discussed in section 3.2.1.
* **S:** Another singular end-point where the sphere size vanishes while the \(AdS\) size diverges, as discussed in section 3.2.1.
According to the above possible end-points, we find the following types of solutions. Each solution is characterized by its end-points:
* (**B, R**)-type: This is the regular class of solutions.
* (**R, A**)-type, (**S, B**)-type, (**A, B**)-type, (**A, A**)-type and (**S, A**)-type: These are all singular solutions.
In the subsequent sections, we show examples of the above solutions. When \(S^{n}\) or \(AdS_{d}\) shrinks to zero size, this signals, in Euclidean signature, an end-point of the flow. From our findings, the sphere can shrink to zero size with the solution remaining regular there, but \(AdS_{d}\) cannot shrink to zero size while the solution stays regular at that point15. Footnote 15: Unless the holographic direction is timelike.
### Solutions with one regular end-point
Among the solutions that we find, two classes of solutions have a regular end-point (**R**); the product space solution \(AdS_{d}\times AdS_{n+1}\) also has a regular end-point. As we already showed in section 3.2.2, the regular end-point at an arbitrary value \(u=u_{0}\) has the following expansions for the scale factors
\[e^{2A_{1}(u)}=a_{0}+\frac{a_{0}d(d+n)+\ell^{2}R_{1}}{d\ell^{2}(1+n)}\Big{(}(u-u_{0})^{2}-\frac{a_{0}d(d-n-4)(d+n)+(d-3)\ell^{2}R_{1}}{3a_{0}d\ell^{2}(1+n)(3+n)}(u-u_{0})^{4}+\mathcal{O}(u-u_{0})^{6}\Big{)}\,, \tag{5.2a}\]
\[e^{2A_{2}(u)}=\frac{R_{2}}{n(n-1)}(u-u_{0})^{2}+\frac{(a_{0}(d-d^{2}+n+n^{2})-\ell^{2}R_{1})R_{2}}{3a_{0}\ell^{2}n^{2}(n^{2}-1)}(u-u_{0})^{4}+\mathcal{O}(u-u_{0})^{6}\,. \tag{5.2b}\]
Here we have a free parameter \(a_{0}=e^{2A_{1}(u_{0})}\). Varying \(u_{0}\) does not give more solutions, as such a variation can be undone by a translation in \(u\).
For a point at \(u=u_{0}+\epsilon\) with \(\epsilon>0\), the initial conditions required to numerically solve the equations of motion are
\[A_{1}(\epsilon)=\frac{1}{2}\log a_{0}+\mathcal{O}(\epsilon^{2})\;\;\;,\;\;\;\dot{A}_{1}(\epsilon)=\frac{a_{0}d(d+n)+\ell^{2}R_{1}}{a_{0}d\ell^{2}(1+n)}\epsilon+\mathcal{O}(\epsilon^{3})\,, \tag{5.3a}\]
\[A_{2}(\epsilon)=\frac{1}{2}\log\big{(}\frac{R_{2}}{n(n-1)}\epsilon^{2}\big{)}+\mathcal{O}(\epsilon^{2})\;\;\;,\;\;\;\dot{A}_{2}(\epsilon)=\frac{1}{\epsilon}+\mathcal{O}(\epsilon)\,, \tag{5.3b}\]
where the higher-order terms depend on \(a_{0}\). According to the value of the only parameter \(a_{0}\) in this class, we find three different types of solutions.16 Footnote 16: In the rest of this paper, for the numerical solutions we fix \(d=n=4\), \(R_{1}=-1\), \(R_{2}=2\), and \(\ell=1\).
* **(R, B)-type:** This is a solution that starts from a regular end-point at \(u=u_{0}\) and asymptotes to an \(AdS\) boundary at \(u\to+\infty\). At the end-point, the scale factor of the sphere is zero but the \(AdS\) space has a finite size. This solution exists as long as \(a_{0}>a_{0}^{c}\), where
\[a_{0}^{c}\equiv-\frac{\ell^{2}R_{1}}{d(d+n)}\,. \tag{5.4}\]
An example of this type is sketched in figure 1. Figure 1: (R, B)–type: The scale factor of \(AdS\) (blue curve) and the scale factor of \(S\) (red curve) start at a regular end-point (dashed line). At this point, the sphere scale factor shrinks to zero size but \(AdS\) has a finite non-zero size. Both scale factors reach the \(AdS\) boundary (\(u\to+\infty\)).
* The product space solution: This is the \(AdS_{d}\times AdS_{n+1}\) solution that we discussed in section 4. While the scale factor of \(AdS_{d}\) is fixed, the scale factor of \(S^{n}\subset AdS_{n+1}\) starts from a zero value at the end-point and reaches the UV boundary at \(u\rightarrow+\infty\). This is a single solution, corresponding to choosing \(a_{0}=a_{0}^{c}\) from (5.4). Figure 2 shows this solution. Figure 2: \(AdS_{d}\times AdS_{n+1}\) solution.
* **(R, A)-type:** If we choose the value of \(a_{0}\) such that, according to (3.25), \(\ddot{A}_{1}(u_{0})<0\), or equivalently if \(a_{0}<a_{0}^{c}\), then although we start from a regular end-point at \(u=u_{0}\), the \(AdS\) scale factor decreases until it reaches zero at a finite \(u>u_{0}\). At this point, we can check that the scale factors behave as in (3.10a) and (3.10b), with the \(\lambda_{1}\) and \(\lambda_{2}\) coefficients given in (3.16), so we have a singular end-point here. An example of this singular solution is given in figure 3. Figure 3: (R, A)–type: A singular solution that starts at a regular end-point (left dashed line) and reaches a singular end-point (right dashed line).
At an arbitrary regular end-point \(u_{0}\), as we decrease the scale factor of \(AdS_{d}\), i.e. \(a_{0}=e^{2A_{1}(u_{0})}\), we observe the transition between the above solutions. This is sketched in figure 4a. Figure 4b shows how the Kretschmann scalar changes for the three different types of solutions; for the (R, A)-type it diverges at the singular end-point.
### Solutions with A-bounces
If we assume that there is either an \(A_{1}\)-bounce or an \(A_{2}\)-bounce at an arbitrary point \(u=u_{0}\), we find three different types of singular solutions. We observe that the existence of an A-bounce is always accompanied by one or two singular end-points.
If we consider that at \(u=u_{0}\) there is an \(A_{1}\)-bounce then according to expansions of (3.32a) and (3.32b) the initial conditions to solve the equations of motion are \[A_{1}(u_{0})=\frac{1}{2}\log(\hat{a}_{0})\ \ \,\ \ \dot{A}_{1}(u_{0})=0\ \ \,\ \ \ A_{2}(u_{0})=\frac{1}{2}\log(\hat{s}_{0})\,, \tag{5.5a}\] \[\dot{A}_{2}(u_{0})=\pm\frac{\sqrt{(d+n)(d+n-1)+(\frac{\ell^{2}R_ {1}}{\hat{a}_{0}}+\frac{\ell^{2}R_{2}}{\hat{s}_{0}})}}{\ell\sqrt{(n-1)n}}\,. \tag{5.5b}\] Figure 4: (a): Transition between solutions as we change the initial value of the \(AdS_{d}\) scalar factor, \(a_{0}\). The solid curves show an example of (R, B)–type. By decreasing \(a_{0}\) and at a specific point \(a_{0}=a_{0}^{c}\) we have the \(AdS_{d}\times AdS_{n+1}\) solution (dot-dashed curves). Below that point, all solutions are the (R, A)–type. (b): The Kretschmann scalar \(\mathcal{K}\) vs \(u\). For all solutions in figure (a) we have sketched the Kretschmann scalar. Here we find two types of solutions. A solution with just one \(A_{1}\)-bounce and another solution with one \(A_{1}\)-bounce and one \(A_{2}\)-bounce: **(S, B)-type:** Figure 5 shows a solution with an \(A_{1}\)-bounce. On the left-hand side of the bounce, the solution has a singular end-point i.e. the sphere shrinks but the scale factor of \(AdS\) diverges. On the right-hand side, there is an \(AdS\) boundary at \(u\to+\infty\). This solution corresponds to the plus sign in (5.5b). With the minus sign, we find the mirror image where the \(AdS\) boundary is at \(u\to-\infty\). **(A, A)-type:** There is another type of solution with one \(A_{1}\)-bounces and one \(A_{2}\)-bounce. Here the bounces are not necessarily at the same point in the \(u\) coordinate. Figure 6 shows a solution of this type. On both sides of these bounces the scale factor of the sphere is diverging but for \(AdS\) space it shrinks to zero and so on both sides, we have singular end-points. We can consider that at \(u=u_{0}\) there is an \(A_{2}\)-bounce. Looking at the expansions (3.42a) and (3.42b) we can read the initial conditions required to solve the equations of motion \[A_{1}(u_{0})=\frac{1}{2}\log(\hat{a}_{0})\ \ \,\ \ \ A_{2}(u_{0})=\frac{1}{2}\log( \hat{s}_{0})\ \ \,\ \ \dot{A}_{2}(u_{0})=0\,, \tag{5.6a}\] \[\dot{A}_{1}(u_{0})=\pm\frac{\sqrt{(d+n)(d+n-1)+(\frac{\ell^{2}R_{ 1}}{\hat{a}_{0}}+\frac{\ell^{2}R_{2}}{\hat{s}_{0}})}}{\ell\sqrt{(d-1)d}}\,. \tag{5.6b}\] Once again we may have solutions with just one \(A_{2}\)-bounce or solutions with one \(A_{2}\)-bounce and one \(A_{1}\)-bounce which we already showed in figure 6. **(A, B)-type:** Figure 7 shows an example of solutions with just one \(A_{2}\)-bounce. On the left-hand side of the sphere bounce, the solution has a singular end-point Figure 5: (S, B)–type: Left to the \(A_{1}\)-bounce there is a singular IR end-point. Both scale factors reach the UV boundary at \(u\to+\infty\). where the scale factor of the sphere diverges but the \(AdS\) scale shrinks to zero. On the right-hand side, there is an \(AdS\) boundary as \(u\rightarrow+\infty\). This solution corresponds to the plus sign in (5.6b) and its mirror image is given by the minus sign. The behavior of the sphere scale factor that we observe in (A, B)-type and (A, A)-type is consistent with what we already found in section 3.3.2 where we show that at the \(A_{2}\)-bounce always \(\ddot{A}_{2}>0\) because \(\hat{s}_{2}>0\) in (3.43c). Figure 6: (A, A)–type: An example of solutions with two singular end-points. Both \(AdS\) and the sphere has a bounce. 
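As a cross-check on these initial data, the specific values quoted in (5.5b) and (5.6b) follow directly from the first-order uu equation (3.2). At an \(A_{1}\)-bounce, setting \(\dot{A}_{1}(u_{0})=0\) reduces it to
\[n(n-1)\,\dot{A}_{2}^{2}(u_{0})=e^{-2A_{1}}R_{1}+e^{-2A_{2}}R_{2}-\Lambda=\frac{1}{\ell^{2}}\Big[(d+n)(d+n-1)+\frac{\ell^{2}R_{1}}{\hat{a}_{0}}+\frac{\ell^{2}R_{2}}{\hat{s}_{0}}\Big]\,,\]
which is (5.5b); at an \(A_{2}\)-bounce, where instead \(\dot{A}_{2}(u_{0})=0\), the left-hand side becomes \(d(d-1)\,\dot{A}_{1}^{2}(u_{0})\) and one recovers (5.6b).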
Figure 7: (A, B)–type: To the left of the \(A_{2}\)-bounce there is a singular end-point where the \(AdS\) scale factor is zero but the sphere scale factor diverges. Both scale factors reach the \(AdS\) boundary at \(u\rightarrow+\infty\).
### \(A_{1}\)-bounce space of solutions
To see how the different solutions with an \(A_{1}\)-bounce change under the variation of the initial values at the bounce, we can draw the space of these solutions. The initial conditions for solutions with an \(A_{1}\)-bounce in (5.5a) and (5.5b) depend on two free parameters \(\hat{a}_{0}\) and \(\hat{s}_{0}\). These two parameters describe the coordinates of the space of solutions with an \(A_{1}\)-bounce, see figure 8. This space has the following properties:
1. For values \(\hat{a}_{0}>a_{0}^{c}\), where \(a_{0}^{c}\) is defined in (3.37), and for all values of \(\hat{s}_{0}>0\), only solutions of the (S, B)-type can exist.
2. On the blue dashed line at \(\hat{a}_{0}=a_{0}^{c}\) in figure 8, we have the product space solution.
3. To the left of the blue dashed line and to the right of the gray region, only solutions of (A, A)-type can exist.
4. The boundary of the gray region is given by equation (3.36). Inside the gray region, there is no real solution of the equations of motion.
5. The red dashed line at \(\hat{a}_{0}=a_{0}^{b}\), given in equation (3.40), shows the solutions that have two bounces, for \(AdS_{d}\) and \(S^{n}\), at the same point \(u=u_{0}\).
Figure 8: The \(A_{1}\)-bounce space of solutions: The (S, B)–type solutions live at \(\hat{a}_{0}>a_{0}^{c}\), on the right-hand side of the blue dashed line. The (A, A)–type solutions are bounded on the right by \(\hat{a}_{0}=a_{0}^{c}\) and on the left by the gray region. There is no solution in the gray region. Exactly on the blue dashed line we have the product space solution. On the red dashed line, \(AdS_{d}\) and \(S^{n}\) have a bounce at the same point; here again, the solutions are of (A, A)–type.
Consider an arbitrary point \(u=u_{0}\) where the \(A_{1}\)-bounce happens, and change the value of \(\hat{a}_{0}\) while the value of \(\hat{s}_{0}\) is kept fixed (i.e. move horizontally in figure 8). Figure 9a shows the transition between solutions as we change the parameter \(\hat{a}_{0}\) of the \(A_{1}\)-bounce. Figure 9b shows how the Kretschmann scalar diverges at the singular end-points of the solutions in figure 9a. Figure 9: (a): At a fixed value of \(\hat{s}_{0}=e^{2A_{2}(u_{0})}\) (\(u_{0}\) is the location of the green vertical line), by decreasing the initial value of \(\hat{a}_{0}\) (moving down on the green line) we first observe the solid curves, which show an (S, B)–type solution. At the specific value \(\hat{a}_{0}=a_{0}^{c}\) a new solution (dot-dashed curves) appears: this is the product space solution. Below \(a_{0}^{c}\), the solutions (dashed curves) are the (A, A)–type ones. There is a lower bound for \(\hat{a}_{0}\), given by (3.36). (b): The Kretschmann scalar for the solutions in figure (a).
We can also move vertically in the space of solutions of figure 8, i.e. keep \(\hat{a}_{0}\) fixed and change \(\hat{s}_{0}\). Figure 10: Consider solutions with an \(A_{1}\)-bounce at a fixed \(u=u_{0}\) (the common point through which all the blue curves pass). (a) Shows the transformation of the solutions as we change the value of \(\hat{s}_{0}\) for a fixed \(\hat{a}_{0}<a_{0}^{c}\). (b) Shows this transformation for a fixed \(\hat{a}_{0}>a_{0}^{c}\).
### \(A_{2}\)-bounce space of solutions The initial conditions for solutions with an \(A_{2}\)-bounce in (5.6a) and (5.6b) depend on two free parameters \(\hat{a}_{0}\) and \(\hat{s}_{0}\). These two parameters are the coordinates of the space of solutions with an \(A_{2}\)-bounce, see figure 11. This space has the following properties: 1. For every value of \(\hat{a}_{0}>a_{0}^{b}\) (to the right of the red dashed line in figure 11) and for all values of \(\hat{s}_{0}>0\), only solutions of the (A, B)-type can exist. 2. Exactly on the dashed line where \(\hat{a}_{0}=a_{0}^{b}\) (defined in (3.40)) both the \(AdS\) and the sphere have a bounce at the same point \(u=u_{0}\). At this point, as we already discussed, we have (A, A)-type solutions. 3. To the left of the dashed line and to the right of the gray region, only the (A, A)-type solutions can exist. Figure 11: \(A_{2}\)-bounce space of solutions: (A, B)–type solutions live on the right-hand side of the red dashed line, which is drawn at \(\hat{a}_{0}=a_{0}^{b}\), see equation (3.40). (A, A)–type solutions are limited from the right by the dashed line and are bounded from the left by the gray region. In the gray region, we do not have any solution. Exactly on the dashed line, both the \(AdS_{d}\) and \(S^{n}\) scale factors bounce at the same point \(u=u_{0}\). Here we have the (A, A)–type solutions. 4. The reality of the solutions forbids the parameters inside the gray region, see equation (3.43a). Figures 12a, 12b and 13 show the transformations and transitions between solutions with an \(A_{2}\)-bounce as we move inside the space of solutions in figure 11. Figure 12: (a): For certain values of \(\hat{a}_{0}<a_{0}^{b}\) (for example at the lower horizontal dashed line, which is the intersection of all the blue curves), the parameter \(\hat{s}_{0}\) (the location of the \(A_{2}\)-bounces) is bounded by equation (3.44) (the upper dashed horizontal line). All the solutions in this region are of the (A, A)–type. (b): For a fixed \(\hat{a}_{0}>a_{0}^{b}\) the value of \(\hat{s}_{0}\) is unbounded. In this case, all the solutions are of the (A, B)–type. Figure 13: For a fixed value of \(\hat{s}_{0}\), by decreasing the value of \(\hat{a}_{0}\) (moving horizontally to the left in figure 11) we see a transition from the (A, B)–type to the (A, A)–type solutions. ### Monotonic solutions As we already discussed in section 3.3, we may have solutions that do not have any A-bounce. These are solutions with monotonic scale factors. There are two types of solutions with this monotonic behavior: * Solutions with one regular end-point, which we already found in section 5.1. The other end-point of these solutions was either at the UV boundary or was a singular end-point. * **(S, A)-type:** These are solutions with two singular end-points. If we read the initial conditions for a point at \(u=u_{0}\) from the upper signs in (3.29a)-(3.29c), we find a solution for which, at the left end-point \(u_{L}<u_{0}\), the \(AdS_{d}\) scale factor diverges while \(S^{n}\) shrinks. At the right end-point \(u_{R}>u_{0}\), however, the \(AdS_{d}\) shrinks and \(S^{n}\) diverges, see figure 14a. There is a mirror image of this solution, the (A, S)-type, in which at the left end-point the \(AdS_{d}\) shrinks but \(S^{n}\) diverges and at the right end-point the \(AdS_{d}\) diverges and \(S^{n}\) shrinks, see figure 14b.
This solution is obtained by choosing the lower signs in (3.29a)-(3.29c) and keeping the values of \(\hat{a}_{0}\) and \(\hat{s}_{0}\) fixed but \(\hat{s}_{1}\to-\hat{s}_{1}\). ## 6 The space of all solutions We can observe various types of transitions between different solutions if we carefully describe the space of solutions. To do this, we choose a generic point \(u_{0}\) with the following scale factor expansions \[A_{1}(u) =\frac{1}{2}\log\hat{a}_{0}+\hat{a}_{1}\frac{u-u_{0}}{\ell}+\hat{a} _{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,, \tag{113a}\] \[A_{2}(u) =\frac{1}{2}\log\hat{s}_{0}+\hat{s}_{1}\frac{u-u_{0}}{\ell}+\hat{ s}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,. \tag{113b}\] Assuming these expansions satisfy the equations of motion, we have three free parameters, here for example we select \((\hat{a}_{0},\hat{s}_{0},\hat{s}_{1})\). These parameters construct the three dimensional space of solutions with \((\hat{a}_{0},\hat{s}_{0},\hat{s}_{1})\) coordinates. Figure 15 shows this space for some fixed slices of \(\hat{s}_{0}\). According to figure 15, we observe the following properties: * As mentioned, in some regions of this space we have no solution. For example in the slice with \(\hat{a}_{0}=0.01\) there is a void space inside the blue region. This is coming from the reality of \(\chi\) in equation (10) which implies that both conditions (10a) and (10b) should be satisfied. * For fixed values of \(\hat{a}_{0}\) which \(\hat{a}_{0}<a_{0}^{c}=\frac{1}{32}\approx 0.031\), by increasing \(\hat{s}_{1}\) we observe the transition: **(A, B) \(\rightarrow\) (A, A) \(\rightarrow\) (R, A) \(\rightarrow\) (S, A)**. We should emphasize that the **(R, A)**-type solutions are at the boundary of the blue and green regions in figure 15. * At the critical value of \(\hat{a}_{0}=a_{0}^{c}=\frac{1}{32}\), equation (10), we should have the product space solution. Fixing \(\hat{a}_{0}\) to this value gives a relation between \(\hat{s}_{1}\) and \(\hat{s}_{0}\) which draws the black curved in figure 15. This curve is the place where the blue region **((A, A)**-type) and the yellow region (**(S, B)**-type) are terminated. We saw the same behavior in figure 8 when we studied the \(A_{1}\)-bounce space of solutions. Moreover, this curve is the boundary between the green (**(S, A)**-type) and red (**(A, B)**-type) regions. * For fixed values of \(\hat{a}_{0}\) which \(\hat{a}_{0}>a_{0}^{c}\), by increasing \(\hat{s}_{1}\) we observe the transition: **(A, B) \(\rightarrow\) (R, B) \(\rightarrow\) (S, B) \(\rightarrow\) (S, A)**. * There is an orange surface in figure 15 which belongs to the regular solutions, the **(R, B)**-type. This surface is the boundary between the yellow, **(S, B)**-type, and the red region **(A, B)**-type. It terminates at the black curve, the product space solution. The above analysis is performed when we read the initial conditions from (111a)-(111c) by choosing the upper signs. We can start with the lower signs. The results are similar but we shall find the mirror solutions, i.e. at fixed \((\hat{a}_{0},\hat{s}_{0})\) we should send \(\hat{s}_{1}\rightarrow-\hat{s}_{1}\). ## 7 The boundary CFT data As we observed so far, there are regular and singular solutions that reach the \(AdS\) boundary. In this section, we are returning to the holographic correspondence. 
For this, we need solutions that are everywhere regular, and therefore the only class to consider is **(R, B)**. \begin{table} \begin{tabular}{|c|c|c|} \hline Color & Solution & Figure \\ \hline Yellow & **(S, B)**–type & 5 \\ \hline Red & **(A, B)**–type & 7 \\ \hline Blue & **(A, A)**–type & 6 \\ \hline Green & **(S, A)**–type & 14a \\ \hline Black & \(AdS_{d}\times AdS_{n+1}\) & 2 \\ \hline Orange & **(R, B)**–type & 1 \\ \hline Magenta & **(R, A)**–type & 3 \\ \hline \end{tabular} \end{table} Table 1: Different solutions in figure 15 and their related figures. Figure 15: The space of solutions. To see all possible transitions between different solutions we have sketched five slices of this space. The void space inside the blue region on the slice \(\hat{a}_{0}=0.01\) is a forbidden region where solutions are not real. Here we have fixed \(u_{0}=3\). The orange surface belongs to the regular solutions, the (R, B)–type, which terminates at the black curve (product space solution). This surface is the boundary between the yellow and red regions. See table 1 for the links to the related solutions. For the regular solutions, we shall compute the near-boundary data that, according to the holographic dictionary, corresponds to data in the dual CFT. For example, we shall compute the dimensionless curvatures of the \(AdS_{d}\) and \(S^{n}\) spaces at the UV boundary where the CFT lives. We are also interested in finding the parameter \(C\), which is proportional to the vev of the stress-energy tensor of the boundary CFT. ### Boundary data of (R, B)-type There are two free parameters for regular IR end-points, the end-point location \(u_{0}\), and \[T^{IR}_{AdS}\equiv R_{1}e^{-2A_{1}(u_{0})}=\frac{R_{1}}{a_{0}}\,, \tag{108}\] where \(a_{0}\) is the parameter that appeared in the expansions in equations (19a) and (19b). On the other hand, we have three free parameters on the UV boundary: \(R^{UV}_{AdS}=R^{UV}_{1},R^{UV}_{S}=R^{UV}_{2}\) (101), and \(C\), which represents the vev of the stress-energy tensor of the boundary QFT, see appendix D and equations (19a) and (19b). According to the asymptotic expansions of the scale factors, i.e. equations (8a) and (8b), under a shift \(u\to u+u_{\infty}\) near the boundary we obtain \[R^{UV}_{AdS,S}\sim e^{-\frac{2u_{\infty}}{\ell}}\quad,\quad C\sim e^{-\frac{8u_{\infty}}{\ell}}\,, \tag{109}\] so the following dimensionless ratios are independent of \(u_{\infty}\) \[\frac{R^{UV}_{AdS}}{R^{UV}_{S}}\quad,\quad\frac{C}{(R^{UV}_{S})^{4}\ell^{8}}\,. \tag{110}\] We now determine numerically the behavior of these UV parameters in terms of \(T^{IR}_{AdS}\) on the IR side. In the following figures we have fixed \[R_{1}=-1\quad,\quad R_{2}=2\quad,\quad\ell=1\quad,\quad d=n=4\,. \tag{111}\] With the above choices, the critical value of \(a_{0}\) is \(a_{0}^{c}=\frac{1}{32}\). We observe the following behaviors for the physical curvatures \(R^{UV}_{AdS}\), \(R^{UV}_{S}\) and \(C\) as functions of the IR parameter \(T^{IR}_{AdS}\): * As \(a_{0}\to\infty\) or \(T^{IR}_{AdS}\to 0\), the UV curvature \(R^{UV}_{AdS}\to 0\) but \(R^{UV}_{S}\) has a finite value. At this point, \(C\) also has a positive finite value. * As long as \(R^{UV}_{S}>|R^{UV}_{AdS}|\) we have \(C>0\) and vice versa, and at the point where \(R^{UV}_{S}=|R^{UV}_{AdS}|\) the value of \(C\) vanishes and we have the global solution (111). * At the lowest value for regular solutions, i.e. at \(a_{0}=a_{0}^{c}\) given in (106), we have the product space solution (111).
As \(a_{0}\) tends to this point \(R^{UV}_{AdS}\to-\infty\) and \(R^{UV}_{S}\to 0\) and \(C\to-\infty\). * Below \(a_{0}^{c}\) we find solutions that have a singular end-point and do not reach the UV boundary at \(u\to+\infty\). The dimensionless ratios of the UV parameters i.e. \(R_{AdS}^{UV}/R_{S}^{UV}\) and \(C/(\ell^{2}R_{S}^{UV})^{4}\) in terms of \(\ell^{2}T_{AdS}^{IR}\) have been shown in figure 16. ## 8 The on-shell action and the free energy In this section, we find the on-shell action and free energy for regular solutions of the theory. We begin again with the following action \[S=M_{P}^{d+n-1}\int dud^{d+n}x\sqrt{|g|}\Big{(}R^{(g)}-\frac{1}{2}\partial_{a} \varphi\partial^{a}\varphi-V(\varphi)\Big{)}+S_{GHY}\,. \tag{8.1}\] In this action we have \[R^{(g)}=\frac{1}{2}(\partial\varphi)^{2}-\frac{1+n+d}{1-n-d}V(\varphi)\;\;\;,\;\;\;\partial_{a}\varphi\partial^{a}\varphi=\dot{\varphi}^{2}\;\;\;,\;\;\; \sqrt{|g|}=e^{dA_{1}+nA_{2}}\sqrt{|\zeta^{1}||\zeta^{2}|}\,. \tag{8.2}\] Figure 16: The logarithm of the dimensionless ratios of the UV parameters vs. \(\ell^{2}T_{AdS}^{IR}\). The black dashed line on the left corresponds to the lower bound \(a_{0}=a_{0}^{c}\) of regular IR end-point solutions where \(|\ell^{2}R_{AdS}^{UV}|\to+\infty\) and \(\ell^{2}R_{S}^{UV}\to 0\) (product space solution). The green dashed line shows the location where \(|R_{AdS}^{UV}|=R_{S}^{UV}\) or \(C=0\). This point corresponds to the global solution. Substituting into (8.1) we obtain the on-shell action 17 Footnote 17: Here we consider solutions in which the boundary (UV) is at \(u=+\infty\) while the \(S^{n}\)-shrinking end-point (IR) is at \(u=u_{0}\) in (8.5). \[S_{on-shell}=\frac{2M_{P}^{d+n-1}}{d+n-1}V_{S^{n}}V_{AdS_{d}}\int_{u_{0}}^{+ \infty}du\,e^{dA_{1}+nA_{2}}V(\varphi)+S_{GHY}\,, \tag{8.3}\] where \(V_{S^{n}}\) and \(V_{AdS_{d}}\) are the volume of the the sphere and \(AdS\) space respectively. However, we can write the potential in terms of the scale factors from the equation of motion (2.14) as follows \[V(\varphi) =\frac{(d+n-1)e^{-2(A_{1}+A_{2})}}{d+n}\Big{(}R_{1}e^{2A_{2}}+R_{2 }e^{2A_{1}}\] \[-e^{2(A_{1}+A_{2})}\big{(}d\ddot{A}_{1}+n\ddot{A}_{2}+(d\dot{A}_{ 1}+n\dot{A}_{2})^{2}\big{)}\Big{)}\,. \tag{8.4}\] Therefore, the on-shell action can be written in terms of the scale factors and their derivatives \[S_{on-shell} =\frac{2M_{P}^{d+n-1}}{d+n}V_{S^{n}}V_{AdS_{d}}\Big{(}\int_{u_{0} }^{+\infty}du\,e^{dA_{1}+nA_{2}}\left(R_{1}e^{-2A_{1}}+R_{2}e^{-2A_{2}}\right)\] \[-\Big{[}e^{dA_{1}+nA_{2}}\big{(}d\dot{A}_{1}+n\dot{A}_{2}\big{)} \Big{]}_{u_{0}}^{+\infty}\Big{)}+S_{GHY}\,. \tag{8.5}\] The Gibbons Hawking York (GHY) term at the boundary \(u=+\infty\), is given as \[S_{GHY}=-2M_{P}^{d+n-1}\Big{[}\int d^{d+n}x\sqrt{|\gamma|}K\Big{]}^{u=+\infty}\,, \tag{8.6}\] where \(\gamma_{ij}\) is the induced metric on the \(AdS_{d}\times S^{n}\) slices and the extrinsic curvature is \(K_{ij}=-\frac{1}{2}\partial_{u}\gamma_{ij}\). Therefore we find \[K=-d\dot{A}_{1}-n\dot{A}_{2}\quad,\quad\sqrt{|\gamma|}=e^{dA_{1}+nA_{2}}\sqrt {|\zeta^{1}||\zeta^{2}|}\,. \tag{8.7}\] This gives \[S_{GHY}=2M_{P}^{d+n-1}V_{S^{n}}V_{AdS_{d}}\Big{[}e^{dA_{1}+nA_{2}}(d\dot{A}_{1 }+n\dot{A}_{2})\Big{]}^{u=+\infty}\,. \tag{8.8}\] Moreover, the contribution of the last term in (8.5) from the \(u_{0}\) endpoint vanishes. This can be seen by calculating the derivative of \(e^{2A_{1}(u)}\) and \(e^{2A_{2}(u)}\) in (3.19a) and (3.19b) with respect to the \(u\) coordinate. 
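To make the last statement explicit (this is our spelled-out version of the argument, assuming the stated form of the regular end-point expansions): (3.19a) gives \(e^{2A_{1}}=a_{0}+\mathcal{O}\big{(}(u-u_{0})^{2}\big{)}\) while (3.19b) gives \(e^{2A_{2}}\propto(u-u_{0})^{2}\big{(}1+\mathcal{O}((u-u_{0})^{2})\big{)}\), so that \[\dot{A}_{1}=\mathcal{O}(u-u_{0})\quad,\quad\dot{A}_{2}=\frac{1}{u-u_{0}}+\mathcal{O}(u-u_{0})\,,\] and therefore \[e^{dA_{1}+nA_{2}}\big{(}d\dot{A}_{1}+n\dot{A}_{2}\big{)}\sim a_{0}^{\frac{d}{2}}\,(u-u_{0})^{n}\Big{(}\frac{n}{u-u_{0}}+\mathcal{O}(u-u_{0})\Big{)}=\mathcal{O}\big{(}(u-u_{0})^{n-1}\big{)}\,,\] which indeed vanishes as \(u\to u_{0}\) for \(n>1\).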
Using the previous observation and substituting (8.8) in equation (8.5) we obtain \[S_{on-shell} =\frac{2M_{P}^{d+n-1}}{d+n}V_{S^{n}}V_{AdS_{d}}\Big{(}\int_{u_{0} }^{+\infty}du\,e^{dA_{1}+nA_{2}}\left(R_{1}e^{-2A_{1}}+R_{2}e^{-2A_{2}}\right)\] \[+(d+n-1)\Big{[}e^{dA_{1}+nA_{2}}\big{(}d\dot{A}_{1}+n\dot{A}_{2} \big{)}\Big{]}^{u=+\infty}\Big{)}\,. \tag{8.9}\] We introduce two potentials \(U_{1}(u)\) and \(U_{2}(u)\) which satisfy the following differential equations \[\big{(}(n-2)\dot{A}_{2}+d\dot{A}_{1}\big{)}U_{1}+\dot{U}_{1} =-1\,, \tag{111a}\] \[\big{(}(d-2)\dot{A}_{1}+n\dot{A}_{2}\big{)}U_{2}+\dot{U}_{2} =-1\,. \tag{111b}\] Then we can use these potentials to write the free energy (\(\mathcal{F}=-S_{on-shell}\)) as \[\mathcal{F} =-\frac{2M_{P}^{d+n-1}}{d+n}V_{S^{n}}V_{AdS_{d}}\Big{(}-e^{dA_{1} +nA_{2}}\big{(}\frac{U_{2}R_{1}}{e^{2A_{1}}}+\frac{U_{1}R_{2}}{e^{2A_{2}}} \big{)}\Big{|}_{u_{0}}^{+\infty}\] \[+(d+n-1)e^{dA_{1}+nA_{2}}\big{(}d\dot{A}_{1}+n\dot{A}_{2}\big{)} \Big{|}^{u=+\infty}\Big{)}\,. \tag{112}\] The volume of the \(n\)-sphere in slices is finite and is given in terms of its curvature by \[V_{S^{n}}\equiv\frac{V_{S}}{R_{2}^{\frac{n}{2}}}=\frac{2\pi^{\frac{n+1}{2}}}{ \Gamma(\frac{n+1}{2})}\Big{[}\frac{(n(n-1))}{R_{2}}\Big{]}^{\frac{n}{2}}\,. \tag{113}\] The volume of \(AdS_{d}\) space, on the other hand, is infinite and we should regularize it. Starting from the Poincare coordinates with length scale \(\hat{\ell}\) \[ds_{AdS_{d}}^{2}=\frac{\hat{\ell}^{2}}{z^{2}}\big{(}dz^{2}+dx_{i}dx^{i}\big{)}\,, \tag{114}\] the volume can be regularized as \[V_{AdS_{d}}=\int_{0}^{L}d^{d-1}x\int_{\hat{\epsilon}}^{\infty}dz\frac{\hat{ \ell}^{d}}{z^{d}}=\frac{\hat{\ell}^{d}}{d-1}\frac{L^{d-1}}{\hat{\epsilon}^{d- 1}}\,. \tag{115}\] Using the value of \(AdS_{d}\) curvature \(R_{1}=-\frac{d(d-1)}{\hat{\ell}^{2}}\) we can rewrite the volume as \[V_{AdS_{d}}\equiv\frac{V_{A}}{|R_{1}|^{\frac{d}{2}}}=\frac{1}{d-1}\big{(} \frac{L}{\hat{\epsilon}}\big{)}^{d-1}\Big{[}\frac{d(d-1)}{|R_{1}|}\Big{]}^{ \frac{d}{2}}\,. \tag{116}\] By the above values for the volumes, we can write the free energy as \[\mathcal{F} =\frac{2M_{P}^{d+n-1}}{d+n}\frac{V_{S}V_{A}}{|R_{1}|^{\frac{d}{2 }}R_{2}^{\frac{n}{2}}}\Big{(}e^{dA_{1}+nA_{2}}\big{(}\frac{U_{2}R_{1}}{e^{2A_{ 1}}}+\frac{U_{1}R_{2}}{e^{2A_{2}}}\big{)}\Big{|}_{u_{0}}^{+\infty}\] \[-(d+n-1)e^{dA_{1}+nA_{2}}\big{(}d\dot{A}_{1}+n\dot{A}_{2}\big{)} \Big{|}^{u=+\infty}\Big{)}\,. \tag{117}\] To find the free energy, we need to compute \(U_{1}\) and \(U_{2}\) from (111a) and (111b) at the \(AdS\) boundary and the IR end-point \(u=u_{0}\). For \(d=n=4\), we already computed the scale factors near the \(AdS\) boundary in equations (100a) and (100b). Moreover, the expansion of these functions near the regular IR end-point is given in (3.19a) and (3.19b). 
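Before turning to these expansions, we note that the two linear equations (8.10a)-(8.10b) are also easy to integrate numerically on top of a background obtained with a solver like the one sketched in section 5. The snippet below is our illustration only: the background object `bg` and its interface are hypothetical, and the starting values implement the regular IR behaviour \(\mathfrak{b}_{1}=\mathfrak{b}_{2}=0\) read off from the expansions (8.18a)-(8.18b) below for \(d=n=4\).

```python
# Illustrative sketch (ours): integrate the linear equations (8.10a)-(8.10b) for
# U1 and U2 on top of a numerically known (R, B)-type background.  The object
# `bg` and its interface are hypothetical: we assume bg.sol(u) returns
# [A1, A2, A1', A2'] (e.g. a dense solve_ivp solution started at the regular
# S^n-shrinking end-point u0).  The starting values implement the regular IR
# behaviour with b1 = b2 = 0, cf. (8.18a)-(8.18b) for d = n = 4.
from scipy.integrate import solve_ivp

d, n = 4, 4

def dU(u, U, bg):
    A1, A2, dA1, dA2 = bg.sol(u)
    U1, U2 = U
    dU1 = -1.0 - ((n - 2)*dA2 + d*dA1)*U1     # eq. (8.10a)
    dU2 = -1.0 - ((d - 2)*dA1 + n*dA2)*U2     # eq. (8.10b)
    return [dU1, dU2]

def potentials(bg, u0, u_uv, eps=1e-4):
    """U1, U2 from slightly above u0 (where U1 ~ -(u-u0)/3, U2 ~ -(u-u0)/5) to the UV cut-off."""
    U0 = [-eps/3.0, -eps/5.0]
    return solve_ivp(dU, (u0 + eps, u_uv), U0, args=(bg,),
                     dense_output=True, rtol=1e-10, atol=1e-12)

# The values of U1, U2 at the cut-off then enter the boundary terms of the
# free energy (8.16).
```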
Therefore, we find the following expansions near the \(AdS\) boundary for \(U_{1}\) and \(U_{2}\) as \(u\to+\infty\) \[U_{1} =-\frac{\ell}{6}+\frac{\ell(8\mathcal{R}_{1}+\mathcal{R}_{2})}{2016 }e^{-\frac{2u}{\ell}}+\frac{\ell(11\mathcal{R}_{1}^{2}-76\mathcal{R}_{1} \mathcal{R}_{2}+11\mathcal{R}_{2}^{2})}{338688}e^{-\frac{4u}{\ell}}+\mathcal{B }_{1}e^{-\frac{6u}{\ell}}\] \[+\frac{-26\mathcal{R}_{1}^{3}+216\mathcal{R}_{1}^{2}\mathcal{R}_{ 2}-78\mathcal{R}_{1}\mathcal{R}_{2}^{2}+23\mathcal{R}_{2}^{3}}{9483264}ue^{- \frac{6u}{\ell}}+\cdots\,, \tag{8.17a}\] \[U_{2} =-\frac{\ell}{6}+\frac{\ell(8\mathcal{R}_{2}+\mathcal{R}_{1})}{201 6}e^{-\frac{2u}{\ell}}+\frac{\ell(11\mathcal{R}_{1}^{2}-76\mathcal{R}_{1} \mathcal{R}_{2}+11\mathcal{R}_{2}^{2})}{338688}e^{-\frac{4u}{\ell}}+\mathcal{ B}_{2}e^{-\frac{6u}{\ell}}\] \[+\frac{-26\mathcal{R}_{2}^{3}+216\mathcal{R}_{2}^{2}\mathcal{R}_ {1}-78\mathcal{R}_{2}\mathcal{R}_{1}^{2}+23\mathcal{R}_{1}^{3}}{9483264}ue^{- \frac{6u}{\ell}}+\cdots\,, \tag{8.17b}\] and at the end-point, we obtain (\(u\to u_{0}^{+}\)) \[U_{1} =-\frac{12a_{0}\ell^{2}\mathfrak{b}_{1}}{40a_{0}+\ell^{2}R_{1}} \frac{1}{(u-u_{0})^{2}}+\mathfrak{b}_{1}-\frac{1}{3}(u-u_{0})\] \[-\frac{\mathfrak{b}_{1}\left(20768a_{0}^{2}+1168a_{0}\ell^{2}R_{1} +17\ell^{4}R_{1}^{2}\right)}{250a_{0}\ell^{2}\left(40a_{0}+\ell^{2}R_{1} \right)}(u-u_{0})^{2}+\cdots\,, \tag{8.18a}\] \[U_{2} =\frac{126000a_{0}^{2}\ell^{4}\mathfrak{b}_{2}}{264512a_{0}^{2}+6 112a_{0}\ell^{2}R_{1}+53\ell^{4}R_{1}^{2}}\frac{1}{(u-u_{0})^{4}}\] \[-\frac{2100a_{0}\ell^{2}\mathfrak{b}_{2}\left(112a_{0}+\ell^{2}R_ {1}\right)}{264512a_{0}^{2}+6112a_{0}\ell^{2}R_{1}+53\ell^{4}R_{1}^{2}}\frac{1 }{(u-u_{0})^{2}}-\frac{1}{5}(u-u_{0})+\cdots\,, \tag{8.18b}\] where \(\mathcal{B}_{1},\mathcal{B}_{2},\mathfrak{b}_{1}\) and \(\mathfrak{b}_{2}\) are constants of integration 18. Footnote 18: Since the free energy is dimensionless, \([L]^{0}\), then from (8.16) \(U_{1}\) and \(U_{2}\) should be \([L]^{3}\) and also \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\). Therefore we expect \(\mathcal{B}_{1,2}=\ell^{3}\mathcal{B}_{1,2}(\mathcal{R}_{1}^{3},\mathcal{R}_{ 2}^{3},\mathcal{R}_{1}^{2}\mathcal{R}_{2},\mathcal{R}_{1}\mathcal{R}_{2}^{2})\). ### Regularization The on-shell action as defined is infinite due to the infinite volume of the total space. We now introduce a regulated boundary at \(u=-\ell\log\epsilon\) and define a dimensionless cut-off \[\Lambda\equiv\frac{e^{\frac{A_{1}+A_{2}}{2}}}{\ell|R_{1}R_{2}|^{ \frac{1}{4}}}\Big{|}_{u=-\ell\log\epsilon}=\frac{1}{\epsilon|\mathcal{R}_{1} \mathcal{R}_{2}|^{\frac{1}{4}}}\,. 
\tag{8.19}\] The free energy can be computed as \[\mathcal{F}=\mathcal{F}^{\Lambda}-\mathcal{F}^{u_{0}}\,, \tag{8.20}\] where we have \[\mathcal{F}^{\Lambda} =-\frac{M_{P}^{7}\ell^{7}}{4}V_{S}V_{A}\Big{(}56\Lambda^{8}+\frac{4} {3}\Lambda^{6}\big{(}|\frac{\mathcal{R}_{1}}{\mathcal{R}_{2}}|^{\frac{1}{2}}-| \frac{\mathcal{R}_{2}}{\mathcal{R}_{1}}|^{\frac{1}{2}}\big{)}-\frac{\Lambda^{4 }}{504}\big{(}\frac{\mathcal{R}_{1}}{\mathcal{R}_{2}}+16+\frac{\mathcal{R}_{2} }{\mathcal{R}_{1}}\big{)}\] \[-\frac{\Lambda^{2}}{84672}\big{(}-2|\frac{\mathcal{R}_{1}}{ \mathcal{R}_{2}}|^{\frac{3}{2}}-29|\frac{\mathcal{R}_{1}}{\mathcal{R}_{2}}|^{ \frac{1}{2}}+29|\frac{\mathcal{R}_{2}}{\mathcal{R}_{1}}|^{\frac{1}{2}}+2| \frac{\mathcal{R}_{2}}{\mathcal{R}_{1}}|^{\frac{3}{2}}\big{)}\] \[-\frac{\log(|\mathcal{R}_{1}\mathcal{R}_{2}|\Lambda^{4})}{3793305 6}\big{(}23(\frac{\mathcal{R}_{1}}{\mathcal{R}_{2}})^{2}-104\frac{\mathcal{R}_ {1}}{\mathcal{R}_{2}}+432-104\frac{\mathcal{R}_{2}}{\mathcal{R}_{1}}+23(\frac {\mathcal{R}_{2}}{\mathcal{R}_{1}})^{2}\big{)}\] \[-\frac{19}{113799168}\big{(}(\frac{\mathcal{R}_{1}}{\mathcal{R}_{ 2}})^{2}-10\frac{\mathcal{R}_{1}}{\mathcal{R}_{2}}+\frac{4041}{19}-10\frac{ \mathcal{R}_{2}}{\mathcal{R}_{1}}+(\frac{\mathcal{R}_{2}}{\mathcal{R}_{1}})^ {2}\big{)}-\frac{\mathcal{R}_{2}\mathcal{B}_{1}+\mathcal{R}_{1}\mathcal{B}_{2 }}{\ell\mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2}}\Big{)}\] \[+\mathcal{O}(\Lambda^{-2})\,, \tag{108}\] where \[\mathcal{R}_{1}=\ell^{2}R_{1}^{UV}\quad,\quad\mathcal{R}_{2}=\ell^{2}R_{2}^{UV }\,, \tag{109}\] and \[\mathcal{F}^{u_{0}}=\frac{M_{P}^{7}}{4}V_{S}V_{A}\Big{(}\frac{875a_{0}^{3} \ell^{4}\mathfrak{b}_{2}}{264512a_{0}^{2}R_{1}+6112a_{0}\ell^{2}R_{1}^{2}+53 \ell^{4}R_{1}^{3}}-\frac{a_{0}^{3}\ell^{2}\mathfrak{b}_{1}}{R_{1}^{2}\,(40a_{ 0}+\ell^{2}R_{1})}\Big{)}\,, \tag{110}\] with \[a_{0}=e^{2A_{1}(u_{0})}\,. \tag{111}\] It should be noted that the free energy is independent of the constants of integration for \(U_{1}\) and \(U_{2}\) because it depends on the difference of the UV and IR parts i.e. equation (107). Therefore, we can choose \(\mathfrak{b}_{1}=\mathfrak{b}_{2}=0\) and use these conditions to find the values of \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) in the UV. The free energy that we have found so far depends on the UV cut-off \(\Lambda\). To find a renormalized free energy we can add counter-terms on the \(AdS\) boundary. The induced metric on this boundary is given by \[ds^{2}=\gamma_{\mu\nu}dx^{\mu}dx^{\nu}=e^{2A_{1}(u)}ds_{AdS_{d}}^{2}+e^{2A_{2} (u)}ds_{S^{n}}^{2}\Big{|}_{u=-\ell\log\epsilon}\,. \tag{112}\] We can read some scalar tensors on the \(AdS\) boundary at \(u=-\ell\log\epsilon\) as follows \[\sqrt{\gamma} =e^{dA_{1}(u)+nA_{2}(u)}\sqrt{|\zeta^{1}||\zeta^{2}|}\,, \tag{113a}\] \[R^{(\gamma)} =e^{-2A_{1}(u)}R_{1}+e^{-2A_{2}(u)}R_{2}\,,\] (113b) \[R^{(\gamma)}_{\mu\nu}R^{(\gamma)\mu\nu} =e^{-4A_{1}(u)}\frac{R_{1}^{2}}{d}+e^{-4A_{2}(u)}\frac{R_{2}^{2}}{n}\,. 
\tag{113c}\] The counter-terms required to cancel the \(\Lambda\)-dependent terms in (108) are given by \[S^{ct} =-\frac{M_{P}^{7}}{\ell}\int d^{8}x\sqrt{\gamma}\Big{(}14+\frac{ \ell^{2}}{6}R^{(\gamma)}+\frac{\ell^{4}}{144}(R^{(\gamma)}_{\mu\nu}R^{(\gamma )\mu\nu}-\frac{2}{7}R^{(\gamma)2})\] \[+\frac{\ell^{6}}{677376}(31R^{(\gamma)3}-140R^{(\gamma)}R^{( \gamma)}_{\mu\nu}R^{(\gamma)\mu\nu})-\frac{\ell^{8}}{303464448}(193R^{(\gamma) 4}\] \[-1960R^{(\gamma)2}R^{(\gamma)}_{\mu\nu}R^{(\gamma)\mu\nu}+5488(R^ {(\gamma)}_{\mu\nu}R^{(\gamma)\mu\nu})^{2})\log(\omega|\mathcal{R}_{1} \mathcal{R}_{2}|\Lambda^{4})\Big{)}\,, \tag{114}\] where \(\omega\) is a constant and defines our scheme of free energy. Defining \(\mathcal{F}^{ct}=-S^{ct}\) we find the regularized free energy as follow \[\mathcal{F}^{ren} =\mathcal{F}+\mathcal{F}^{ct}\] \[=-M_{P}^{7}\ell^{7}V_{S}V_{A}\Big{(}\frac{267\mathcal{R}_{1}^{4}-1 004\mathcal{R}_{1}^{3}\mathcal{R}_{2}-2738\mathcal{R}_{1}^{2}\mathcal{R}_{2}^ {2}-1004\mathcal{R}_{1}\mathcal{R}_{2}^{3}+267\mathcal{R}_{2}^{4}}{910393344 \mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2}}\] \[+\frac{(23\mathcal{R}_{1}^{4}-104\mathcal{R}_{1}^{3}\mathcal{R}_{ 2}+432\mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2}-104\mathcal{R}_{1}\mathcal{R}_{2} ^{3}+23\mathcal{R}_{2}^{4})}{151732224\mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2}}\log\omega\] \[-\frac{\mathcal{R}_{2}\mathcal{B}_{1}+\mathcal{R}_{1}\mathcal{B}_ {2}}{4\ell\mathcal{R}_{1}^{2}\mathcal{R}_{2}^{2}}\Big{)}\,. \tag{111}\] ### Fixing the scheme As it was already shown, an exact solution of equations of motion is the globally \(AdS_{d+n+1}\) solution (101). This solution among the regular solutions is a special case, for which \(\mathcal{R}_{1}=-\mathcal{R}_{2}\). For this solution, we can either compute the free energy directly from (100) or compute the potentials \(U_{1}\) and \(U_{2}\). For example, we find \[U_{1} =\frac{1}{192{\rm sinh}^{2}(u-u_{0}){\rm cosh}^{4}(u-u_{0})}\Big{(} 3\big{(}c_{1}+{\rm sinh}(2(u-u_{0}))\] \[-{\rm sinh}(4(u-u_{0}))+4u\big{)}-{\rm sinh}(6(u-u_{0}))\Big{)}\,, \tag{112a}\] \[U_{2} =\frac{1}{192{\rm sinh}^{2}(u-u_{0}){\rm cosh}^{4}(u-u_{0})}\Big{(} 3\big{(}c_{2}+{\rm sinh}(2(u-u_{0}))\] \[+{\rm sinh}(4(u-u_{0}))-4u\big{)}-{\rm sinh}(6(u-u_{0}))\Big{)}\,, \tag{112b}\] where \(c_{1}\) and \(c_{2}\) are constants of integration. The free energy before renormalization can be read as \[\mathcal{F}=-M_{P}^{7}\ell^{7}V_{S}V_{A}\big{(}14\Lambda^{8}-\frac{\Lambda^{4 }}{144}-\frac{1}{55296}\log(4\sqrt{3}\Lambda)\big{)}\,, \tag{113}\] which is obviously independent of \(c_{1}\) and \(c_{2}\). This result can be confirmed by using the results in equations (104) and (105) with IR boundary conditions \(\mathfrak{b}\mathfrak{1}=\mathfrak{b}_{2}=0\) when we choose \(\mathcal{R}_{1}=-\mathcal{R}_{2}=-48\) for the global solution. To renormalize (113) we can use the counter-terms in (110) with an appropriate scheme \[\omega=e^{-\frac{89}{42}}\,, \tag{114}\] which finally gives the free energy of the global \(AdS\) \[\mathcal{F}^{Global}=0\,. \tag{115}\] Now in this scheme, we can compute the free energy of all the regular solutions in the theory. This is given by \[{\cal F}^{ren} =M_{P}^{7}\ell^{7}V_{S}V_{A}\Big{(}\frac{89{\cal R}_{1}^{4}-1114{ \cal R}_{1}^{3}{\cal R}_{2}+28807{\cal R}_{1}^{2}{\cal R}_{2}^{2}-1114{\cal R}_{ 1}{\cal R}_{2}^{3}+89{\cal R}_{2}^{4}}{3186376704{\cal R}_{1}^{2}{\cal R}_{2}^{ 2}}\] \[+\frac{{\cal R}_{2}{\cal B}_{1}+{\cal R}_{1}{\cal B}_{2}}{4\ell{ \cal R}_{1}^{2}{\cal R}_{2}^{2}}\Big{)}\,. 
\tag{113}\] The logarithm of the renormalized free energy in terms of the dimensionless ratio of UV curvatures is sketched in figure 17. We should state that all different regular bulk solutions correspond to different UV sources and therefore to distinct CFTs. The distinct dual theories involve the same CFT but on \(AdS\times S\) with a different ratio of radii of curvatures. There is therefore no competition between these saddle points. We observe, however, that the free energy (on shell action) is maximal for the globally \(AdS\) solution19. Footnote 19: The logarithm of the action in figure 17 should be \(-\infty\) at the \(AdS\) solution. It is not because of a lack of perfect numerical accuracy. ## 9 Solutions with \(AdS_{d}\times S^{1}\) slices There is a special case of \(S^{n}\), namely \(n=1\), that is not covered by our previous analysis, as \(S^{1}\) has no curvature (\(R_{2}=0\)). In this section, we study this case. As Figure 17: The logarithm of the renormalized free energy vs. the logarithm of the dimensionless ratio of UV curvatures. The free energy is maximum (\({\cal F}^{ren}\leq 0\)) for global \(AdS_{d+n+1}\) solution (\(|R_{AdS}^{UV}/R_{S}^{UV}|=1\)). in the previous cases studied, whatever we say is valid if replace \(AdS_{d}\) with any \(d\)-dimensional constant negative curvature manifold. The equations of motion (11)-(12) simplify to \[\big{(}d\dot{A}_{1}+\dot{A}_{2}\big{)}^{2}-d\dot{A}_{1}^{2}-\dot{A}_{2}^{2}-e^{ -2A_{1}}R_{1}=\frac{1}{\ell^{2}}d(d+1)\,, \tag{134}\] \[d\big{(}d\ddot{A}_{1}+\ddot{A}_{2}\big{)}+d(\dot{A}_{1}-\dot{A}_{2})^{2}+e^{-2A _{1}}R_{1}=0\,, \tag{135}\] \[\ddot{A}_{1}+\dot{A}_{1}(d\dot{A}_{1}+\dot{A}_{2})-\frac{1}{d}e^{-2A_{1}}R_{1}= \ddot{A}_{2}+\dot{A}_{2}(d\dot{A}_{1}+\dot{A}_{2})\,. \tag{136}\] By solving \(\dot{A}_{2}\) and \(\ddot{A}_{2}\) from equation (134) and (135) and then inserting in (136) we find the following equation for \(A_{1}(u)\) \[de^{2A_{1}}\left(2\ell^{2}\ddot{A}_{1}+(d+1)(\ell^{2}\dot{A}_{1}^{2}-1)\right) -\ell^{2}R_{1}=0\,. \tag{137}\] This equation can be integrated to obtain \[e^{(d+1)A_{1}}\dot{A}_{1}^{2}-\frac{e^{(d-1)A_{1}}}{d(d-1)\ell^{2}}\big{(}d(d -1)e^{2A_{1}}+\ell^{2}R_{1}\big{)}+\alpha_{1}=0\,, \tag{138}\] where \(\alpha_{1}\) is a constant of integration. Given \(A_{1}\), \(A_{2}\) can be obtained from \[d(d-1)\dot{A}_{1}^{2}+2d\dot{A}_{1}\dot{A}_{2}=\frac{1}{\ell^{2}}d(d-1)+R_{1} e^{-2A_{1}}\,. \tag{139}\] ### Asymptotics Performing the same analysis as in previous sections we find the following properties for solutions with \(AdS_{d}\times S^{1}\) slices: * **Near boundary expansions:** Solving the equations of motion (134)-(136), near the putative boundary either at \(u\to+\infty\) or \(u\to-\infty\) gives expansions for scale factors of \(AdS_{d}\) and \(S^{1}\) spaces. For example, for \(d=3\) we find the following expansions: \[A_{1}(u) =\bar{A}_{1}\pm\frac{u}{\ell}-\frac{\mathcal{R}_{1}}{24}e^{\mp \frac{2u}{\ell}}-\big{(}\frac{\mathcal{R}_{1}^{2}}{1152}+\frac{C}{2}\big{)}e^ {\mp\frac{4u}{\ell}}+\mathcal{O}(e^{\mp\frac{6u}{\ell}})\,,\] (140a) \[A_{2}(u) =\bar{A}_{2}\pm\frac{u}{\ell}+\frac{\mathcal{R}_{1}}{24}e^{\mp \frac{2u}{\ell}}-\big{(}\frac{\mathcal{R}_{1}^{2}}{1152}-\frac{3C}{2}\big{)}e^ {\mp\frac{4u}{\ell}}+\mathcal{O}(e^{\mp\frac{6u}{\ell}})\,,\] (140b) where \(\mathcal{R}_{1}=\ell^{2}R_{1}e^{-2\bar{A}_{1}}\) is the dimensionless curvature parameter. 
* **Singular end-points:** Considering the expansions in (137a) and (137b), the only singular end-point possibility is when the \(AdS_{d}\) scale factor vanishes while the scale factor of the circle diverges i.e. \[A_{1}(u) =\frac{2}{d+1}\log\frac{u-u_{0}}{\ell}+\frac{1}{2}\log a_{0}+ \mathcal{O}(u-u_{0})\,,\] (141a) \[A_{2}(u) =\frac{1-d}{1+d}\log\frac{u-u_{0}}{\ell}+\frac{1}{2}\log s_{0}+ \mathcal{O}(u-u_{0})\,.\] (141b) To see this, it is easy to put \(n=1\) in equation (3.16) which gives \(\lambda_{1}=\frac{2}{d+1}\) and \(\lambda_{2}=\frac{1-d}{d+1}\) while equation (3.17) gives \(\lambda_{1}=0\) and \(\lambda_{2}=1\) which describes a regular end-point. * **Regular end-points:** Solving equations of motion (9.1)-(9.3) by inserting the expansions (3.10a) and (3.10b) for \(\lambda_{1}=0\) and \(\lambda_{2}=1\) we find the following scale factors near the regular end-point (\(S^{1}\) shrinks but \(AdS_{d}\) has a finite size) \[e^{2A_{1}(u)} =a_{0}+\frac{a_{0}d(d+1)+\ell^{2}R_{1}}{2d\ell^{2}}(u-u_{0})^{2}\] \[-\frac{(a_{0}d(d+1)+\ell^{2}R_{1})(a_{0}d(d-5)(d+1)+(d-3)\ell^{2} R_{1})}{48a_{0}d^{2}\ell^{4}}(u-u_{0})^{4}\] \[+\mathcal{O}(u-u_{0})^{6}\,,\] (9.9a) \[e^{2A_{2}(u)} =\frac{c_{0}}{\ell^{2}}(u-u_{0})^{2}+\frac{(a_{0}(2+d-d^{2})- \ell^{2}R_{1})c_{0}}{6a_{0}\ell^{4}}(u-u_{0})^{4}+\mathcal{O}(u-u_{0})^{6}\,,\] (9.9b) where \(c_{0}\) is an arbitrary positive constant. These scale factors can be read also from (3.19a) and (3.19b) by replacing \(n=1\) and \(\frac{R_{2}}{n-1}=\frac{\alpha_{0}}{\ell^{2}}\). Similar to the discussion (at the end of section 3.2.2) for the general \(S^{n}\) case we cannot have a regular end-point where \(AdS_{d}\) shrinks while \(S^{1}\) is finite. * **Bounces:** The analytic computations show that only the circle can have an A-bounce and the scale factor of \(AdS_{d}\) is always monotonic. To see this, starting from the expansions (3.32a) and (3.32b) for an \(AdS_{d}\) bounce, the only possible solution for coefficients is when \(\hat{a}_{2}=\hat{a}_{3}=\cdots=0\) i.e. the scale factor of \(AdS_{d}\) is constant. On the other hand at the \(S^{1}\) bounce, we have \[A_{1}(u) =\frac{1}{2}\log(\hat{a}_{0})+\hat{a}_{1}\frac{(u-u_{0})}{\ell}+ \hat{a}_{2}\frac{(u-u_{0})^{2}}{\ell^{2}}+\mathcal{O}(u-u_{0})^{3}\,,\] (9.10a) \[A_{2}(u) =\frac{1}{2}\log(\hat{s}_{0})+\hat{s}_{2}\frac{(u-u_{0})^{2}}{ \ell^{2}}+\hat{s}_{3}\frac{(u-u_{0})^{3}}{\ell^{3}}+\mathcal{O}(u-u_{0})^{4}\,,\] (9.10b) with the following coefficients \[\hat{a}_{1} =\pm\frac{\sqrt{d(d+1)+\frac{\ell^{2}R_{1}}{\hat{a}_{0}}}}{\sqrt {d(d-1)}}\quad,\quad\hat{a}_{2}=-\frac{d^{2}+d+\frac{\ell^{2}R_{1}}{\hat{a}_{ 0}}}{2d(d-1)}\,,\] (9.11a) \[\hat{s}_{2} =\frac{d+1}{2}\quad,\quad\hat{s}_{3}=-\frac{d(d+1)\sqrt{d^{2}+d+ \frac{\ell^{2}R_{1}}{\hat{a}_{0}}}}{6\sqrt{d(d-1)}}\,.\] (9.11b) Knowing all the properties above we have the following classes of solutions: 1. The regular solutions of \((\mathbf{R},\mathbf{B})\) type. This describes the solution outside the horizon of the black hole i.e. stretched from the horizon to the asymptotic boundary. 2. The singular solutions of \(({\bf R},{\bf A})\) type. This describes the solution behind the horizon of the black hole i.e. stretched from horizon to singularity. 3. The singular solutions of \(({\bf A},{\bf B})\) type. This describes a solution that is stretched from singularity to boundary (solutions with a naked singularity). In this case, there are also several analogs of the product space solution. One of them contains an \(AdS_{2}\) wormhole. 
We have plotted this solution in figure 18. It will be described analytically in the next subsection. Figure 18: A wormhole solution with the geometry given in equation (111). ### Exact solutions To proceed, from equations (108)-(109) we must distinguish two cases: * \(\dot{A}_{1}=0\). From the equations of motion (108)-(109) we find \[e^{2A_{1}}=-\frac{\ell^{2}R_{1}}{d(d+1)}\;\;\;,\;\;\;\ddot{A}_{2}+\dot{A}_{2}^{2}-\frac{d+1}{\ell^{2}}=0\,.\] (110) We perform the following change of variable \[A_{2}(u)=\log r(u)\,,\] (111) so that \(0\leq r<+\infty\). The equation of motion in (110) becomes \[\ddot{r}-\frac{d+1}{\ell^{2}}r=0\longrightarrow\dot{r}^{2}=\frac{d+1}{\ell^{2}}(r^{2}+k)\,,\] (112) where \(k\) is the constant of integration. Therefore the relation between the metrics in the two coordinates \(u\) and \(r\) is given by \[ds^{2} =du^{2}+e^{2A_{2}(u)}d\theta^{2}+e^{2A_{1}(u)}ds^{2}_{AdS_{d}}\] \[=\frac{\ell^{2}}{d+1}\frac{dr^{2}}{r^{2}+k}+r^{2}d\theta^{2}-\frac{\ell^{2}R_{1}}{d(d+1)}ds^{2}_{AdS_{d}}\,, \tag{9.15}\] where we have normalized the angle \(\theta\) to have period \(2\pi\). Any rescaling of that period via a rescaling of \(r\) corresponds to a rescaling of the constant \(k\). This metric describes a product space \(\mathcal{M}_{2}\times AdS_{d}\). Depending on the value of \(k\) we have different geometries for \(\mathcal{M}_{2}\): 1. For \(k>0\), the radius of \(S^{1}\) shrinks to zero size as \(r\to 0\) but the geometry is regular at this point if \[k=\frac{\ell^{2}}{d+1}\,. \tag{9.16}\] Otherwise, there is a conical singularity at \(r=0\). 2. For \(k=0\), the geometry is the Euclidean \(AdS_{2}\) (\(EAdS_{2}\)) space in Poincare coordinates but with one of them compact. 3. For \(k<0\), the geometry needs a better coordinate system, which we now describe. We return to the \(u\) coordinate and we have the following geometries respectively: * For \(k>0\) we obtain \[ds^{2}=du^{2}+k\sinh^{2}\Big{[}\frac{\sqrt{d+1}}{\ell}(u-u_{0})\Big{]}d\theta^{2}-\frac{\ell^{2}R_{1}}{d(d+1)}ds^{2}_{AdS_{d}}\,,\] (9.17) with \(u\geq 0\). The parameter \(u_{0}\) translates into an arbitrary radius for \(\theta\). This has a generic conical singularity. The regular metric has \(k\) given in (9.16), and in such a case \(\mathcal{M}_{2}\) is the Euclidean hyperboloid given by \[-(x^{0})^{2}+(x^{1})^{2}+(x^{2})^{2}=-\frac{\ell^{2}}{d+1}\,.\] (9.18) We denote this space by \(EAdS_{2}^{+}\). * \(k=0\). The associated metric in the \(u\) coordinate is \[ds^{2}=du^{2}+\exp\Big{[}\frac{2\sqrt{d+1}}{\ell}(u+c)\Big{]}d\theta^{2}-\frac{\ell^{2}R_{1}}{d(d+1)}ds^{2}_{AdS_{d}}\quad,\quad u\in\mathbb{R}\,.\] (9.19) Again the parameter \(c\) translates into an arbitrary radius for the coordinate \(\theta\). This metric, if \(\theta\) is non-compact, is \(EAdS_{2}\) in Poincare coordinates, and it is diffeomorphic to the hyperboloid. When \(\theta\) is compact, this is not true anymore. We denote this space as \(EAdS_{2}^{0}\). * \(k<0\). In this case the metric (9.15) extends over \(r\in[\sqrt{|k|},+\infty)\). This gives the metric in the \(u\) coordinate \[ds^{2}=du^{2}+|k|\cosh^{2}\Big{[}\frac{\sqrt{d+1}}{\ell}(u-u_{0})\Big{]}d\theta^{2}-\frac{\ell^{2}R_{1}}{d(d+1)}ds^{2}_{AdS_{d}}\,,\] (9.20) with \(u>u_{0}\). However, this metric is not geodesically complete and one has to extend \(u\) to all real values. The manifold is now a wormhole with \(S^{1}\) boundaries, and it is the usual \(EAdS_{2}\). We shall denote it as \(EAdS_{2}^{-}\). The constant \(u_{0}\) allows an arbitrary radius for the \(S^{1}\).
All the \(\mathcal{M}_{2}\) metrics above also exist when \(\theta\) is taking values in the real line. In all three cases above the Kretschmann scalar of the total \((d+2)\)-dimensional manifold is the same and is equal to \[\mathcal{K}=\frac{2(d+1)^{2}(3d-2)}{(d-1)\ell^{4}}\,.\] (9.21) We can obtain Minkowski signature solutions by analytically continuing \(\theta\to it\). In this case \(EAdS_{2}^{+,0}\) becomes the \(AdS_{2}\) black hole while \(EAdS_{2}^{-}\) becomes \(AdS_{2}\). * \(\dot{A}_{1}\neq 0\). In this case \(A_{2}(u)\) can be obtained from \[A_{2}(u)=\int\frac{\alpha_{1}(d-1)\ell^{2}e^{-(d+1)A_{1}}+2}{2\ell^{2}\dot{A}_ {1}}du+\alpha_{2}\,.\] (9.22) Here \(\alpha_{2}\) is another constant of integration. By defining \[A_{1}(u)=\log r(u)\,,\] (9.23) equation (9.5) becomes \[(\frac{dr}{du})^{2}\equiv f(r)=-\alpha_{1}r^{1-d}+\frac{R_{1}}{d(d-1)}+\frac{ r^{2}}{\ell^{2}}\,,\] (9.24) and equation (9.22) gives \[A_{2}(r(u))=\frac{1}{2}\log f(r)+\frac{1}{2}\log(\ell^{2}d(d-1))+\alpha_{2}\,.\] (9.25) Therefore the metric in these two coordinates are related as follows \[ds^{2} =du^{2}+e^{2A_{1}(u)}ds^{2}_{AdS_{d}}+e^{2A_{2}(u)}d\theta^{2}\] \[=\frac{dr^{2}}{f(r)}+r^{2}ds^{2}_{AdS_{d}}+\ell^{2}d(d-1)e^{2\alpha _{2}}f(r)d\theta^{2}\,, \tag{111}\] where \(f(r)\) is defined in equation (110). The last metric describes the (well-known) topological black holes with a negative cosmological constant, [61, 62], (they are reviewed in appendix F). The function \(f(r)\) in (110) has the following properties (\(d\geq 2\)). 1. \(f\rightarrow+\infty\) as \(r\rightarrow+\infty\). 2. \(r\to 0\) is always a curvature singularity of the metrics in (111) and the Kretchmann scalar is given by \[\mathcal{K}=\alpha_{1}^{2}(d^{2}-1)d^{2}r^{-2(d+1)}+\frac{2(d+1)(d+2)}{\ell^{ 4}}\,.\] (112) 3. \(f\rightarrow+\infty\) as \(r\to 0^{+}\) when \(\alpha_{1}<0\), and \(f\rightarrow-\infty\) as \(r\to 0^{+}\) when \(\alpha_{1}>0\). As we show below when \(\alpha_{1}=0\) the space is \(AdS_{d+2}\) provided \(\alpha_{2}\) is chosen appropriately. 4. At a fixed value of \(R_{1}\), there is a value \(\alpha_{1}^{crit}<0\) so that if \(\alpha_{1}<\alpha_{1}^{crit}\), then always \(f>0\) as \(r\in[0,+\infty)\). All solutions with \(\alpha_{1}<\alpha_{1}^{crit}\) have a bad naked singularity. 5. When \(\alpha_{1}=\alpha_{1}^{crit}\) then there is a single positive double zero of \(f\). In this case, the Minkowski signature solution has an extremal horizon. The geometry near this extremal horizon is AdS\({}_{2}\times\)AdS\({}_{d}\). 6. When \(\alpha_{1}^{crit}<\alpha_{1}<0\) then \(f\) has two positive zeroes with \(f\) being negative in between the zeroes. The structure of such black holes is similar to Reissner-Norstrom ones. In particular, the inner horizon is a Cauchy horizon. If the hyperbolic slice is a finite volume manifold, then such black holes have finite entropy. 7. When \(\alpha_{1}>0\), then \(f\) has a single positive zero. Beyond this zero (at small values of \(r\)), \(f<0\). 8. \(r\rightarrow\infty\) is a regular conformal boundary of the metrics in (111). 9. The relevant (Euclidean) solutions are all segments between a zero or a divergence of \(f\), while \(f\geq 0\). 10. Most of these solutions are singular. The only potentially regular Euclidean solutions are those between \(r=+\infty\) and the first non-trivial zero \(r_{*}\) for \(f\). The regular solutions are obtained by adjusting the constant \(\alpha_{2}\) as \[e^{2\alpha_{2}}=\frac{4}{d(d-1)\ell^{2}(f^{\prime}(r_{*}))^{2}}\,. 
\tag{111}\] We conclude that in this case we have two families of regular solutions in the Euclidean case: \(\bullet\) The solutions with \(\alpha_{1}^{crit}<\alpha_{1}<0\) that become RN-like black holes upon analytic continuation of \(\theta\to i\theta\). \(\bullet\) The solutions with \(\alpha_{1}>0\), that become Schwarzschild-like black holes upon analytic continuation of \(\theta\to i\theta\). Minkowski signature solutions can also be obtained by giving to the \(AdS_{d}\) a Minkowski signature. The "ground" state solution, is the extremal solution with \(\alpha_{1}=\alpha_{1}^{crit}\). It has zero temperature, but the asymptotic circle can have any radius. All other solutions have a fixed asymptotic circle radius that is correlated with their temperature. At a fixed asymptotic circle radius the non-extremal black holes have a lower free energy compared to the extremal one, [62]. ### The global \(AdS_{d+2}\) solution The solution obtained from (108) and (109) when we choose \(\alpha_{1}=0\) is \[ds^{2}=du^{2}-\frac{\ell^{2}R_{1}}{d(d-1)}\cosh^{2}\frac{u-u_{0}}{\ell}ds_{ AdS_{d}}^{2}-e^{2\alpha_{2}}\ell^{2}R_{1}\sinh^{2}\frac{u-u_{0}}{\ell}d \theta^{2}\,. \tag{112}\] Choosing \[e^{2\alpha_{2}}R_{1}=-1\,, \tag{113}\] this metric is the metric of \(AdS_{d+2}\) in global coordinates. By the following change of variables \[-\ell^{2}k\cosh^{2}\frac{u-u_{0}}{\ell}=r^{2}\;\;\;,\;\;\;\theta=\sqrt{\frac{ k}{e^{2\alpha_{2}\ell^{2}R_{1}}}}i\,t\;\;\;\;;\;\;\;k\equiv\frac{R_{1}}{d(d-1)}\,, \tag{114}\] the metric becomes \[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}ds_{AdS_{d}}^{2}\;\;\;;\;\;\;f(r )=\frac{r^{2}}{\ell^{2}}+k\,. \tag{115}\] This is the solution that has been discussed in appendix F when \(M=0\). ### Relations between parameters in two coordinates Let us for simplicity consider \(d=3\) and \(\dot{A}\neq 0\). Solving equation (107) gives us \[r^{2}(u)=\frac{1}{24}\left(\ell^{2}e^{-2\sqrt{6}c_{1}-\frac{2u}{\ell}}\left( 144\alpha_{1}+\ell^{2}R_{1}^{2}\right)+e^{2\sqrt{6}c_{1}+\frac{2u}{\ell}}-2 \ell^{2}R_{1}\right)\,, \tag{116}\] where \(c_{1}\) is the constant of integration. Moreover, there is another solution that can be found from (9.33) by replacing \(u\to-u\). At large values of \(r\) or when \(u\to+\infty\) we can find the expansions of scale factors in (9.7a) and (9.7b) by using equations (9.23) and (9.25) together with (9.24) if we choose \[\bar{A}_{1}=\sqrt{6}c_{1}-\frac{1}{2}\log 24\quad,\quad\bar{A}_{2}=\alpha_{2}+ \sqrt{6}c_{1}-\log 2\quad,\quad\alpha_{1}=-\frac{4C}{\ell^{2}}e^{4\bar{A}_{1}}\,, \tag{9.34}\] where \(\bar{A}_{1},\bar{A}_{2}\) and \(C\) are parameters in \(u\) coordinate while \(\alpha_{1},\alpha_{2}\) and \(c_{1}\) are in \(r\) coordinate. Now let us consider a solution which is regular at \(u=u_{0}\) and has a boundary at \(u\to+\infty\). The regularity at \(u=u_{0}\) implies that the scale factor of \(A_{1}\) is constant but \(e^{A_{2}}\to 0\). In \(r\) coordinate this translates to a solution where at \(r=r_{h}\) \[f(r_{h})=-\alpha_{1}r_{h}^{-2}+\frac{R_{1}}{6}+\frac{r_{h}^{2}}{\ell^{2}}=0\,. \tag{9.35}\] This describes a topological black hole with a horizon at \(r=r_{h}\), see appendix F. Now we can compare the expansions near the regular end-points. The expansions in \(u\) coordinate are given in (9.9a) and (9.9b). The two free parameters \(a_{0}\) and \(c_{0}\) then are given by \[a_{0}=r_{h}^{2}\quad,\quad c_{0}=e^{2\alpha_{2}}\frac{(12r_{h}^{2}+\ell^{2}R_{ 1})^{2}}{6r_{h}^{2}}\,. 
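As a quick consistency check (ours), one can verify directly that this expression solves the first-order equation (9.24) for \(d=3\). Writing \(P=e^{2\sqrt{6}c_{1}+\frac{2u}{\ell}}\) and \(Q=\ell^{2}\left(144\alpha_{1}+\ell^{2}R_{1}^{2}\right)e^{-2\sqrt{6}c_{1}-\frac{2u}{\ell}}\), so that \(24r^{2}=P+Q-2\ell^{2}R_{1}\) and \(PQ=\ell^{2}\left(144\alpha_{1}+\ell^{2}R_{1}^{2}\right)\), we have \[\Big{(}\frac{d(r^{2})}{du}\Big{)}^{2}=4r^{2}\dot{r}^{2}=\frac{(P-Q)^{2}}{144\ell^{2}}\,,\] while \[4r^{2}\Big{(}\frac{r^{2}}{\ell^{2}}+\frac{R_{1}}{6}-\frac{\alpha_{1}}{r^{2}}\Big{)}=\frac{(P+Q-2\ell^{2}R_{1})^{2}+4\ell^{2}R_{1}(P+Q-2\ell^{2}R_{1})-576\ell^{2}\alpha_{1}}{144\ell^{2}}=\frac{(P+Q)^{2}-4PQ}{144\ell^{2}}\,.\] The two expressions coincide, so \(\dot{r}^{2}=f(r)\) with \(f(r)\) as in (9.24).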
\tag{9.36}\] We can read the values of the sources and vevs in terms of the black hole solutions \[\mathcal{R}_{1}=\ell^{2}R_{1}e^{-2\bar{A}_{1}}=24\ell^{2}R_{1}e^{-2\sqrt{6}c_ {1}}\quad,\quad C=-\frac{1}{4}\alpha_{1}\ell^{2}e^{-4\bar{A}_{1}}=-144\omega M \ell^{2}e^{-4\sqrt{6}c_{1}}\,, \tag{9.37}\] where \(\omega\) and \(M\), the mass of the black hole, are defined in (F.8) and (F.11). We can choose \(\bar{A}_{1}=\bar{A}_{2}=0\) for simplicity then we have \(c_{1}=\frac{\log 24}{2\sqrt{6}}\) and \(\alpha_{2}=-\frac{1}{2}\log 6\) and \[\mathcal{R}_{1}=\ell^{2}R_{1}\quad,\quad C=-\frac{\omega M\ell^{2}}{4}\,. \tag{9.38}\] Knowing the above parameters, we now compute the free energy of the regular solutions in both coordinates. The Euclidean action is given by \[I_{E} =\frac{M_{P}^{3}}{2}V_{S}V_{AdS_{3}}\Big{(}e^{A_{1}+A_{2}}UR_{1} \Big{|}_{u_{0}}^{+\infty}-3e^{3A_{1}+A_{2}}\big{(}3\dot{A}_{1}+\dot{A}_{2} \big{)}\Big{|}^{u=+\infty}\] \[+e^{3A_{1}+A_{2}}(3\dot{A}_{1}+\dot{A}_{2})\Big{|}_{u_{0}}\Big{)}\,, \tag{9.39}\] where the last term is non-zero, unlike the \(n>1\) cases. In this equation \(V_{AdS_{3}}\sim 1/|R_{1}|^{\frac{3}{2}}\) is the volume of three dimensional slice and \[V_{S}=\int_{0}^{\beta}d\theta\,, \tag{9.40}\] where \(\beta\) is the length of \(S^{1}\). In (9.39), \(U\) is a scalar field given by \[(\dot{A}_{1}+\dot{A}_{2})U+\dot{U}+1=0\,, \tag{9.41}\] and by using the expansions (9.7a) and (9.7b) we find that as \(u\to+\infty\) \[U(u)=-\frac{\ell}{2}+\mathcal{B}e^{-2\frac{u}{\ell}}+(C\ell-\frac{\ell^{5}R_{1 }}{576e^{4\bar{A}_{1}}})e^{-4\frac{u}{\ell}}+\mathcal{O}(e^{-6\frac{u}{\ell}} )\,. \tag{9.42}\] Moreover, near the regular end-point \(u=u_{0}\) equations (9.9a) and (9.9b) give \[U(u)=\frac{\mathfrak{b}}{u-u_{0}}-(\frac{1}{2}+\frac{2\mathfrak{b}}{3\ell^{2}} )(u-u_{0})+\mathcal{O}(u-u_{0})^{3}\,. \tag{9.43}\] We can consider \(\mathfrak{b}=0\), so the contribution to the free energy of the first term of (9.39) at \(u=u_{0}\) is zero. On the other hand, we can solve \(U\) in \(r\) coordinate exactly, which we find \[\big{(}\frac{R_{1}}{3}+4\frac{r^{2}}{\ell^{2}}\big{)}U+2rfU^{\prime}+2rf^{ \frac{1}{2}}=0\,,\to U(r)=\frac{2c_{2}-\sqrt{6}\ell r^{2}}{2\sqrt{6r^{4}+\ell^ {2}r^{2}R_{1}-6\alpha_{1}\ell^{2}}}\,, \tag{9.44}\] where \(c_{2}\) is another constant of integration. \(U(r)\) is diverging at \(r=r_{h}\) because of (9.35). To have a regular function at this point we should have \[c_{2}=\sqrt{\frac{3}{2}}\ell r_{h}^{2}\,. \tag{9.45}\] If we expand the solution (9.44) near the boundary at \(r\to+\infty\) and change \(r\to u\) by using (9.33) we obtain \[\mathcal{B}=\left(4\sqrt{6}c_{2}+\ell^{3}R_{1}\right)e^{-2\sqrt{6}c_{1}}=\ell \left(12r_{h}^{2}+\ell^{2}R_{1}\right)e^{-2\sqrt{6}c_{1}}\,. \tag{9.46}\] Returning to (9.39), if we do the proper counter-terms we can compute the renormalized action as following \[I_{E}^{ren} =\frac{1}{2}M_{P}^{3}V_{S}V_{AdS_{3}}\big{(}e^{\bar{A}_{1}+\bar{A }_{2}}R_{1}\mathcal{B}-\frac{1}{\ell}a_{0}^{\frac{3}{2}}c_{0}^{\frac{1}{2}} \big{)}\] \[=M_{P}^{3}V_{S}V_{AdS_{3}}\sqrt{6}e^{\alpha_{2}}\ell\big{(}-\frac {r_{h}^{4}}{\ell^{2}}+\frac{1}{6}R_{1}r_{h}^{2}+\frac{1}{48}R_{1}^{2}\ell^{2} \big{)}\,. \tag{9.47}\] The last relation is the known result of free energy for topological black holes with negative cosmological constant and \(r_{h}=r_{+}\), see appendix F for more details. By using the definition of temperature, we find that \[\beta=\frac{2\pi}{(e^{A_{2}})^{\prime}}\Big{|}_{u=u_{0}}\to T=\frac{c_{0}^{ \frac{1}{2}}}{2\pi\ell}\,. 
\tag{9.48}\] The free energy of the black hole is given by \[\mathcal{F}=\frac{I_{E}^{ren}}{\beta}=(M-M_{crit})-TS\,, \tag{9.49}\] where \(M,M_{crit}\) and \(S\) are the mass, critical mass, and entropy of the black hole respectively, and are given in equations (F.11), (F.14) and (F.15). ## 10 On general Einstein manifold solutions with constant negative curvature. The general solutions with constant negative curvature found in this paper, and in many previous ones, provide a hierarchical construction of such solutions as conifolds of conifolds of conifolds, etc. A few examples are as follows: In two dimensions, the solutions to Einstein's equations with a negative cosmological constant consist, up to diffeomorphisms, of the family of manifolds \({\cal M}_{2}\) we described in the previous section. In three dimensions, the solutions to Einstein's equations with a negative cosmological constant consist, up to diffeomorphisms, of the two-parameter family of rotating \(AdS_{3}\)-Schwarzschild black holes (which also includes \(AdS_{3}\)). Consider now solutions in four dimensions with a negative cosmological constant. The maximally symmetric solution is \(AdS_{4}\) and in global coordinates, it has \(S^{1}\times S^{2}\) slices. The Euclidean symmetry is \(O(4,1)\). To this same slicing belongs also the \(AdS_{4}\)-Schwarzschild black hole with generic symmetry \(O(2)\times O(3)\). The difference between these two solutions is that, in the first, it is \(S^{2}\) that shrinks to zero size ending the geometry while in the second, it is the \(S^{1}\) that shrinks to zero size ending the geometry. There are however further solutions where the slices are \(S^{3}\), [10], with generic symmetry \(O(4)\), as well as conifold solutions with \(S^{1}\times S^{1}\times S^{1}\) (toroidal) slices that correspond to \(AdS_{4}\)-Schwarzschild black holes with a toroidal horizon and generic symmetry \(O(2)^{3}\). There are also the \(AdS_{2}\times S^{1}\) solutions studied here with generic symmetry \(O(2)\times O(2,1)\). In the place of \(AdS_{2}\) above we can have any of our \({\cal M}_{2}\) solutions. We can also replace \(S^{1}\) with \(R\). All of these four-dimensional solutions are generically distinct and provide a large class of solutions with four-dimensional constant negative curvature. The structure of their boundaries differs. Some solutions are diffeomorphic to each other, but most are distinct manifolds. We do not know if they exhaust all solutions with a negative cosmological constant. We now move to the next dimension, which is five, and describe the various conifold solutions to the constant negative curvature equations. The slices can be \((S^{1})^{4}\), \(S^{1}\times S^{3}\), \(S^{2}\times S^{2}\), \((S^{1})^{2}\times S^{2}\), \(AdS_{2}\times S^{2}\), \(AdS_{2}\times(S^{1})^{2}\), \(AdS_{3}\times S^{1}\), \(AdS_{2}\times AdS_{2}\) and \(AdS_{4}\). For example, \((S^{1})^{4}\) are the five-dimensional black holes with the toroidal horizon, \(S^{1}\times S^{3}\) are the five-dimensional black holes with \(S^{3}\) horizon, and \((S^{1})^{2}\times S^{2}\) are the five-dimensional black holes with \(S^{1}\times S^{2}\) horizon. The \(S^{2}\times S^{2}\) solution was analyzed in [15] and exhibited Efimov phenomena. In the \(AdS_{2}\times S^{2}\) and \(AdS_{2}\times(S^{1})^{2}\) solutions, \(AdS_{2}\) stands for the one-parameter family of \(AdS_{2}\) black holes. The \(AdS_{3}\times S^{1}\) solutions contain in the slice the full two-parameter family of BTZ black holes. Finally, \(AdS_{2}\times AdS_{2}\) slices have not been systematically studied so far, but we expect these solutions to have two boundaries, as neither \(AdS_{2}\) can shrink regularly to zero size. This algorithm clearly generalizes to higher dimensions. The structure of the boundaries of such solutions is variable. ## Acknowledgements We would like to thank C. Behan, T. Brennan, M. Chernodub, J. Gauntlett, C. Herzog, A. Konechny, A. Lerda, V. Niarchos, M. Roberts, C. Rosen, J. Russo, A. Stergiou, E. Tonni and A. Tseytlin for helpful conversations. This work was supported in part by CNRS grant IEA 199430. The work of A. G. is supported by Ferdowsi University of Mashhad under grant 2/60036 (1402/03/06). ## Appendix A Product space ansatz for the slice Consider the following ansatz, a block diagonal \((d+1)\)-dimensional metric \[ds^{2}=g_{ab}dx^{a}dx^{b}=du^{2}+\sum_{i=1}^{n}\mathrm{e}^{2A_{i}(u)}\zeta^{i}_{\alpha_{i}\beta_{i}}dx^{\alpha_{i}}dx^{\beta_{i}}\,, \tag{114}\] where \(\zeta^{i}_{\alpha_{i}\beta_{i}}\) is the \(d_{i}\)-dimensional metric of the \(i\)th Einstein manifold, and \(\alpha_{i}\) and \(\beta_{i}\) take values in the \(d_{i}\) coordinates of this manifold. Each Einstein manifold is associated with a different scale factor, all depending on the coordinate \(u\) only. Note that every \(d\)-dimensional slice at constant \(u\) is given by the product of \(n\) Einstein manifolds of dimension \(d_{1},\ldots,d_{n}\). For this ansatz, the Ricci tensor reads \[R_{uu}=-\sum_{k=1}^{n}d_{k}\big{(}\ddot{A}_{k}+\dot{A}_{k}^{2}\big{)}\,, \tag{115a}\] \[R_{u\alpha}=0\qquad\text{for}\qquad\alpha\neq u\,,\] (115b) \[R_{\alpha_{i}\beta_{i}}=-\big{(}\ddot{A}_{i}+\dot{A}_{i}\sum_{k=1}^{n}d_{k}\dot{A}_{k}\big{)}g_{\alpha_{i}\beta_{i}}+R^{\zeta^{i}}_{\alpha_{i}\beta_{i}}\,,\] (115c) \[R_{\alpha_{i}\beta_{j}}=0\qquad\text{for}\qquad i\neq j\,, \tag{115d}\] where \(R^{\zeta^{i}}_{\alpha_{i}\beta_{i}}\) is the Ricci tensor of the \(d_{i}\)-dimensional Einstein metric \(\zeta^{i}_{\alpha_{i}\beta_{i}}\). Thus, the Ricci scalar is \[R=-2\sum_{k=1}^{n}d_{k}\ddot{A}_{k}-\big{(}\sum_{k=1}^{n}d_{k}\dot{A}_{k}\big{)}^{2}-\sum_{k=1}^{n}d_{k}\dot{A}_{k}^{2}+\sum_{k=1}^{n}\mathrm{e}^{-2A_{k}}R^{\zeta^{k}}\,, \tag{116}\] where \(R^{\zeta^{k}}\) is the Ricci scalar of the metric \(\zeta^{k}\). We consider in general an Einstein-dilaton theory in a \(d+1\) dimensional bulk space-time. The most general two-derivative action is \[S=M_{P}^{d-1}\int d^{d+1}x\sqrt{-g}\Big{(}R-\frac{1}{2}g^{ab}\partial_{a}\varphi\partial_{b}\varphi-V(\varphi)\Big{)}\,. \tag{117}\] The energy-momentum tensor \(T_{\mu\nu}=\partial_{\mu}\varphi\partial_{\nu}\varphi-g_{\mu\nu}(\frac{1}{2}\partial_{a}\varphi\partial^{a}\varphi+V)\) is then \[T_{uu}=\frac{1}{2}\dot{\varphi}^{2}-V\,, \tag{118a}\] \[T_{\alpha_{i}\beta_{j}}=-g_{\alpha_{i}\beta_{j}}\big{(}\frac{1}{2}\dot{\varphi}^{2}+V\big{)}\,.
\tag{118b}\] Finally, the Einstein tensor reads \[G_{uu} =\frac{1}{2}\big{(}\sum_{k=1}^{n}d_{k}\dot{A}_{k}\big{)}^{2}-\frac{1} {2}\sum_{k=1}^{n}d_{k}\dot{A}_{k}^{2}-\frac{1}{2}\sum_{k=1}^{n}\mathrm{e}^{-2A_{ k}}R^{\zeta^{k}}\,, \tag{111a}\] \[G_{\alpha_{i}\beta_{i}} =\Big{(}-\big{(}\ddot{A}_{i}+\dot{A}_{i}\sum_{k=1}^{n}d_{k}\dot{A} _{k}\big{)}+\sum_{k=1}^{n}d_{k}\ddot{A}_{k}+\frac{1}{2}\big{(}\sum_{k=1}^{n}d_ {k}\dot{A}_{k}\big{)}^{2}\] \[\quad+\frac{1}{2}\sum_{k=1}^{n}d_{k}\dot{A}_{k}^{2}-\frac{1}{2} \sum_{k\neq i}\mathrm{e}^{-2A_{k}}R^{\zeta^{k}}\Big{)}g_{\alpha_{i}\beta_{i}}+ G_{\alpha_{i}\beta_{i}}^{\zeta^{i}}\,, \tag{111b}\] where \(G_{\alpha_{i}\beta_{i}}^{\zeta^{i}}\) is the Einstein tensor of the metric \(\zeta^{i}\). Since \(\zeta^{i}\) is an Einstein metric, we have \[G_{\alpha_{i}\beta_{i}}^{\zeta^{i}}=\big{(}\frac{1}{d_{i}}-\frac{1}{2}\big{)} R^{\zeta^{i}}\zeta_{\alpha_{i}\beta_{i}}=\big{(}\frac{1}{d_{i}}-\frac{1}{2} \big{)}\mathrm{e}^{-2A_{i}}R^{\zeta^{i}}g_{\alpha_{i}\beta_{i}}\,. \tag{112}\] Hence, we can rewrite the \(\alpha_{i}\beta_{i}\) component of the Einstein tensor as \[G_{\alpha_{i}\beta_{i}}=\Big{(}-\big{(}\ddot{A}_{i}+\dot{A}_{i}\sum_{k=1}^{n} d_{k}\dot{A}_{k}-\frac{1}{d_{i}}\mathrm{e}^{-2A_{i}}R^{\zeta^{i}}\big{)}+\sum_{k=1}^{ n}d_{k}\ddot{A}_{k}+\sum_{k=1}^{n}d_{k}\dot{A}_{k}^{2}+G_{uu}\Big{)}g_{\alpha_{i} \beta_{i}}\,. \tag{113}\] Therefore, the equations of motions, given by \(2G_{\mu\nu}=T_{\mu\nu}\), read \[\big{(}\sum_{k=1}^{n}d_{k}\dot{A}_{k}\big{)}^{2}-\sum_{k=1}^{n}d_ {k}\dot{A}_{k}^{2}-\sum_{k=1}^{n}\mathrm{e}^{-2A_{k}}R^{\zeta^{k}}-\frac{1}{2 }\dot{\varphi}^{2}+V=0\,, \tag{114a}\] \[-\big{(}\ddot{A}_{i}+\dot{A}_{i}\big{(}\sum_{k=1}^{n}d_{k}\dot{A} _{k}\big{)}-\frac{1}{d_{i}}\mathrm{e}^{-2A_{i}}R^{\zeta^{i}}\big{)}+\sum_{k=1} ^{n}d_{k}\ddot{A}_{k}+\sum_{k=1}^{n}d_{k}\dot{A}_{k}^{2}+\frac{1}{2}\dot{ \varphi}^{2}=0\,,\] (114b) \[\ddot{\varphi}+\sum_{k=1}^{n}d_{k}\dot{A}_{k}\dot{\varphi}- \partial_{\varphi}V=0\,. \tag{114c}\] Multiply (114b) by \(d_{i}\), sum over \(i\), divided by \(d\) and reorganize we obtain \[2(1-\frac{1}{d})\sum_{k=1}^{n}d_{k}\ddot{A}_{k}+\frac{2}{d}\sum_{i<j}d_{i}d_{j }(\dot{A}_{i}-\dot{A}_{j})^{2}+\frac{2}{d}\sum_{k=1}^{n}\mathrm{e}^{-2A_{k}}R ^{\zeta^{k}}+\dot{\varphi}^{2}=0\,, \tag{115}\] the symmetric version of (114b). These equations are valid for any choice of \(n\) Einstein metrics \(\zeta^{i}_{\alpha_{i}\beta_{i}}\). However, the existence of solutions is not guaranteed for an arbitrary choice. The difference between (114b) for different indices yields constraints on the scale factors and the curvatures \[\ddot{A}_{i}+\dot{A}_{i}\sum_{k=1}^{n}d_{k}\dot{A}_{k}-\frac{1}{d_{i}}\mathrm{e }^{-2A_{i}}R^{\zeta^{i}}=\ddot{A}_{j}+\dot{A}_{j}\sum_{k=1}^{n}d_{k}\dot{A}_{k} -\frac{1}{d_{j}}\mathrm{e}^{-2A_{j}}R^{\zeta^{j}}\,, \tag{116}\] for all \(i\) and \(j\). One can see that these constraints are satisfied by \(A_{i}=A(u)\) and \(R^{\zeta^{i}}=d_{i}\kappa\) for all \(i\), where \(\kappa\) is a constant and \(A(u)\) is a function of \(u\). 
In this case, the equations of motion (A.9a)-(A.9c) reduce to \[d(d-1)\dot{A}^{2}-e^{-2A}R^{\zeta}-\frac{1}{2}\dot{\varphi}^{2}+V= 0\,,\] (A.12a) \[2(d-1)\ddot{A}+\dot{\varphi}^{2}+\frac{2}{d}e^{-2A}R^{\zeta}=0\,,\] (A.12b) \[\ddot{\varphi}+d\dot{A}\dot{\varphi}-\partial_{\varphi}V=0\,.\] (A.12c) This could be foreseen since under these conditions there is only one scale factor and the product space is an Einstein manifold, since (A.12a)-(A.12c) are valid for any Einstein metric \(\zeta_{\mu\nu}\). ### The curvature invariants To check the regularity of the solutions we should calculate \(R^{2}\), \(R_{ab}R^{ab}\), and \(R_{abcd}R^{abcd}\) for the metrics above. The first two are straightforward to compute using (A.2a)-(A.2d). The Ricci squared is \[R_{ab}R^{ab} = R_{uu}R^{uu}+R_{\alpha_{i}\beta_{i}}R^{\alpha_{i}\beta_{i}}\] (A.13) \[= \Big{(}\sum_{i=1}^{n}d_{i}(\ddot{A}_{i}+\dot{A}_{i}^{2})\Big{)}^ {2}+\sum_{i=1}^{n}d_{i}\Big{(}\mathrm{e}^{-2A_{i}}\kappa-\big{(}\ddot{A}_{i}+ \dot{A}_{i}\sum_{j=1}^{n}d_{j}\dot{A}_{j}\big{)}\Big{)}^{2}\,.\] The Ricci scalar reads \[R=-2\sum_{i=1}^{n}d_{i}\ddot{A}_{i}-\big{(}\sum_{i=1}^{n}d_{i} \dot{A}_{i}\big{)}^{2}-\sum_{i=1}^{n}d_{i}\dot{A}_{i}^{2}+\sum_{i=1}^{n} \mathrm{e}^{-2A_{i}}R^{\zeta^{i}}\,.\] (A.14) The non-zero Riemann tensor components are given by \[R_{\alpha_{i}uu\beta_{i}} =e^{2A_{i}}\zeta^{i}_{\alpha_{i}\beta_{i}}\big{(}\ddot{A}_{i}+ \dot{A}_{i}^{2}\big{)}=-R_{u\alpha_{i}u\beta_{i}}=-R_{\alpha_{i}u\beta_{i}u}\,.\] (A.15a) \[R_{\alpha_{i}\beta_{j}\gamma_{k}\delta_{l}} =e^{2(A_{i}+A_{j})}\dot{A}_{i}\dot{A}_{j}\big{(}\delta_{il}\delta _{jk}\zeta^{i}_{\alpha_{i}\delta_{i}}\zeta^{j}_{\beta_{j}\gamma_{j}}-\delta_{ ik}\delta_{jl}\zeta^{i}_{\alpha_{i}\gamma_{i}}\zeta^{j}_{\beta_{j}\delta_{j}}\big{)}\] \[+e^{2A_{i}}\delta_{jl}\delta_{kl}\delta_{ik}R^{\zeta^{i}_{\alpha _{i}\beta_{i}\gamma_{i}\delta_{i}}}_{\alpha_{i}\beta_{i}\gamma_{i}\delta_{i}}\,,\] (A.15b) and one can see that the Riemann tensor is pairwise diagonal. So one can calculate the Kretschmann scalar as a sum of all non-zero components of the Riemann tensor \[\mathcal{K}=4K_{1}^{2}+K_{2}^{2}+2K_{3}^{2}\,,\] (A.16) where \[K_{1}=R_{u\alpha_{i}}^{\phantom{u}u\beta_{i}}=\delta_{\alpha_{ i}}^{\phantom{\alpha_{i}}\beta_{i}}(\ddot{A}_{i}+\dot{A}_{i}^{2})\,.\] (A.17) Equation (A.15b) with \(i=k\) and \(j=l\) gives \[K_{2}=R_{\alpha_{i}\beta_{j}}^{\phantom{\alpha_{i}}\gamma_{i}\delta_{j}}= \dot{A}_{i}\dot{A}_{j}(-\delta_{\alpha_{i}}^{\phantom{\alpha_{i}}\gamma_{i}} \delta_{\beta_{j}}^{\phantom{\beta_{j}}\delta_{j}})\,,\] (A.18) and with \(i=j=k=l\) gives \[K_{3}^{2}=(R_{\alpha_{i}\beta_{j}}^{\ \ \gamma_{i}\delta_{i}})^{2}=e^{-4A_{i}} \mathcal{K}^{\zeta^{i}}-4e^{-2A_{i}}(\dot{A}_{i})^{2}R^{\zeta^{i}}-2d_{i}(d_{i} -1)(\dot{A}_{i})^{4}\,.\] (A.19) Finally the Kretschmann scalar is \[\mathcal{K} =\sum_{i=1}^{n}\Big{(}e^{-4A_{i}}\mathcal{K}^{\zeta^{i}}-4e^{-2A_{ i}}(\dot{A}_{i})^{2}R^{\zeta^{i}}-2d_{i}(\dot{A}_{i})^{4}\] \[+4d_{i}(\ddot{A}_{i}+\dot{A}_{i}^{2})^{2}\Big{)}+\sum_{i,j=1}^{n}2 d_{i}d_{j}\big{(}\dot{A}_{i}\dot{A}_{j}\big{)}^{2}\,,\] (A.20) where \(\mathcal{K}^{\zeta^{i}}\) is the Kretschmann scalar related to \(\zeta^{i}\). ## Appendix B Various global coordinates on \(AdS_{d+n+1}\) and its Euclidean version In this appendix, we consider various coordinate systems of \(AdS_{d+n+1}\), both standard global coordinates as well as coordinates adapted to \(AdS_{d}\times S^{n}\) slices and their Euclidean versions. They will be important as benchmarks for the space of solutions we shall find. 
### Standard global coordinates on \(AdS_{d+n+1}\) We consider the embedding equation that defines \(AdS_{d+n+1}\) \[-(x^{0})^{2}-(x^{(-1)})^{2}+\sum_{i=1}^{d+n}(x^{i})^{2}=-\ell^{2}\,.\] (B.1) We start with the standard global coordinates. First, we parameterize \[x^{i}=r_{2}n^{i}\ \,\ \ \ i=1,2,\cdots d+n\ \,\ \ n^{i}n^{i}=1\ \ \,\ \ \ r_{2}\geq 0\,,\] (B.2a) \[x^{0}=r_{1}\cos\theta\ \,\ \ \ x^{(-1)}=r_{1}\sin\theta\ \,\ \ \ r_{1}\geq 0\,.\] (B.2b) The Minkowski signature metric in \((2,d+n)\) dimensions becomes \[ds^{2}=-(dx^{0})^{2}-(dx^{(-1)})^{2}+dx^{i}dx^{i}=-dr_{1}^{2}-r_{1}^{2}d\theta ^{2}+dr_{2}^{2}+r_{2}^{2}d\Omega_{d+n-1}^{2}\,,\] (B.3) and the constraint in (B.1) can be written as \[-r_{1}^{2}+r_{2}^{2}=-\ell^{2}\ \ \ \Rightarrow\ \ \ r_{1}^{2}-r_{2}^{2}=\ell^{2}\,.\] (B.4) We now introduce new coordinates \[r_{1}=\ell\cosh(\rho)\ \,\ \ r_{2}=\ell\sinh(\rho)\ \ \,\ \ \ell\geq 0\ \,\ \ \rho\geq 0\,,\] (B.5) and rewrite the metric in (B.3) as \[ds^{2}=-d\ell^{2}-\ell^{2}\cosh^{2}(\rho)d\theta^{2}+\ell^{2}d\rho^{2}+\ell^{2} \sinh^{2}(\rho)d\Omega^{2}_{d+n-1}\,.\] (B.6) \(AdS_{d+n+1}\) is obtained from the metric above by setting \(\ell\) to be constant \[ds^{2}_{n+d+1}=\ell^{2}\Big{(}-\cosh^{2}(\rho)d\theta^{2}+d\rho^{2}+\sinh^{2}( \rho)d\Omega^{2}_{d+n-1}\Big{)}\,.\] (B.7) The usual \(AdS\) is obtained by extending the "time" \(\theta\) from \([0,2\pi]\) to the whole real line. We summarize the embedding map of \(AdS\) in global coordinates to the \((2,d+n)\) Minkowski space \[x^{0}=\ell\cosh(\rho)\cos\theta\,\ x^{(-1)}=\ell\cosh(\rho)\sin\theta\,\ x^{i}=\ell\sinh(\rho)\ n^{i}\ \,\ \ \rho\geq 0\,.\] (B.8) The global boundary of \(AdS\) is \(\rho\to\infty\) that corresponds to \[r_{1}=\Big{(}(x^{0})^{2}+(x^{(-1)})^{2}\Big{)}^{\frac{1}{2}}\to\infty\ \ \,\ \ \ r_{2}=\Big{(}\sum_{i=1}^{n+d}x^{i}x^{i}\Big{)}^{\frac{1}{2}}\to\infty\,,\] (B.9) with their ratio \(\frac{r_{1}}{r_{2}}\) fixed. Indeed the topology of the boundary is \(S^{1}\times S^{d+n-1}\). ### Coordinates fibered over \(AdS_{d}\times S^{n}\) We now introduce new coordinates for the same space. We can separate the variables in \(x^{\mu}\) with \(\mu=-1,0,1,2,...,d\) and \(y^{i}\), \(i=1,2,...,n\) and we parametrize the first set by \(AdS_{d}\) and the second by \(S^{n}\) \[x^{\mu}=rm^{\mu}\ \ \,\ \ \ m\cdot m=-1\ \ \,\ \ \ y^{i}=\rho n^{i}\ \ \,\ \ \ n\cdot n=1\,,\] (B.10) where \(r,\rho\geq 0\), and the flat \((2,d+n)\) metric is \[ds^{2}=-dr^{2}+r^{2}ds^{2}_{AdS_{d}}+d\rho^{2}+\rho^{2}d\Omega^{2}_{n}\,.\] (B.11) The \(AdS_{n+d+1}\) constraint is \[-r^{2}+\rho^{2}=-\ell^{2}\ \ \ \Rightarrow\] (B.12a) \[r=\ell\cosh(u)\ \,\ \ \rho=\ell\sinh(u)\ \,\ \ \ u>0\,.\] (B.12b) The induced metric becomes \[ds^{2}_{n+d+1}=\ell^{2}\Big{(}du^{2}+\cosh^{2}(u)ds^{2}_{AdS_{d}}+\sinh^{2}(u)d \Omega^{2}_{n}\Big{)}\ \,\ \ \ u\geq 0\,,\] (B.13) and has one patch \(u>0\).20 Footnote 20: In general, if we have a CFT on \(AdS_{d}\times S^{n}\) the physics should depend on the ratio of the two radius scales. Here this ratio is set to one. There is a related coordinate system where we map \[\sinh(u)=\tan(\phi)\ \ \,\ \ \ du=\frac{d\phi}{\cos(\phi)}\ \ \,\ \ \ \phi\in\big{[}0,\frac{\pi}{2}\big{]}\,,\] (B.14) and the new metric becomes \[ds^{2}=\frac{\ell}{\cos^{2}(\phi)}\Big{(}d\phi^{2}+\sin^{2}(\phi)d\Omega_{n}^{ 2}+ds_{AdS_{d}}^{2}\Big{)}\,,\] (B.15) which is conformal to \(AdS_{d}\times S^{n+1}\). However as \(\phi\in\big{[}0,\frac{\pi}{2}\big{]}\), we have only one hemisphere of \(S^{n+1}\). 
We now write explicitly (B.10) \[x^{0}=r_{3}\cosh(\rho)\cos\theta\ \ \,\ \ \ x^{(-1)}=r_{3}\cosh(\rho) \sin\theta\ \ \,\ \ \ x^{i}=r_{3}\sinh(\rho)\ n^{i}\,,\] (B.16a) \[i=1,2,\cdots,d-1\ \ \,\ \ n\cdot n=1\ \ \,\ \ \ \rho\geq 0\,,\] (B.16b) \[y^{i}=r_{4}m^{i}\ \ \,\ \ i=1,2,\cdots,n+1\ \ \,\ \ m\cdot m=1\,,\] (B.16c) \[r_{3}=\ell\cosh(u)\ \ \,\ \ r_{4}=\ell\sinh(u)\ \ \,\ \ u\geq 0\,.\] (B.16d) Consider first the limit \(u\to\infty\). In this case \[A : \Big{(}(x^{0})^{2}+(x^{(-1)})^{2}\Big{)}^{\frac{1}{2}}\to \infty\,\ \Big{(}\sum_{i=1}^{d}(x^{i})^{2}\Big{)}^{\frac{1}{2}}\to\infty\,\ \Big{(}\sum_{i=1}^{n}(y^{i})^{2}\Big{)}^{\frac{1}{2}}\to \infty\,,\] (B.17) in the same way but this is part of the original boundary of \(AdS_{d+n+1}\). It does not contain the limit where \(\big{(}\sum_{i=1}^{d}(x^{i})^{2}\big{)}^{\frac{1}{2}}\to\infty\) and \(\big{(}\sum_{i=1}^{n}(y^{i})^{2}\big{)}^{\frac{1}{2}}\) remains finite or when \(\big{(}\sum_{i=1}^{n}(y^{i})^{2}\big{)}^{\frac{1}{2}}\to\infty\) and \(\big{(}\sum_{i=1}^{d}(x^{i})^{2}\big{)}^{\frac{1}{2}}\) remains finite. Therefore the part of the boundary obtained by \(u\to\infty\) is \(S^{1}\times S^{d-1}\times S^{n-1}\subset S^{1}\times S^{d+n-1}\). This piece also includes the special limit \[B : \Big{(}(x^{0})^{2}\!+\!(x^{-1})^{2}\Big{)}^{\frac{1}{2}}\to\infty\,\ \Big{(}\sum_{i=1}^{d}(x^{i})^{2}\Big{)}^{\frac{1}{2}}\to \mbox{finite}\,\ \Big{(}\sum_{i=1}^{n}(y^{i})^{2}\Big{)}^{\frac{1}{2}}\to\infty\,,\] (B.18) or equivalently \[u\to\infty\ \ \,\ \ \ \rho\to 0\ \,\ \ \ \rho e^{u}\to\mbox{finite}\,.\] (B.19) Now the boundary of \(AdS_{d}\) is when \(\rho\to\infty\). In terms of the embedding coordinates \[C : \Big{(}(x^{0})^{2}\!+\!(x^{-1})^{2}\Big{)}^{\frac{1}{2}}\to\infty\,,\ \Big{(}\sum_{i=1}^{d}(x^{i})^{2}\Big{)}^{\frac{1}{2}}\to\infty\,\ \Big{(}\sum_{i=1}^{n}(y^{i})^{2}\Big{)}^{\frac{1}{2}}\to\mbox{finite}\,.\] (B.20) This completes the missing piece of the boundary of \(u\to\infty\). The topology of the three boundary pieces is \[A+B=S^{1}\times S^{d-1}\times S^{n-1}\ \ \,\ \ \ C=S^{1}\times S^{d-1}\ \ \,\ \ \partial(AdS_{d+n+1})=A\cup C\,.\] (B.21) ### The special case \(n=0\) Consider now the special case \(n=0\). In that case, the parametrization is \[x^{0}=r_{3}\cosh(\rho)\cos\theta\ \,\ \ \ x^{(-1)}=r_{3}\cosh(\rho) \sin\theta\ \,\ \ x^{i}=r_{3}\sinh(\rho)\ n^{i}\,,\] (B.22a) \[i=1,2,\cdots,d-1\ \ \,\ \ n\cdot n=1\ \,\ \ \rho\geq 0\,,\] (B.22b) \[y=r_{4}\ \ \,\ \ r_{3}=\ell\cosh(u)\ \,\ \ \ r_{4}=\ell\sinh(u)\ \,\ \ \ u\in R\ \,\ \ y\in R\,.\] (B.22c) The metric is now \[ds_{d+1}^{2}=\ell^{2}\left(du^{2}+\cosh^{2}(u)ds_{AdS_{d}}^{2}\right)\,,\ \ \ \ u\in R\,.\] (B.23) Consider first the limit \(u\to\pm\infty\). This is embedding coordinates correspond to \[A : \left((x^{0})^{2}+(x^{(-1)})^{2}\right)^{\frac{1}{2}}\to\infty \ \,\ \ \Big{(}\sum_{i=1}^{d}(x^{i})^{2}\Big{)}^{\frac{1}{2}}\to\infty\ \ \,\ \ y\to\pm\infty\,,\] (B.24) and \(A\) is topologically \(S^{1}\times S^{d-1}\times S^{0}\). 
This also includes \[B : \left((x^{0})^{2}+(x^{-1})^{2}\right)^{\frac{1}{2}}\to\infty\ \ \,\ \ \Big{(}\sum_{i=1}^{d}(x^{i})^{2}\Big{)}^{\frac{1}{2}}\to\mbox{ finite}\ \ \,\ \ y\to\pm\infty\,.\] (B.25) This is obtained as the two limits \[u\to\pm\infty\ \ \,\ \ \ \rho\to 0\ \ \,\ \ e^{\pm u}\rho\to\mbox{finite}\,.\] (B.26) Here \(C\) is \[C : \left((x^{0})^{2}+(x^{-1})^{2}\right)^{\frac{1}{2}}\to\infty\ \ \,\ \ \ \Big{(}\sum_{i=1}^{d}(x^{i})^{2}\Big{)}^{\frac{1}{2}}\to\infty\ \ \,\ \ y\to\mbox{finite}\,,\] (B.27) and is located at the boundary \(\rho\to\infty\) of the slice. It should be stressed that all coordinate systems above, are by construction, global. ## Appendix C Analytic solutions for other signatures In this appendix, we present two more analytic solutions that exist if the signature of the metric is changed. ### The uniform solution This solution is obtained by setting \[e^{2A_{1}(u)}=\epsilon_{1}\ e^{2A(u)}\ \ \,\ \ e^{2A_{2}(u)}=\epsilon_{2}\ e^{2A(u)}\,,\] (C.1) where \(\epsilon_{1,2}\) are constants. Then, the equations (14), (15) and (16) without the scalar field become \[(d+n-1)(d+n)\big{(}\dot{A}^{2}-\frac{1}{\ell^{2}}\big{)}=(\bar{R}_{1 }+\bar{R}_{2})e^{-2A}\quad,\quad\bar{R}_{1,2}\equiv\frac{R_{1,2}}{\epsilon_{1,2 }}\,, \tag{114}\] \[(d+n-1)(d+n)\ddot{A}+(\bar{R}_{1}+\bar{R}_{2})e^{-2A}=0\,,\] (115) \[\bar{R}_{1}=\frac{d}{n}\bar{R}_{2}\,, \tag{116}\] and we have set \(V=-\frac{(d+n-1)(d+n)}{\ell^{2}}\). Adding the two first equations we obtain \[(\ddot{e^{A}})-\frac{e^{A}}{\ell^{2}}=0\,, \tag{117}\] with general solution \[e^{A}=C_{1}e^{-\frac{u}{\ell}}+C_{2}e^{\frac{u}{\ell}}\,, \tag{118}\] Then equation (114) becomes \[\dot{A}^{2}e^{2A}-\frac{e^{2A}}{\ell^{2}}=\frac{(\bar{R}_{1}+\bar{R}_{2})}{(d +n-1)(d+n)}=\frac{\bar{R}_{2}}{n(d+n-1)}\,, \tag{119}\] which implies \[C_{1}C_{2}=-\frac{\ell^{2}\bar{R}_{2}}{4n(d+n-1)}\,. \tag{120}\] We can therefore write the general solution as \[e^{A}=e^{A_{0}}\left[e^{-\frac{u}{\ell}}-\frac{\ell^{2}\bar{R}_{2}}{4e^{2A_{0 }}n(d+n-1)}e^{\frac{u}{\ell}}\right]\,. \tag{121}\] The behavior of this solution as \(u\to-\infty\) does not depend on the various arbitrary constants that appear in this solution. However, such constants affect other properties of the solution. Since \(R_{1}<0\) and \(R_{2}>0\), in order for (116) to have a non-trivial solution we must take \(\epsilon_{1}<0,\epsilon_{2}>0\) or vice versa. In the first case \(\bar{R}_{1,2}>0\), while in the second case \(\bar{R}_{1,2}<0\) \(\bullet\) If \(\bar{R}_{2}>0\) then the scale factor vanishes at a finite value \(u=u_{0}\). This is a curvature singularity of the metric. Moreover, in this case, the whole \(AdS_{d}\) part of the metric has a minus sign. \(\bullet\) If \(\bar{R}_{2}<0\) then the scale factor is regular and there is a second \(AdS\) boundary at \(u\to+\infty\). The solution describes a regular wormhole. In such a case the \(S^{n}\) part of the metric has a negative sign. 
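As an illustrative cross-check of the algebra above (not part of the derivation), the short script below verifies symbolically that the ansatz \(e^{A}=C_{1}e^{-u/\ell}+C_{2}e^{u/\ell}\) with \(C_{1}C_{2}=-\ell^{2}\bar{R}_{2}/(4n(d+n-1))\) and \(\bar{R}_{1}=\frac{d}{n}\bar{R}_{2}\) solves the two equations of motion quoted at the beginning of this subsection; the symbol names are ours.

```python
# Symbolic cross-check of the uniform solution: with A_1 = A_2 = A and
# Rbar_1 = (d/n) Rbar_2, the ansatz e^A = C1 e^{-u/l} + C2 e^{u/l} with
# C1*C2 = -l^2 Rbar_2 / (4 n (d+n-1)) should solve both equations of motion above.
import sympy as sp

u, l, C1, d, n, R2bar = sp.symbols('u ell C1 d n R2bar', positive=True)

C2 = -l**2 * R2bar / (4 * n * (d + n - 1) * C1)
eA = C1 * sp.exp(-u / l) + C2 * sp.exp(u / l)          # e^{A(u)}
R1bar = d / n * R2bar                                   # constraint Rbar_1 = (d/n) Rbar_2

Adot = sp.diff(eA, u) / eA                              # \dot{A}
Addot = sp.diff(Adot, u)                                # \ddot{A}
em2A = 1 / eA**2                                        # e^{-2A}

eq1 = (d + n - 1) * (d + n) * (Adot**2 - 1 / l**2) - (R1bar + R2bar) * em2A
eq2 = (d + n - 1) * (d + n) * Addot + (R1bar + R2bar) * em2A

print(sp.simplify(eq1), sp.simplify(eq2))               # both should print 0
```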
### The constant \(A_{2}\) solution If we set \(A_{2}=\bar{A}_{2}\) constant the equations for \(A_{1}\) become \[d(d-1)(\dot{A}_{1})^{2}-e^{-2A_{1}}R_{1}-\bar{R}_{2}-\frac{(d+n-1)(d+n)}{\ell^{2 }}=0\;\;\;,\;\;\;\bar{R}_{2}\equiv e^{-2\bar{A}_{2}}R_{2}\,,\] (C.10) \[(d+n-1)d(\ddot{A}_{1}+(\dot{A}_{1})^{2})-d(d-1)(\dot{A}_{1})^{2}+e^{-2A_{1}}R_{ 1}+\bar{R}_{2}=0\,.\] (C.11) Again we can deduce that \[e^{\dot{A}_{1}}-\frac{d+n}{d\ell^{2}}e^{A_{1}}=0\,,\] (C.12) with general solution \[e^{A_{1}}=C_{1}e^{-\frac{u}{\ell}}+C_{2}e^{\frac{u}{\ell}}\;\;\;,\;\;\;\hat{ \ell}\equiv\sqrt{\frac{d}{d+n}}\ \ell\,.\] (C.13) The first equation becomes \[d(d-1)\left(\frac{d}{du}e^{A_{1}}\right)^{2}-\left(\bar{R}_{2}+\frac{(d+n-1)( d+n)}{\ell^{2}}\right)e^{2A_{1}}=R_{1}\,,\] (C.14) which is satisfied if we choose \(A_{2}\) and \(C_{2}\) so that \[\bar{R}_{2}=-\frac{n(d+n)}{\ell^{2}}\;\;\;,\;\;\;C_{2}=-\frac{\ell^{2}R_{1}}{ 4(d+n)(d-1)C_{1}}\,,\] (C.15) and the solution is \[e^{A_{1}}=e^{\bar{A}_{1}}\left[e^{-\frac{u}{\ell}}-\frac{\ell^{2}\bar{R}_{1}}{ 4(d+n)(d-1)}e^{\frac{u}{\ell}}\right]\;\;\;,\;\;\;\bar{R}_{1}\equiv e^{-2\bar {A}_{1}}R_{1}\,,\] (C.16) where we set \(C_{1}=e^{\bar{A}_{1}}\). For this solution to exist we must take the contribution of the sphere to the metric to be with a negative signature so that (C.15) be satisfied. ## Appendix D The stress-energy tensor The vev of the stress-energy tensor is related to the constant \(C\) that appears in the Fefferman-Graham expansion of the metric near the boundary, for example, see the expansions (3.8a) and (3.8b). To show this, here for simplicity we restrict ourselves to the \(d=n=2\) case. For an asymptotically \(AdS\) space-time the metric near the boundary can be brought into the form \[ds^{2}=du^{2}+\ell^{2}e^{-\frac{2u}{\ell}}g_{ij}(u,x)dx^{i}dx^{j}\,,\] (D.1) where \(g_{ij}\) has the following expansion near the boundary when \(u\to+\infty\) \[g_{ij}(u,x)=g_{ij}^{(0)}(x)+e^{\frac{2u}{\ell}}g_{ij}^{(2)}(x)+e^{\frac{4u}{ \ell}}\big{(}g_{ij}^{(4)}(x)+\frac{2u}{\ell}h_{ij}^{(4)}(x)\big{)}+\cdots\,,\] (D.2) where \(g^{(0)}_{ij}(x)\) corresponds to the boundary condition for the metric. Since we have the second-order equations of motion, the two independent functions are \(g^{(0)}_{ij}(x)\) and \(g^{(4)}_{ij}(x)\) which the latter is related to the expectation value of the stress-energy tensor of the dual theory. The other functions, \(g^{(2)}_{ij}(x)\) and \(h^{(4)}_{ij}(x)\) are determined in terms of \(g^{(0)}_{ij}(x)\) \[g^{(2)}_{ij}=\frac{1}{2}R_{ij}-\frac{1}{12}Rg^{(0)}_{ij}\,, \tag{115a}\] \[g^{(4)}_{ij}=\frac{1}{8}g^{(0)}_{ij}\big{[}(Trg^{(2)})^{2}-Tr[(g ^{(2)})^{2}]\big{]}+\frac{1}{2}(g^{(2)})^{2}_{ij}-\frac{1}{4}Tr[g^{(2)}]g^{(2) }_{ij}+T_{ij}\,,\] (115b) \[h^{(4)}_{ij}=\frac{1}{16\sqrt{g^{(0)}}}\frac{\delta}{\delta g^{( 0)ij}}\int d^{4}x\sqrt{g^{(0)}}(R_{ij}R^{ij}-\frac{1}{3}R^{2})\,, \tag{115c}\] where the integrand in the last term, is the conformal anomaly in \(d+n=4\) dimensions [63, 64]. 
To read the \(T_{ij}\) we first compute \(g^{(2)}_{ij}\) and the first part of \(g^{(4)}_{ij}\) by using \[g^{(0)}_{ij}dx^{i}dx^{j}=e^{2\bar{A}_{1}}\zeta^{(1)}_{\alpha\beta}dx^{\alpha} dx^{\beta}+e^{2\bar{A}_{2}}\zeta^{(2)}_{\mu\nu}dx^{\mu}dx^{\nu}\,, \tag{116}\] and we find (we set \(\bar{A}_{1}=\bar{A}_{2}=0\)) \[g^{(2)}_{\alpha\beta}=\frac{1}{144}(2R_{1}-R_{2})^{2}\zeta^{(1) }_{\alpha\beta}, g^{(2)}_{\mu\nu}=\frac{1}{144}(R_{1}-2R_{2})^{2}\zeta^{(2)}_{\mu\nu}\,, \tag{117a}\] \[g^{(4)}_{\alpha\beta}=\frac{1}{576}(R_{1}+R_{2})^{2}\zeta^{(1)}_ {\alpha\beta}+T_{\alpha\beta}, g^{(4)}_{\mu\nu}=\frac{1}{576}(R_{1}+R_{2})^{2}\zeta^{(2)}_{\mu \nu}+T_{\mu\nu}\,, \tag{117b}\] On the other hand, we can compute the scale factors from equations of motion. The results for \(d=n=2\) are given by \[A_{1}=\log a_{0}-\frac{u}{\ell}+a_{2}e^{\frac{2u}{\ell}}+a_{4}e ^{\frac{4u}{\ell}}+a_{5}\frac{u}{\ell}e^{\frac{4u}{\ell}}+\cdots\,, \tag{118a}\] \[A_{2}=\log s_{0}-\frac{u}{\ell}+s_{2}e^{\frac{2u}{\ell}}+s_{4}e ^{\frac{4u}{\ell}}+s_{5}\frac{u}{\ell}e^{\frac{4u}{\ell}}+\cdots\,, \tag{118b}\] with coefficients (\(a_{0}=e^{\bar{A}_{1}},s_{0}=e^{\bar{A}_{2}}\)) \[a_{2}=-\frac{\ell^{2}}{24}\big{(}\frac{2R_{1}}{a_{0}^{2}}-\frac{ R_{2}}{s_{0}^{2}}\big{)}\quad,\quad s_{2}=\frac{\ell^{2}}{24}\big{(}\frac{R_{1}}{a _{0}^{2}}-\frac{2R_{2}}{s_{0}^{2}}\big{)}\,, \tag{119a}\] \[a_{4}=-\frac{\ell^{4}\big{(}5a_{0}^{4}R_{2}^{2}-8a_{0}^{2}R_{1} R_{2}s_{0}^{2}+5R_{1}^{2}s_{0}^{4}\big{)}}{2304a_{0}^{4}s_{0}^{4}}-C\,,\] (119b) \[s_{4}=-\frac{\ell^{4}\big{(}5a_{0}^{4}R_{2}^{2}-8a_{0}^{2}R_{1} R_{2}s_{0}^{2}+5R_{1}^{2}s_{0}^{4}\big{)}}{2304a_{0}^{4}s_{0}^{4}}+C\,,\] (119c) \[s_{5}=-a_{5}=-\frac{\ell^{4}}{192}\big{(}\frac{R_{1}^{2}}{a_{0}^ {4}}-\frac{R_{2}^{2}}{s_{0}^{4}}\big{)}\,. \tag{119d}\] Similar to \(d+n=8\) in (105a) and (105b) the \(\frac{u}{\ell}e^{\frac{4u}{\ell}}\) terms in (118a) and (118b) are the conformal anomalous terms in \(d+n=4\). From the above expansions we can read \(g_{ij}^{(4)}\) from the near boundary expansion (104) \[g_{\alpha\beta}^{(4)} =\Big{[}\frac{\ell^{4}}{1152}(11R_{1}^{2}-8R_{1}R_{2}-R_{2}^{2})-2C \Big{]}\zeta_{\alpha\beta}^{(1)}\,, \tag{107a}\] \[g_{\mu\nu}^{(4)} =\Big{[}\frac{\ell^{4}}{1152}(11R_{2}^{2}-8R_{1}R_{2}-R_{1}^{2})+ 2C\Big{]}\zeta_{\mu\nu}^{(2)}\,. \tag{107b}\] By comparing the results of (104b) with (107a) and (107b) we can read the stress-energy tensor components as (again we assume \(\bar{A}_{1}=\bar{A}_{2}=0\)) \[T_{\alpha\beta} =\frac{1}{384}\left(3R_{1}^{2}-4R_{1}R_{2}-R_{2}^{2}-768C\right) \zeta_{\alpha\beta}^{(1)}\,, \tag{108a}\] \[T_{\mu\nu} =\frac{1}{384}\left(3R_{2}^{2}-4R_{1}R_{2}-R_{1}^{2}+768C\right) \zeta_{\mu\nu}^{(2)}\,. \tag{108b}\] It turns out that \(T_{ij}\) and therefore \(C\) is proportional to the vev of the stress-energy tensor of the boundary CFT \[T_{ij}=\frac{1}{4(M_{P}\ell)^{3}}\langle T_{ij}\rangle\,. \tag{109}\] Therefore we can write \[\langle T_{ij}\rangle=4(M_{P}\ell)^{3}\Bigg{[}\frac{T_{CFT}}{4} \begin{pmatrix}\zeta_{\alpha\beta}^{(1)}&0\\ 0&\zeta_{\mu\nu}^{(2)}\end{pmatrix}+\hat{T}_{CFT}\begin{pmatrix}\zeta_{\alpha \beta}^{(1)}&0\\ 0&-\zeta_{\mu\nu}^{(2)}\end{pmatrix}\Bigg{]}\,, \tag{110}\] where the trace part \(T_{CFT}\) and traceless part \(\hat{T}_{CFT}\) are defined \[T_{CFT} =\frac{1}{96}\left(R_{1}^{2}-4R_{1}R_{2}+R_{2}^{2}\right)\,, \tag{111a}\] \[\hat{T}_{CFT} =\frac{1}{96}\big{(}\frac{1}{2}R_{1}^{2}-\frac{1}{2}R_{2}^{2}-48C \big{)}\,. 
\tag{111b}\] ## Appendix E Perturbations around the product space solution We consider a solution that is a perturbation around the product space solution. For notational simplicity, we choose a new variable \[z\equiv\sqrt{\frac{d+n}{n}}\frac{(u-u_{0})}{\ell}\,. \tag{112}\] The scale factors are defined as follows \[A_{1}(z)=A_{1}^{(0)}(z)+\delta A_{1}(z)\quad,\quad A_{2}(z)=A_{2}^{(0)}(z)+\delta A_{2}(z)\,, \tag{113}\] where \[A_{1}^{(0)}(z)=\frac{1}{2}\log\Big{[}-\frac{\ell^{2}R_{1}}{d(d+n)}\Big{]}\,, \tag{114a}\] \[A_{2}^{(0)}(z)=\frac{1}{2}\log\Big{[}\frac{\ell^{2}R_{2}}{(n-1)(d+n)}\sinh^{2}(z)\Big{]}\,, \tag{114b}\] are the product space scale factors. We can insert (E.2) into the equation of motion (14) and read off \(\delta A_{2}^{\prime}(z)\) and \(\delta A_{2}^{\prime\prime}(z)\). Then, by substituting these derivatives into either (15) or (16), we find the following differential equation for \(\delta A_{1}(z)\) \[-2n\delta A_{1}+n\coth(z)\delta A_{1}^{\prime}+\delta A_{1}^{\prime\prime}=0\,.\] (E.4) The solution of this equation is \[\delta A_{1} =\frac{C_{1}}{\cosh^{\frac{m+n}{2}}(z)}\,_{2}F_{1}\left(\frac{m+n}{4},\frac{1}{4}(m+n+2);\frac{n+1}{2};\tanh^{2}(z)\right)\] \[+\frac{C_{2}\tanh^{1-n}(z)}{\cosh^{\frac{m+n}{2}}(z)}\,_{2}F_{1}\left(\frac{1}{4}(m-n+2),\frac{1}{4}(m-n+4);\frac{3-n}{2};\tanh^{2}(z)\right)\,,\] (E.5) where \(C_{1}\) and \(C_{2}\) are two constants of integration and we have defined \[m\equiv\sqrt{n(n+8)}\,.\] (E.6) Expanding around the end-point \(u=u_{0}\), or equivalently \(z=0\), we read \[\delta A_{1}=C_{1}\big{(}1+\frac{nz^{2}}{n+1}+\mathcal{O}(z^{4})\big{)}+C_{2}z^{-n}\big{(}z-\frac{n(n+5)z^{3}}{6(n-3)}+\mathcal{O}(z^{4})\big{)}\,.\] (E.7) This expansion shows that, in order for the scale factor of \(AdS\), i.e. \(e^{2A_{1}^{(0)}(u)+2\delta A_{1}(u)}\), to be finite as \(z\to 0\), we should choose \[C_{2}=0\,.\] (E.8) We can also expand \(\delta A_{1}\) near the UV boundary as \(z\to+\infty\) \[\delta A_{1}=C_{1}\frac{(m-2)2^{n-2}\Gamma\left(\frac{m}{2}-1\right)\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{m+n}{2}\right)}\,(e^{z})^{\frac{m-n}{2}}+\cdots\,,\] (E.9) which means that, although the fluctuations are small near the IR end-point, they grow exponentially as \(z\) moves toward the UV boundary. The equation of motion for \(\delta A_{2}\) in terms of the new variable \(z\) is given by \[\coth(z)\,(d\delta A_{1}^{\prime}+(n-1)\delta A_{2}^{\prime})-d\delta A_{1}+(n-1)\text{csch}^{2}(z)\delta A_{2}=0\,.\] (E.10) The solution is obtained as \[\delta A_{2}=C_{3}\coth(z)+\coth(z)\int_{1}^{z}\frac{d}{n-1}\tanh(w)\,(\tanh(w)\delta A_{1}-\delta A_{1}^{\prime})\,dw\,,\] (E.11) where \(C_{3}\) is another constant of integration. Equation (E.11) is hard to evaluate in closed form, but to see the series expansion of \(\delta A_{2}\) we can solve (E.10) near \(z=0\). The series is \[\delta A_{2}=-C_{1}(\frac{d}{3(n+1)}z^{2}+\frac{d(8n-3)}{45(n+1)(n+3)}z^{4}+\cdots)\,.\] (E.12) Here the constant of integration \(C_{3}\) in (E.11) is fixed in terms of \(C_{1}\) so as to have a regular solution for the scale factor of the sphere, \(A_{2}\).
Moreover, near the UV as \(z\to+\infty\) we have \[\delta A_{2}=C_{1}\frac{2^{n-2}d\left(m-n-2\right)\Gamma\left(\frac{m}{2} \right)\Gamma\left(\frac{n-1}{2}\right)}{\sqrt{\pi}(n-m)\Gamma\left(\frac{m+n} {2}\right)}\left(e^{z}\right)^{\frac{m-n}{2}}+\cdots\,,\] (E.13) however, here unlike the \(\delta A_{1}\), the fluctuations remain small compared to the leading term which is growing like \(e^{2z}\) because \[2>\frac{m-n}{2}\quad,\quad\text{for}\quad n>0\,.\] (E.14) ## Appendix F Topological Black holes with a negative cosmological constant We consider solutions to the Einstein equation with a negative cosmological constant \[G_{\mu\nu}=\frac{d(d+1)}{\ell^{2}}\,,\] (F.1) in \(d+2\) dimensions, with an ansatz \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}h_{ij}(x)dx^{i}dx^{j}\,.\] (F.2) \[f=k-\frac{\omega_{d}M}{r^{d-1}}+\frac{r^{2}}{\ell^{2}}\quad,\quad\omega_{d}= \frac{16\pi G}{d\text{\,Vol($h$)}}\quad,\quad\text{\it Vol($h$)}=\int d^{d}x \sqrt{h}\,,\] (F.3) and \(h_{ij}\) is a constant curvature metric \[R_{ij}(h)=(d-1)k\ h_{ij}\,.\] (F.4) If the constant curvature manifold \(h\) is maximally symmetric, then \[R_{ijkl}(h)=k(h_{ik}h_{jl}-h_{il}h_{jk})\,.\] (F.5) The \(M=0\) solution above is isomorphic to a maximally symmetric constant curvature space satisfying \[R_{\mu\nu\rho\sigma}=-\frac{1}{\ell^{2}}(g_{m\rho}g_{\nu\sigma}-g_{\mu\sigma} g_{\nu\rho})\,.\] (F.6) Therefore, the solution with \(M=0\) is locally isometric to AdS\({}_{d+2}\) space, but the topology depends on the sign of \(k\). The boundary is conformally equivalent to \(AdS_{d}\times S^{1}\) if we take the \(t\) coordinate to be an angle \(t\in[0,2\pi]\). The metric above is invariant under the following rescaling \[t\to\frac{t}{\lambda}\quad,\quad r\to\lambda\ r\quad,\quad k\to\lambda^{2}k \quad,\quad M\to\lambda^{d+1}M\quad,\quad h_{ij}\to\frac{h_{ij}}{\lambda^{2}}\,.\] (F.7) By choosing \(\lambda=\frac{1}{\sqrt{|k|}}\) when \(k\neq 0\) the metric can be written as \[f=\epsilon-\frac{\omega_{d}M}{r^{d-1}}+\frac{r^{2}}{\ell^{2}}\;\;\;,\;\;\;\epsilon =0,\pm 1\;\;\;,\;\;\;\omega_{d}=\frac{16\pi G}{d\mbox{\it Vol}(h)}\;\;,\;\;\mbox{ \it Vol}(h)=\!\!\int d^{d}x\sqrt{h}\,, \tag{111}\] and \(h_{ij}\) is a constant curvature metric with \[R_{ij}(h)=(d-1)h_{ij}\,. \tag{112}\] In the maximally symmetric case, the horizon surface can be an \(S^{d}\) or any quotient, \(T^{3}\) or any quotient, or a compact quotient of \(AdS_{3}\). We now take \(d=3\) and set \(k=-|k|\). The equation for the horizon position is \[-|k|\ell^{2}r^{2}-\omega_{3}M\ell^{2}+r^{4}=0\;\;\;\;\rightarrow\;\;\;\;r_{ \pm}^{2}=\ell^{2}\frac{|k|\pm\sqrt{k^{2}+4\frac{\omega_{3}M}{\ell^{2}}}}{2}\,. \tag{113}\] As \(r=0\) is curvature singularity, we are interested in solutions with \(r>0\). We can distinguish the following cases 1. \(\omega_{3}M<-\frac{k^{2}\ell^{2}}{4}\equiv\omega_{3}M_{crit}\). In this case, the solutions are complex and there is no horizon. 2. \(\omega_{3}M=-\frac{k^{2}\ell^{2}}{4}\equiv\omega_{3}M_{crit}\). We have a double root \(r_{+}=r_{-}=\ell\sqrt{\frac{|k|}{2}}\). This is an extremal horizon. 3. \(0>\omega_{3}M>-\frac{k^{2}\ell^{2}}{4}\). There are two distinct real roots with \(r_{+}\) the largest. This is a case similar to the RN black holes and \(r_{-}\) is a Cauchy horizon. 4. \(M=0\). In this case \(r_{+}=\ell\sqrt{|k|}\) while \(r_{-}=0\). Now \(r=0\) is not anymore a curvature singularity and the solution is now locally \(AdS_{5}\) 5. \(M>0\). 
In this case, there is a single root \(r_{+}>0\) and the black hole structure is as in Schwarzschild. Parametrizing \[M=\frac{r_{+}^{d-1}}{\omega_{d}}\left(\frac{r_{+}^{2}}{\ell^{2}}-|k|\right)\,,\] (114) the temperature is given in general \(d\) as \[T=\frac{(d+1)r_{+}^{2}-(d-1)|k|\ell^{2}}{4\pi\ell^{2}r_{+}}\,.\] (115) When \(T=0\), \[r_{+}^{2}=\frac{(d-1)}{d+1}|k|\ell^{2}\equiv r_{crit}^{2}\,,\] (116) \[M=M_{crit}=-\frac{2}{(d+1)\omega_{d}}\left(\frac{d-1}{d+1}\right)^{\frac{d-1}{2}}|k|^{\frac{d+1}{2}}\ell^{d-1}\,. \tag{111}\] Using the extremal solution \(M=M_{crit}\) as the reference solution, we can write the energy and entropy as \[E=M-M_{crit}\quad,\quad S=\frac{\text{Vol}(h)\ r_{+}^{d}}{4G}\,. \tag{112}\] The specific heat is \[\frac{\partial E}{\partial T}=\frac{4\pi r_{+}^{d-1}}{\omega_{d}}\frac{(d+1)r_{+}^{2}-(d-1)|k|\ell^{2}}{(d+1)r_{+}^{2}+(d-1)|k|\ell^{2}}=\frac{4\pi r_{+}^{d-1}}{\omega_{d}}\frac{r_{+}^{2}-r_{crit}^{2}}{r_{+}^{2}+r_{crit}^{2}}\,. \tag{113}\] It is clear that for \(M>M_{crit}\) all solutions are thermodynamically stable. More details, as well as the analysis of the thermodynamics and possible phase transitions, can be found in [61, 62].
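As a small numerical illustration of the mass ranges discussed above (the example values are ours, in units where \(\ell=\omega_{3}=1\) and \(k=-1\), for \(d=3\)), the sketch below evaluates the outer horizon radius and the temperature in each case.

```python
# Numerical illustration of the d = 3, k < 0 horizon structure (arbitrary example values).
import math

k, l, w3, d = -1.0, 1.0, 1.0, 3                     # |k| = ell = omega_3 = 1

def r_plus(M):
    disc = k**2 + 4.0 * w3 * M / l**2               # discriminant appearing in r_+/-^2
    if disc < 0:
        return None                                 # case 1: no horizon
    return math.sqrt(l**2 * (abs(k) + math.sqrt(disc)) / 2.0)

def temperature(rp):
    return ((d + 1) * rp**2 - (d - 1) * abs(k) * l**2) / (4.0 * math.pi * l**2 * rp)

M_crit = -k**2 * l**2 / (4.0 * w3)                  # extremal mass (case 2)
for M in (M_crit - 0.1, M_crit, -0.1, 0.0, 1.0):    # cases 1, 2, 3, 4, 5
    rp = r_plus(M)
    if rp is None:
        print(f"M = {M:+.2f}: no horizon")
    else:
        print(f"M = {M:+.2f}: r_+ = {rp:.3f}, T = {temperature(rp):.4f}")
```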
2302.14739
Deep Learning for Mean Field Optimal Transport
Mean field control (MFC) problems have been introduced to study social optima in very large populations of strategic agents. The main idea is to consider an infinite population and to simplify the analysis by using a mean field approximation. These problems can also be viewed as optimal control problems for McKean-Vlasov dynamics. They have found applications in a wide range of fields, from economics and finance to social sciences and engineering. Usually, the goal for the agents is to minimize a total cost which consists in the integral of a running cost plus a terminal cost. In this work, we consider MFC problems in which there is no terminal cost but, instead, the terminal distribution is prescribed. We call such problems mean field optimal transport problems since they can be viewed as a generalization of classical optimal transport problems when mean field interactions occur in the dynamics or the running cost function. We propose three numerical methods based on neural networks. The first one is based on directly learning an optimal control. The second one amounts to solve a forward-backward PDE system characterizing the solution. The third one relies on a primal-dual approach. We illustrate these methods with numerical experiments conducted on two families of examples.
Sebastian Baudelet, Brieuc Frénais, Mathieu Laurière, Amal Machtalay, Yuchen Zhu
2023-02-28T16:41:24Z
http://arxiv.org/abs/2302.14739v1
# Deep Learning for Mean Field Optimal Transport ###### Abstract Mean field control (MFC) problems have been introduced to study social optima in very large populations of strategic agents. The main idea is to consider an infinite population and to simplify the analysis by using a mean field approximation. These problems can also be viewed as optimal control problems for McKean-Vlasov dynamics. They have found applications in a wide range of fields, from economics and finance to social sciences and engineering. Usually, the goal for the agents is to minimize a total cost which consists in the integral of a running cost plus a terminal cost. In this work, we consider MFC problems in which there is no terminal cost but, instead, the terminal distribution is prescribed. We call such problems mean field optimal transport problems since they can be viewed as a generalization of classical optimal transport problems when mean field interactions occur in the dynamics or the running cost function. We propose three numerical methods based on neural networks. The first one is based on directly learning an optimal control. The second one amounts to solve a forward-backward PDE system characterizing the solution. The third one relies on a primal-dual approach. We illustrate these methods with numerical experiments conducted on two families of examples. ## 1 Introduction Mean field games (MFGs) have been introduced by Lasry and Lions [41, 42, 43] and Caines, Huang and Malhame [40, 39] to approximate Nash equilibria in games with a very large number of players. At a high level, the main idea is to use a mean field approximation to represent the state of the population, and then to focus on the interactions between a single representative player and the distribution of the states of the other players. Mean field control (MFC) [14] relies on a similar approximation but aims at representing situations in which a large number of agents cooperate to minimize a common social cost. The problem can be interpreted as an optimal control problem for a McKean-Vlasov (MKV) stochastic differential equation (SDE) or an optimal control for a Kolmogorov-Fokker-Planck (KFP) partial differential equation (PDE). In the past decade, the analysis of both MFGs and MFC problems has been extensively developed, see e.g. [14] for an introduction to this topic, and [21] for a probabilistic viewpoint. In the most common setup, the players try to minimize a total cost which is composed of a running cost integrated over time and a terminal cost. These costs generally account for the efforts made to control the dynamics as well as the preferences for some states over others. Another class of models has been introduced, in which there is no terminal cost and instead the terminal distribution of the population is imposed as a constraint. Nash equilibria have been studied under the name of planning problem for mean field game. This class of problems has been analyzed mostly using PDE-based techniques [1, 50, 51, 48, 34, 16]. In the special case of linear dynamics and a quadratic running cost in the control, the problem is related to the Schrodinger bridge problem, and equilibrium conditions can be phrased in terms of ordinary differential equations (ODEs) [25, 27, 26, 28, 29]. This research direction is tightly connected to optimal transport (OT). Benamou and Brenier proposed in [10] a fluid mechanics framework for the \(L^{2}\) Monge-Kantorovich mass transport problem. 
Many works built on this approach to relate optimal transport and optimal control problems for continuity equations. Of particular interest for MFGs is the work [19], which clarified the link between geodesics for a class of distances between probability measures and a PDE system similar to the one arising in MFGs. For more background on OT, we refer the interested reader to the monographs [58, 59, 55, 49, 8]. However, the solutions of MFGs correspond to Nash equilibria, and hence, in general, MFGs do not admit a variational structure. Furthermore, in many applications, it is not immediately clear to us why selfish players caring only about their individual costs would manage to agree and reach a target terminal distribution. Imposing a fixed terminal distribution seems more natural in the MFC setting, where the agents behave in a cooperative way to minimize the social cost. In the present work, we focus on such MFC with planning problems, in which a mean field of agents try to collectively minimize a social cost while ensuring that a fixed distribution is attained at the terminal time. Since the work of Benamou and Brenier [10], several numerical methods have been investigated for similar problems, including MFGs with planning. Achdou et al. have proposed in [1] a method based on finite differences and Newton method to solve the PDE system of MFG with planning. Benamou and Carlier have used in [11, 13] an Augmented Lagrangian method approach with the alternating direction method of multipliers to solve OT and MFG (without planning). Similar methods have been used in [9, 5] to solve MFGs and MFC problems (still without planning). Benamou et al. proposed in [12] a method to solve MFG with planning through entropy regularization and Sinkhorn algorithm. Recently, several deep learning methods have been proposed to solve high-dimensional optimal control problems and PDEs, such as the DeepBSDE method [36, 37, 35], the Deep Galerkin Method [57] and physics-informed neural networks [52]. Some of these methods have been extended to MFGs and MFC problems. In particular, [7, 22] proposed deep learning methods to solve the PDE systems arising in mean field problems, [23, 32, 31] introduced deep learning methods for differential MFC problems. Ruthotto et al. introduced a deep learning method for variational MFG with degenerate diffusion in [54]. Lin et al. introduced a deep learning method in [45] that utilizes the primal-dual relationship of variational MFG. Cao et al. noticed a connection between MFGs, generative adversarial networks and OT in [18]. We refer the interested reader to e.g. [24, 33, 38] for recent surveys on this topic. The work most related to ours is the work of Liu et al. in [46], where they considered the planning problems in a class of MFGs based on a generalized version of the Schrodinger bridge problem and proposed a neural network-based numerical method to solved it. The main goal of this paper is to propose numerical methods based on deep learning to solve MFC problems with planning constraint, that we will call mean field optimal transport problems. To the best of our knowledge, the theory remains to be investigated in detail, and this is beyond the scope of the present work. Here, we proceed formally when needed, and we focus on the numerical aspects using machine learning tools. The rest of the paper is organized as follows. In Section 2, we introduce the problem and discuss several examples. 
In Section 3, we describe three numerical methods, each based on a different approach for the problem. In Section 4, we present numerical results on several benchmark problems. ## 2 Definition of the problem Before presenting the mean field optimal transport problem, let us first recall the definition of a typical mean field control problem. Let \(T\) be a time horizon. Let \(\mathcal{Q}=\mathbb{R}^{d}\) and \(\mathcal{Q}_{T}=[0,T]\times\mathcal{Q}\) denote the space domain and the time-space domain. Denote by \(\mathcal{P}_{2}(\mathcal{Q})\) the set of square-integrable probability measures on \(\mathcal{Q}\). Let \(f:\mathcal{Q}\times\mathcal{P}_{2}(\mathcal{Q})\times\mathbb{R}^{k}\to \mathbb{R}\) be a running cost function, \(g:\mathcal{Q}\times\mathcal{P}_{2}(\mathcal{Q})\to\mathbb{R}\) be a terminal cost function, \(b:\mathcal{Q}\times\mathcal{P}_{2}(\mathcal{Q})\times\mathbb{R}^{k}\to \mathbb{R}^{d}\) be a drift function and \(\sigma\in\mathbb{R}\) be a non-negative constant diffusion coefficient. In a classical MFC problem with given initial distribution \(\rho_{0}\) in \(\mathcal{P}_{2}(\mathcal{Q})\), the goal is to find a feedback control \(v^{*}:\mathcal{Q}_{T}\to\mathbb{R}^{k}\) minimizing: \[J^{MFC}:v\mapsto\mathbb{E}\left[\int_{0}^{T}f(X_{t}^{v},\mu^{v}(t),v(t,X_{t}^{ v}))\mathrm{d}t+g(X_{T}^{v},\mu^{v}(T))\right] \tag{1}\] where \(\mu^{v}(t)\) is the distribution of \(X_{t}^{v}\), under the constraint that the process \(X^{v}=(X_{t}^{v})_{t\geq 0}\) solves the SDE \[\begin{cases}X_{0}^{v}\sim\rho_{0}\\ \mathrm{d}X_{t}^{v}=b(X_{t}^{v},\mu^{v}(t),v(t,X_{t}^{v}))\mathrm{d}t+\sigma \mathrm{d}W_{t},\qquad t\geq 0,\end{cases} \tag{2}\] where \(W\) is a standard \(d\)-dimensional Brownian motion. It would also be interesting to consider open-loop controls, but since we are motivated by numerical applications, we restrict our attention to feedback controls. The cost (1) can be interpreted either as the expected cost for a single representative player, or as the average cost for the whole population, which we refer to as the social cost. In this work, we are interested in a modified version of the above problem, where instead of having a terminal cost, a terminal distribution is imposed. This type of problem encompasses optimal transport as a special case, but it may incorporate mean field interactions in the drift and the running cost. For this reason, we will refer to this class of problems as mean field optimal transport (MFOT for short).1 Given two distributions \(\rho_{0}\) and \(\rho_{T}\in\mathcal{P}_{2}(\mathcal{Q})\), the goal is to find a feedback control \(v^{*}:\mathcal{Q}_{T}\to\mathbb{R}^{k}\) minimizing Footnote 1: By analogy with MFG of planning type, we could also call such problems “MFC of planning type”. But referring to “optimal transport” seems clearer so we will stick to the MFOT terminology. \[J^{MFOT}:v\mapsto\mathbb{E}\left[\int_{0}^{T}f(X_{t}^{v},\mu^{v}(t),v(t,X_{t}^ {v}))\mathrm{d}t\right], \tag{3}\] where \(\mu^{v}(t)\) is the distribution of \(X_{t}^{v}\), under the constraint that the process \(X^{v}=(X_{t}^{v})_{t\geq 0}\) solves the SDE \[\begin{cases}X_{0}^{v}\sim\rho_{0},\qquad X_{T}^{v}\sim\rho_{T}\\ dX_{t}^{v}=b(X_{t}^{v},\mu^{v}(t),v(t,X_{t}^{v}))\mathrm{d}t+\sigma\mathrm{d}W _{t},\qquad t\geq 0.\end{cases} \tag{4}\] We stress that the terminal constraint implicitly restricts the class of admissible controls since we are interested in minimizing only over controls \(v\) that make \(X_{T}^{v}\) have distribution \(\rho_{T}\). 
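For concreteness, a problem instance is fully specified by the data \((b,f,\sigma,\rho_{0},\rho_{T})\), and the numerical methods of Section 3 only access it through these primitives. The sketch below shows one possible encoding; the Gaussian choices and the names are ours and are purely illustrative, with the drift and running cost corresponding to the optimal transport setting of Example 1 below.

```python
# Minimal encoding of an MFOT instance (rho_0, rho_T, b, f, sigma); illustrative only.
# Here b(x, mu, a) = a and f(x, mu, a) = |a|^2 / 2 with sigma = 0 (cf. Example 1 below);
# the mean field argument mu is represented by a batch of particle positions.
import torch

d = 2                                          # state dimension
sigma = 0.0                                    # diffusion coefficient

def sample_rho0(n):                            # initial distribution rho_0 = N(-2, 0.3^2 I)
    return -2.0 + 0.3 * torch.randn(n, d)

def sample_rhoT(n):                            # target distribution rho_T = N(+2, 0.3^2 I)
    return 2.0 + 0.3 * torch.randn(n, d)

def b(x, mu, a):                               # drift b(x, mu, a) = a
    return a

def f(x, mu, a):                               # running cost f(x, mu, a) = |a|^2 / 2
    return 0.5 * (a ** 2).sum(dim=-1)
```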
We now present a few useful examples, some of which will be revisited in the numerical experiments (see Section 4). **Example 1** (Optimal transport).: _When \(b(x,\mu,a)=a\), \(f(x,\mu,a)=\frac{1}{2}a^{\top}a\) and \(\sigma=0\), the MFOT problem reduces to a standard OT problem. See e.g. [10]._ **Example 2** (Linear-quadratic).: _Take \(b(x,\mu,a)=Ax+\bar{A}\bar{\mu}+Ba\), \(f(x,\mu,a)=x^{\top}Qx+\bar{\mu}^{\top}\bar{Q}\bar{\mu}+a^{\top}Ra\), and \(g(x,\mu)=x^{\top}Q_{T}x+\bar{\mu}^{\top}\bar{Q}_{T}\bar{\mu}\), where \(\bar{\mu}=\int\xi\mu(\mathrm{d}\xi)\), where \(A,\bar{A},B,Q,\bar{Q},R,Q_{T}\) and \(\bar{Q}_{T}\) are matrices of suitable sizes. In this setting, the MFC problem has an explicit solution, up to solving a forward-backward system of ODEs. Furthermore, if the initial distribution is Gaussian, then the optimal flow of distribution remains Gaussian. See e.g. [14, Chapter 6]. To the best of our knowledge, in the MFOT setting, a similar result is available in the literature only when \(Q=\bar{Q}=0\), which corresponds to the Schrodinger bridge problem. See [29, Section 7.1]._ **Example 3** (Crowd motion with congestion).: _Take \(b(x,\mu,a)=a\), \(f(x,\mu,a)=(c+\rho\star\mu(x))^{\gamma}|a|^{2}+\ell(x,\mu(x))\), where \(c\geq 0\) is a constant, \(\rho\) is a regularizing kernel and \(\star\) denotes the convolution. For \(\gamma=0\), the model is linear-quadratic in the control. If \(\gamma>0\), the cost of moving increases with the density surrounding the agent, which represents the fact that the "energy" spent to move is higher in regions with higher density. This models a congestion effect. The last term in \(f\) can be used to represent crowd aversion if \(\ell\) is increasing with respect to \(\mu(x)\), and it can be used to represent spatial preferences by taking for instance \(\ell(x,\mu(x))=|x_{*}-x|^{2}\), where \(x_{*}\) is a preferred position. The terminal cost \(g\) can also be used to represent crowd aversion or spatial preferences. See e.g. [3, 4] for more details on the analysis of the MFC PDE system for this class of models and [5] for numerical aspects. When \(\ell=0\) and \(\sigma=0\), the corresponding MFOT problem has been studied e.g. in [19]. Similar models have also been studied in the context of MFGs, see e.g. [6, 2]._ ## 3 Numerical methods In this section, we introduce three different numerical methods to solve MFOT. Section 3.1 introduces a direct approach to solve a MFC problem that approximates the MFOT problem. Section 3.2 discusses the Deep Galerkin Method (DGM) to solve the underlying PDE system that characterizes the optimal solution to MFOT, which is composed of a coupled Hamilton-Jacobi-Bellman equation and a Kolomogrov-Fokker-Planck equation. Section 3.3 introduces the DeepADMM algorithm that solves a variational reformulation of the MFOT problem based on an augmented Lagrangian approach. ### Direct approach for the optimal control formulation We first introduce the direct approach, which does not require any derivation of optimality conditions. In order to make the problem numerically tractable, we make approximations on several levels. Motivated by the deep learning method for MFC problems proposed in [22] (see also the first algorithm in [24]), we first approximate the MFOT problem (3) by an MFC problem in which a terminal penalty is incurred based on the distance between the terminal distribution and the target distribution. 
We can then apply the algorithm of [22], which trains a neural network to learn the optimal control of the MFC problem. This method itself relies on three approximations. #### 3.1.1 Problem Approximation Instead of directly tackling the MFOT problem (3), we first consider the following MFC problem as an approximation of the original problem: Find a feedback control \(v^{*}:\mathcal{Q}_{T}\rightarrow\mathbb{R}^{k}\) minimizing (1) under the constraint (2) when the terminal cost is: \[g(x,\mu)=G(\mathcal{W}_{2}(\mu,\rho_{T})),\qquad\mu\in\mathcal{P}_{2}( \mathcal{Q}). \tag{5}\] where \(G:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is an increasing function and \(\mathcal{W}_{2}\) denotes the Wasserstein distance on \(\mathcal{P}_{2}(\mathcal{Q})\). A typical example that we will use in the experiments is a linear function. The purpose of introducing \(G(\mathcal{W}_{2}(\mu,\rho_{T}))\) is to add a penalty that enforces the planning constraint for the terminal distribution. Here we focus on the Wasserstein distance because of its connection with optimal transport, see e.g. [10, 55], although other similarity measures could be used. In our numerical experiments, we will take an increasing linear function for \(G\). Then, we use the following approximations: * Since it is not possible to optimize overall feedback controls, we restrict the space of controls to the space of neural networks with a given architecture. We will denote by \(v_{\theta}\) a representative neural network of this class with parameter \(\theta\). The problem becomes a finite-dimensional optimization problem, in which the goal is to find a value for the parameter \(\theta\) that minimizes the loss \(J(v_{\theta})\), i.e., the total cost of the MFC problem when using control \(v_{\theta}\). * Since it is not possible to represent the mean field state \(\mu^{v_{\theta}}(t)\) or to compute its evolution exactly, we approximate it by the empirical distribution \(\bar{\mu}^{N,v_{\theta}}(t)=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{t}^{i,v_{ \theta}}}\), where each \(X^{i,v_{\theta}}\) is a solution of, \[\begin{cases}X_{0}^{i,v_{\theta}}\sim\rho_{0}\quad\text{ i.i.d.}\\ dX_{t}^{i,v_{\theta}}=b(X_{t}^{i,v_{\theta}},\bar{\mu}^{N,v_{\theta}}(t),v_{ \theta}(t,X_{t}^{i,v_{\theta}}))\mathrm{d}t+\sigma\mathrm{d}W_{t}^{i},\qquad t \geq 0,\end{cases}\] (6) where \((W^{i})_{i=1,\ldots,N}\) is a family of \(N\) independent \(d\)-dimensional Brownian motions, which represent idiosyncratic noises affecting each particle independently. All the SDEs are based on the same control function \(v_{\theta}\). * Last, in order to be able to compute these dynamics using Monte Carlo simulations, we discretize the time variable \(t\). Letting \(N_{T}\) be a number of regular time steps of length \(\Delta t=T/N_{T}\), we replace the interval \([0,T]\) by the time steps \(\{t_{0}=0,t_{1}=\Delta t,\ldots,t_{N_{T}}=N_{T}\Delta t\}\). The time steps are \(t_{n}=n\Delta t\), \(n=0,\ldots,N_{T}\). We then approximate the SDE system (6) using an Euler-Maruyama scheme. 
The family of trajectories \(((X_{t}^{i,v_{\theta}})_{t\in[0,T]})_{i=1,\ldots,N}\) is approximated by the family of sequences \(((X_{t_{n}}^{i,v_{\theta},N_{T}})_{n=0,\ldots,N_{T}})_{i=1,\ldots,N}\) satisfying: \[\begin{cases}X_{0}^{i,v_{\theta},N_{T}}\sim\rho_{0}&\text{i.i.d.}\\ X_{t_{n+1}}^{i,v_{\theta},N_{T}}=X_{t_{n}}^{i,v_{\theta},N_{T}}+b(X_{t_{n}}^{i, v_{\theta},N_{T}},\bar{\mu}_{t_{n}}^{N,v_{\theta},N_{T}},v_{\theta}(t_{n},X_{t_{n}}^ {i,v_{\theta},N_{T}}))\Delta t+\sigma\Delta W_{n}^{i},\end{cases}\] (7) where \(\bar{\mu}_{t_{n}}^{N,v_{\theta},N_{T}}=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{t_ {n}}^{i,v_{\theta},N_{T}}}\) is the empirical distribution associated with the samples \(X_{t_{n}}^{i,\theta,N_{T}}\). Here, \((\Delta W_{n}^{i})_{i=1,\ldots,N,n=0,\ldots,N_{T}-1}\) are independent Gaussian random variables with variance \(\Delta t\). To summarize, the new problem is to find \(\theta^{*}\) minimizing: \[J^{N,N_{T}}(\theta)=\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}\sum_{n=0}^{N_{T }-1}f(X_{t_{n}}^{i,\theta,N_{T}},\bar{\mu}_{t_{n}}^{N,\theta,N_{T}},v_{\theta} (t_{n},X_{t_{n}}^{i,\theta,N_{T}}))\Delta t+g(X_{T}^{N,\theta,N_{T}},\bar{\mu} _{T}^{N,\theta,N_{T}})\right]\] subject to the dynamics (7). The full analysis of this problem and its rigorous connection with the original MFOT problem (3) is beyond the scope of this paper and is left for future work. We expect the control \(v_{\theta^{*}}\), with the parameter value that is optimal for the above problem, to be approximately optimal for (3), under suitable assumptions on \(b\) and \(f\). In particular, \(b\) and \(f\) should probably depend smoothly on the distribution so that they can be evaluated in a meaningful way at the empirical distribution \(\bar{\mu}_{t_{n}}^{N,\theta,N_{T}}\). #### 3.1.2 Description of the algorithm **Optimization method.** To find an approximate minimizer, we use stochastic gradient descent (SGD) or one of its variants. At iteration \(k\), we have a parameter \(\theta_{k}\) that we wish to update. We sample the initial positions \((X_{0}^{i,v_{\theta_{k}},N_{T}})_{i=1,\ldots,N}\) and the Brownian motion increments \((\Delta W_{n}^{i})_{i=1,\ldots,N,n=0,\ldots,N_{T}-1}\). We then compute the empirical cost for this realization of the \(N\)-particle population, and use its gradient with respect to \(\theta_{k}\) to update the parameter. In other words, we apply SGD to the following loss function: \[\mathcal{L}(\theta)=J^{N,N_{T}}(\theta)=\mathbb{E}_{S}[\mathcal{L}(\theta;S)],\] with: \[\mathcal{L}(\theta;S)=\frac{1}{N}\sum_{i=1}^{N}\sum_{n=0}^{N_{T}-1}f(X_{t_{n} }^{i,\theta,N_{T}},\bar{\mu}_{t_{n}}^{N,\theta,N_{T}},v_{\theta}(t_{n},X_{t_{ n}}^{i,\theta,N_{T}}))\Delta t+g(X_{T}^{N,\theta,N_{T}},\bar{\mu}_{T}^{N,\theta,N_{T}})\] where \(S=\left((X_{0}^{i,v_{\theta},N_{T}})_{i=1,\ldots,N},(\Delta W_{n}^{i})_{i=1, \ldots,N,n=0,\ldots,N_{T}-1}\right)\) denotes one random sample. **Computation of the Wasserstein distance.** As shown in (5), the new problem we considered involves a Wasserstein distance between two continuous distributions, namely, the mean field distribution at terminal time \(\mu_{T}^{v}\) and the target distribution \(\rho_{T}\). This is in general hard to compute. However, in our implementation, the mean field distribution is approximated by an empirical distribution obtained by Monte Carlo simulations, as is explained above. We then sample the same number of points from the target distribution and compute the Wasserstein distance between the two empirical distributions. 
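Before detailing the distance computation, the resulting training step can be summarized by the following sketch (our own illustrative implementation rather than the exact one used in the experiments): it rolls out the particle system (7) with a neural-network control \(v_{\theta}\), accumulates the running cost, adds the terminal penalty \(G(\mathcal{W}_{2})=C_{W}\mathcal{W}_{2}\) between the terminal particles and samples from \(\rho_{T}\) using an entropic (Sinkhorn) approximation, and takes one SGD step. Here \(v_{\theta}\) is assumed to be a feedforward network mapping \((t,x)\) to \(\mathbb{R}^{d}\), and \(b\) and \(f\) are callables that take the current particle batch as the population argument.

```python
# One training step of the direct approach (illustrative sketch).
# v_theta: feedforward network mapping (t, x) in R^{1+d} to a control in R^d.
# b, f: drift and running cost, called with the particle batch as the population argument.
import torch

def sinkhorn_w2(x, y, alpha=0.05, iters=100):
    """Entropic (Sinkhorn) approximation of W_2 between two equal-size point clouds;
    alpha is the regularization strength relative to the largest pairwise cost."""
    n = x.shape[0]
    M = torch.cdist(x, y) ** 2                    # cost matrix |x_i - y_j|^2
    K = torch.exp(-M / (alpha * M.max()))
    u = torch.full((n,), 1.0 / n)
    v = torch.full((n,), 1.0 / n)
    for _ in range(iters):                        # Sinkhorn-Knopp scaling iterations
        u = (1.0 / n) / (K @ v + 1e-30)
        v = (1.0 / n) / (K.t() @ u + 1e-30)
    plan = u[:, None] * K * v[None, :]            # approximate transport plan
    return torch.sqrt((plan * M).sum())

def training_step(v_theta, optimizer, b, f, sample_rho0, sample_rhoT,
                  N=512, NT=50, T=1.0, sigma=0.1, C_W=10.0):
    dt = T / NT
    X = sample_rho0(N)                            # X_0^i ~ rho_0, shape (N, d)
    cost = 0.0
    for k in range(NT):
        t = torch.full((N, 1), k * dt)
        a = v_theta(torch.cat([t, X], dim=1))     # control v_theta(t_n, X_n^i)
        cost = cost + f(X, X, a).mean() * dt      # running cost, empirical mean field
        X = X + b(X, X, a) * dt + sigma * dt**0.5 * torch.randn_like(X)
    loss = cost + C_W * sinkhorn_w2(X, sample_rhoT(N))   # terminal penalty G = C_W * W_2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

The entropic computation of the Wasserstein penalty used above is detailed in what follows.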
This is done in the following way. Let \(X\) and \(Y\) be two sets of \(N\) points each sampled from distributions \(\mu,\nu\), \(M_{p}\) the distance matrix, \((M_{p})_{ij}=|X_{i}-Y_{j}|^{p}\), and the following set: \[U_{N}=\left\{A\in\mathbb{R}^{N\times N}\Big{|}\sum_{j=1}^{N}A_{ij}=\sum_{i=1}^ {N}A_{ij}=\frac{1}{N}\right\}. \tag{8}\] Then \[\left(\mathcal{W}_{p}\left(\mu,\nu\right)\right)^{p}=\lim_{N\to\infty}\min_{T \in U_{N}}\left\langle T,M_{p}\right\rangle.\] In order to efficiently compute the Wasserstein distance, we follow the algorithm proposed by Cuturi in [30]. We consider an extra entropy regularization of the following form. Let \(\alpha>0\), we want to find \(T_{\alpha}^{*}\), which is the solution to the following program: \[\min_{T\in U_{N}}\left\langle T,M_{p}\right\rangle-\alpha\left\langle T\log( T),1\right\rangle.\] Optimality conditions and Sinkhorn-Knopp [56] theorem give us the existence and uniqueness of the solution, as well as a unique decomposition of \(T_{\alpha}^{*}\) using two vectors \(u\) and \(v\) such that: \[T_{\alpha}^{*}=\mathrm{Diag}(u)\exp\left(-\frac{M_{p}}{\alpha} \right)\mathrm{Diag}(v).\] We can then compute \(u\) and \(v\) with Sinkhorn-Knopp algorithm. Further explanations on this algorithm can be found in [30]. This method allows for fast computations and is easy to export to greater dimensions, at the cost of adding a layer of approximation due to the extra parameter \(\alpha\). It can be noticed that as \(\alpha\) tends to zero, the regularized solution tends to the solution of discrete optimal transport. In practice, reducing \(\alpha\) to zero increases the number of iterations required for Sinkhorn algorithm to converge. However, in our numerical experiments we usually obtain good results with a small but non-zero \(\alpha\). **Remark 1**.: _Notice that using our approach, we have one empirical distribution and one continuous distribution. Indeed, we have the empirical distribution obtained by Monte Carlo simulation and the target distribution \(\rho_{T}\), which is generally given by a closed-form formula for its density. We could thus try to use the designated methods, such as Semi-discrete Optimal transport [47]. While being more accurate, these methods do not scale well in higher dimensions compared to Sinkhorn's alternative._ **Terminal penalty.** In our implementation, we take \(G\) as a linear function \(G(r)=C_{W}r\), where \(C_{W}\) is a positive constant that weighs the importance of the terminal penalty in comparison with the running cost. This leads to a trade-off between minimizing the running cost and satisfying the terminal constraint. We noticed that when \(C_{W}\) is too small, the algorithm minimizes the running cost without much consideration for the terminal condition and hence the terminal distribution is far from the target distribution. Therefore, the penalization has to be a significant component of the total loss if we want the terminal planning constraint to be approximately satisfied with good accuracy. ### Deep Galerkin Method for the PDE system We now turn our attention to a method based on solving a forward-backward PDE system that characterizes the solution. We first discuss the PDE system and then use a deep learning method to solve this system. #### 3.2.1 PDE system for MFOT As recalled above, in a standard MFC, the whole population uses a given feedback control \(v\). 
Assuming that the distribution \(\mu_{t}^{v}=\mathcal{L}(X_{t}^{v})\) of a representative agent with dynamics (2) admits a smooth enough density \(m_{t}\), the latter satisfies the Kolmogorov-Fokker-Planck (KFP) PDE: \[\begin{cases}&\frac{\partial m}{\partial t}(t,x)-\nu\Delta m(t,x)+\operatorname{ div}\bigl{(}m(t,x)b(x,m(t,\cdot),v(t,x))\bigr{)}=0\qquad t\in(0,T],x\in\mathcal{Q}\\ &m(0,x)=m_{0}(x),\qquad x\in\mathcal{Q},\end{cases}\] where \(m_{0}\) is the density of the initial distribution \(\rho_{0}\) and \(\nu=\frac{\sigma^{2}}{2}\). The MFC problem (1) can then be viewed as an optimal control problem driven by the above KFP PDE. Under suitable conditions, the optimal control can be characterized through an adjoint PDE, which can be derived for instance via calculus of variations. See _e.g._[14, Chapter 4] for more details. Let \(H:\mathcal{Q}\times L^{2}(\mathcal{Q})\times\mathbb{R}^{d}\to\mathbb{R}\) be the Hamiltonian of the control problem faced by an infinitesimal agent in the first point above, which is defined by: \[H:(x,m,p)\mapsto H(x,m,p)=\max_{v\in\mathbb{R}^{k}}\{-L(x,m,v,p)\}, \tag{9}\] where \(m\) denotes the density of \(\mu\), \(L:\mathcal{Q}\times L^{2}(\mathcal{Q})\times\mathbb{R}^{k}\times\mathbb{R}^{ d}\to\mathbb{R}\) is the Lagrangian, defined by: \[L:(x,m,v,p)\mapsto L(x,m,v,p)=f(x,m,v)+\langle b(x,m,v),p\rangle. \tag{10}\] A necessary condition for the existence of an optimal control \(v^{*}\) is that: \[v^{*}(t,x)=\operatorname*{argmax}_{v\in\mathbb{R}^{k}}\big{\{}-L(x,m(t,\cdot ),v,\nabla u(t,x))\big{\}},\] where \((u,m)\) solve the following system of partial differential equations: \[0=-\frac{\partial u}{\partial t}(t,x)-\nu\Delta u(t,x)+H(x,m(t,\cdot),\nabla u (t,x))\] \[\qquad+\int_{\mathcal{Q}}\frac{\partial H}{\partial m}(\zeta,m(t, \cdot),\nabla u(t,\zeta))(x)m(t,\zeta)\mathrm{d}\zeta, \text{in }(0,T]\times\mathcal{Q}, \tag{11a}\] \[0=\frac{\partial m}{\partial t}(t,x)-\nu\Delta m(t,x)- \operatorname{div}\Bigl{(}m(t,x)\partial_{p}H(x,m(t,\cdot),\nabla u(t,x)) \Bigr{)}, \text{in }[0,T)\times\mathcal{Q},\] (11b) \[u(T,x)=g(x,m(T,\cdot))+\int_{\mathcal{Q}}\frac{\partial g}{ \partial m}(\zeta,m(T,\cdot))(x)m(T,\zeta)\mathrm{d}\zeta, \text{in }\mathcal{Q},\] (11c) \[m(0,x)=m_{0}(x), \text{in }\mathcal{Q}. \tag{11d}\] The partial derivatives with respect to \(m\) appear in the backward PDE due to the fact that the population distribution changes when the control changes. These partial derivatives with respect to \(m\) should be understood in the following sense: if \(\varphi:L^{2}(\mathbb{R}^{d})\to\mathbb{R}\) is differentiable, \[\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\varphi(m+\varepsilon\tilde{m})(x)_{ \big{|}\varepsilon=0}=\int_{\mathbb{R}^{d}}\frac{\partial\varphi}{\partial m }(m)(\zeta)\tilde{m}(\zeta)\mathrm{d}\zeta.\] We refer to e.g. [14, Chapter 4] for more details and for the derivation using calculus of variations, which clarifies why the partial derivatives with respect to \(m\) appear. If the cost functions and the drift function depend on the density only locally (i.e., only on the density at the current position of the agent), \(\frac{\partial}{\partial m}\) becomes a derivative in the usual sense. In this PDE system, \(m\) plays the role of the MFC problem's state. The forward equation is a Kolmogorov-Fokker-Planck (KFP) equation which describes the evolution of the mean field distribution. The other unknown function, \(u\), plays the role of an adjoint state. 
Although the backward PDE has the form of a Hamilton-Jacobi-Bellman (HJB) equation, \(u\) cannot, in general, be interpreted as the value function associated to problem (1) because the value function depends on the population distribution, see e.g. [44, 15]. We refer the interested reader to e.g. [14, Chapters 3 and 4] for the comparison with the MFG PDE system, in which the terms involving a derivative with respect to \(m\) are absent, and \(u\) can be interpreted as the value function of an infinitesimal player. Now, for the MFOT problem, we can proceed formally in a similar way. We derive an analogous PDE system, except that the terminal condition for \(u\) disappears, and a terminal condition for \(m\) is added to the system. More precisely, we (formally) obtain the following PDE system: \[0 =-\frac{\partial u}{\partial t}(t,x)-\nu\Delta u(t,x)+H(x,m(t, \cdot),\nabla u(t,x))\] \[\qquad+\int_{\mathcal{Q}}\frac{\partial H}{\partial m}(\zeta,m(t, \cdot),\nabla u(t,\zeta))(x)m(t,\zeta)\mathrm{d}\zeta, \text{in }(0,T]\times\mathcal{Q}, \tag{12a}\] \[0 =\frac{\partial m}{\partial t}(t,x)-\nu\Delta m(t,x)-\mathrm{div} \Big{(}m(t,x)\partial_{p}H(x,m(t,\cdot),\nabla u(t,x))\Big{)}, \text{in }[0,T)\times\mathcal{Q},\] (12b) \[m(0,x)=m_{0}(x),\qquad m(T,x)=m_{T}(x) \text{in }\mathcal{Q}. \tag{12c}\] where \(m_{0}\) and \(m_{T}\) are respectively the densities of \(\rho_{0}\) and \(\rho_{T}\). To the best of our knowledge, this PDE system has not been derived nor analyzed in a general setting. Notice that even the existence of a solution is a non trivial question due to the fact that there is both an initial and a terminal constraint on the density. However, this system has been analyzed in special cases corresponding to optimal transport [10, 19] or to MFGs with planning [1, 50, 51, 34]. In the numerical examples of Section 4, we will mostly focus on cases that have been previously studied, such as standard optimal transport or MFOT with congestion effects captured by the running cost. #### 3.2.2 Description of the algorithm To solve the PDE system (12), we follow the idea of the Deep Galerkin Method (DGM) introduced by Sigignano and Spiliopoulos [57] and adapted to the MFG and PDE systems in [7, 23, 24]. The main motivation underlying this approach is to learn the PDE solutions using parameterized functions. This avoids computing the functions on a mesh, which is not feasible in high dimensions. In the DGM, we replace the function(s) solving the PDE(s) with a neural network(s), which are trained to minimize the PDE residual(s) as well as the boundary condition(s). To be specific, in our setting, we replace the functions \(m\) and \(u\) with neural networks, denoted by \(m_{\theta}\) and \(u_{\omega}\) and parameterized by \(\theta\) and \(\omega\) respectively. When the state \(x\) is in high dimension, i.e., \(d\) is large, we expect \(m_{\theta}\) and \(u_{\omega}\) to provide good approximations of \(m\) and \(u\) using much fewer parameters than the number of points in a grid. Furthermore, for the numerical implementation, we restrict our attention to a compact domain \(\tilde{\mathcal{Q}}\). We denote \(\tilde{\mathcal{Q}}_{T}=[0,T]\times\tilde{\mathcal{Q}}\). We expect the density to have a negligible mass outside a compact set so that by solving the PDE system on a large enough compact set, we obtain a good approximation of the solution, at least in the region where the density is significantly positive. 
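Any smooth parameterization can be used here; a minimal sketch of one possible choice, with \(m_{\theta}\) constrained to be non-negative through a softplus output, is given below. The architecture and sizes are illustrative choices of ours, and the original DGM architecture of [57] can of course be used instead.

```python
# Minimal parameterizations of m_theta and u_omega as fully connected networks (sketch).
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, dim, width=64, positive=False):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
        self.positive = positive                  # enforce non-negativity for the density

    def forward(self, t, x):                      # t: (batch, 1), x: (batch, dim)
        out = self.net(torch.cat([t, x], dim=1)).squeeze(-1)
        return nn.functional.softplus(out) if self.positive else out

m_theta = MLP(dim=2, positive=True)               # approximates the density m
u_omega = MLP(dim=2)                              # approximates the adjoint u
```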
We then define the loss function: \[\mathcal{L}(\theta,\omega)=\mathcal{L}^{(\mathrm{KFP})}(m_{\theta},u_{\omega} )+\mathcal{L}^{(\mathrm{HJB})}(m_{\theta},u_{\omega}),\] where, for any \((m,u)\in\mathcal{C}^{1,2}(\tilde{\mathcal{Q}}_{T})\times\mathcal{C}^{1,2}( \tilde{\mathcal{Q}}_{T})\), the two losses are as: \[\mathcal{L}^{(\mathrm{KFP})}(m,u) =C^{(\mathrm{KFP})}\left\|\frac{\partial m}{\partial t}-\nu\Delta m -\mathrm{div}\Big{(}m\partial_{p}H(m,\nabla u)\Big{)}\right\|_{L^{2}(\tilde{ \mathcal{Q}}_{T})}^{2}\] \[\qquad+C_{0}^{(\mathrm{KFP})}\left\|m(0,\cdot)-m_{0}\right\|_{L^ {2}(\tilde{\mathcal{Q}})}^{2}+C_{T}^{(\mathrm{KFP})}\left\|m(T,\cdot)-m_{T} \right\|_{L^{2}(\tilde{\mathcal{Q}})}^{2}, \tag{13}\] and \[\mathcal{L}^{(\mathrm{HJB})}(m,u)=C^{(\mathrm{HJB})}\Big{\|}-\frac{\partial u }{\partial t}-\nu\Delta u+H(m,\nabla u)+\int_{\mathcal{Q}}\frac{\partial H}{ \partial m}(\zeta,m(t,\cdot),\nabla u(t,\zeta))(\cdot)m(t,\zeta)\mathrm{d} \zeta\Big{\|}_{L^{2}(\tilde{\mathcal{Q}}_{T})}^{2}.\] Here, \(C^{(\mathrm{KFP})},C_{0}^{(\mathrm{KFP})},C_{T}^{(\mathrm{KFP})},C^{(\mathrm{ HJB})}\) are positive weights that give more or less importance to each component. If the space domain is bounded, we must include more penalty terms. Note that any smooth enough solution \((m,u)\) to the PDE system (12) makes \(\mathcal{L}^{(\mathrm{KFP})}\) and \(\mathcal{L}^{(\mathrm{HJB})}\) vanish. The goal is to find two neural networks which approximately minimize these losses. Since it is not possible to compute exactly the above residuals, we approximate the \(L^{2}\) norms using Monte Carlo samples. For example, we rewrite: \[\left\|\frac{\partial m}{\partial t}-\nu\Delta m-\mathrm{div} \Big{(}m\partial_{p}H(m,\nabla\,u)\Big{)}\right\|_{L^{2}(\tilde{\mathcal{Q}}_{ T})}^{2}\] \[=C(\tilde{\mathcal{Q}}_{T})\cdot\mathbb{E}_{\tau,\xi}\left[ \left|\frac{\partial m}{\partial t}(\tau,\xi)-\nu\Delta m(\tau,\xi)-\mathrm{ div}\Big{(}m(\tau,\xi)\partial_{p}H(m(\tau),\nabla u(\tau,\xi))\Big{)}\right|^{2} \right],\] where \((\tau,\xi)\) follows a uniform distribution over \(\tilde{\mathcal{Q}}_{T}\), and \(C(\tilde{\mathcal{Q}}_{T})\) is a normalizing constant that depends on the domain. Likewise, for the other norms, it would also be possible to use different norms and different distributions to sample \((\tau,\xi)\). But for the sake of simplicity, we will stick to this setting for the present work. 
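For concreteness, the sketch below shows how the interior part of \(\mathcal{L}^{(\mathrm{KFP})}\) can be estimated at uniformly sampled collocation points with automatic differentiation, in the special case \(b(x,m,a)=a\) and \(f(x,m,a)=\frac{1}{2}|a|^{2}\), for which \(\partial_{p}H(m,\nabla u)=\nabla u\). The function names and domain bounds are ours, and the boundary and HJB terms are treated analogously.

```python
# Monte Carlo estimate of the interior KFP residual of (12b) at sampled points (sketch).
# Assumes m_net(t, x) and u_net(t, x) take tensors of shape (batch, 1) and (batch, dim)
# and return a tensor of shape (batch,), as in the MLP sketch above.
import torch

def kfp_residual(m_net, u_net, t, x, nu):
    """Pointwise residual  dm/dt - nu * Lap(m) - div(m * grad u)  at (t, x)."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    m, u = m_net(t, x), u_net(t, x)
    grad = lambda y, z: torch.autograd.grad(y.sum(), z, create_graph=True)[0]
    m_t = grad(m, t).squeeze(-1)                          # dm/dt
    m_x, u_x = grad(m, x), grad(u, x)                     # grad_x m, grad_x u
    flux = m.unsqueeze(-1) * u_x                          # m * d_p H(m, grad u) = m * grad u
    div_flux = sum(grad(flux[:, i], x)[:, i] for i in range(x.shape[1]))
    lap_m = sum(grad(m_x[:, i], x)[:, i] for i in range(x.shape[1]))
    return m_t - nu * lap_m - div_flux

def kfp_interior_loss(m_net, u_net, T=1.0, box=4.0, batch=256, nu=0.05, dim=2):
    tau = T * torch.rand(batch, 1)                        # uniform times in [0, T]
    xi = box * (2.0 * torch.rand(batch, dim) - 1.0)       # uniform points in [-box, box]^dim
    return (kfp_residual(m_net, u_net, tau, xi, nu) ** 2).mean()
```

With such samples in hand, the full loss takes the probabilistic form given next.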
We obtain the following probabilistic formulation of the loss function \(\mathcal{L}\): \[\mathcal{L}(\theta,\omega)=\mathbb{E}_{S}\left[\mathcal{L}(\theta,\omega;S) \right],\qquad\mathcal{L}(\theta,\omega;S)=\mathcal{L}^{(\mathrm{KFP})}(m_{ \theta},u_{\omega};S)+\mathcal{L}^{(\mathrm{HJB})}(m_{\theta},u_{\omega};S),\] where \(S=(\tau,\xi,\xi_{0},\xi_{T})\in[0,T]\times\tilde{\mathcal{Q}}\times\tilde{ \mathcal{Q}}\times\tilde{\mathcal{Q}}\) denotes one sample, and for any \((m,u)\in\mathcal{C}^{1,2}(\mathcal{Q}_{T})\times\mathcal{C}^{1,2}(\mathcal{Q}_ {T})\), the two losses at \(S\) are defined as: \[\mathcal{L}^{(\mathrm{KFP})}(m,u;S)=C^{(\mathrm{KFP})}\left| \frac{\partial m}{\partial t}(\tau,\xi)-\nu\Delta m(\tau,\xi)-\mathrm{div} \Big{(}m(\tau,\xi)\partial_{p}H(m(\tau),\nabla u(\tau,\xi))\Big{)}\right|^{2}\] \[\qquad\qquad\qquad\qquad\qquad+C_{0}^{(\mathrm{KFP})}|m(0,\xi_{0 })-m_{0}(\xi_{0})|^{2}+C_{T}^{(\mathrm{KFP})}|m(T,\xi_{T})-m_{T}(\xi_{T})|^{2},\] and \[\mathcal{L}^{(\mathrm{HJB})}(m,u;S)=C^{(\mathrm{HJB})}\Big{|}- \frac{\partial u}{\partial t}(\tau,\xi)-\nu\Delta u(\tau,\xi)+H(\xi,m( \tau),\nabla u(\tau,\xi))\] \[\qquad\qquad\qquad\qquad+\int_{\mathcal{Q}}\frac{\partial H}{ \partial m}(\zeta,m(t,\cdot),\nabla u(t,\zeta))(\xi)m(t,\zeta)\mathrm{d}\zeta \Big{|}^{2}.\] Finally, to optimize over \((\theta,\omega)\), we use SGD (or one of its variants) on the loss \(\mathcal{L}\). In practice, we use a mini-batch of samples at each iteration, which amounts to approximating the expectation by an empirical average over several samples.

### Augmented Lagrangian Method with Deep Learning

In this subsection, we present an approach based on a primal-dual formulation of the MFOT problem. We then introduce a deep learning adaptation of the alternating direction method of multipliers. We focus on the case when the interactions are local, and the drift is the control.

#### 3.3.1 Primal and dual problems

Under suitable assumptions, the MFOT problem admits a variational formulation, which can be tackled using a direct optimization approach. As in the previous subsection, we assume that \(\rho_{0}\) and \(\rho_{T}\) have respectively density \(m_{0}\) and \(m_{T}\). We focus on a model with local interactions, meaning that an agent at state \(x\) interacts with the density of the population at \(x\). To lighten the presentation, we will use the same notations for the costs and the drift functions, but now their second input is a real number \(m\) instead of an element \(\mu\in\mathcal{P}_{2}(\mathcal{Q})\). So we have \(f:\mathcal{Q}\times\mathbb{R}\times\mathbb{R}^{k}\to\mathbb{R}\), \(g:\mathcal{Q}\times\mathbb{R}\to\mathbb{R}\), and \(b:\mathcal{Q}\times\mathbb{R}\times\mathbb{R}^{k}\to\mathbb{R}^{d}\). We also modify accordingly the definitions of the Hamiltonian \(H\) in (9) and the Lagrangian \(L\) in (10) in Subsection 3.2. We further assume that \(f(x,m,v)\) is convex in \(v\) for every \((x,m)\), and that \(mf(x,m,v)\) is convex in \(m\) for every \((x,v)\). For simplicity, we consider that \(b(x,m,v)=v\), i.e., the drift is the control. We remark that the setting here is not restrictive and is satisfied by a large class of problems.
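As a simple illustration of these assumptions (this worked check is ours and is not needed in what follows), consider the local congestion cost used later in Section 4.2, \(f(x,m,v)=R\,(c+m)^{\gamma}|v|^{2}\) with \(R,c>0\) and \(\gamma\geqslant 0\). The map \(v\mapsto f(x,m,v)\) is a positive quadratic form, and for \(m\geqslant 0\) a direct computation gives \[\frac{\partial^{2}}{\partial m^{2}}\Big{(}m\,f(x,m,v)\Big{)}=R\,\gamma\,(c+m)^{\gamma-2}\big{(}2c+(1+\gamma)m\big{)}\,|v|^{2}\;\geqslant\;0,\] so \(m\mapsto mf(x,m,v)\) is convex, and both assumptions hold.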
**Primal problem.** The MFOT problem (3) introduced in Section 2 is formally equivalent to the following PDE-constrained optimization problem: \[\inf_{v:\mathcal{Q}_{T}\rightarrow\mathbb{R}^{k}}\int_{\mathcal{Q}_{ T}}f\big{(}x,m(t,x),v(t,x)\big{)}m(t,x)\mathrm{d}x\,\mathrm{d}t\] \[\text{subject to} \frac{\partial m}{\partial t}(t,x)-\nu\Delta m(t,x)+\operatorname{ div}\bigl{(}m(t,x)v(t,x)\bigr{)}=0\qquad t\in(0,T],x\in\mathcal{Q}\] \[m(0,x)=m_{0}(x),\qquad m(T,x)=m_{T}(x) \tag{14}\] The PDE constraint is the KFP equation corresponding to the stochastic dynamics in (4). Note that the formulation in terms of \((m,v)\), while intuitive, is not convex in general. For this reason, we consider an equivalent formulation in terms of \((m,z)=(m,mv)\). We define: \[\tilde{f}(x,m,z)=\begin{cases}mf\left(x,m,\frac{z}{m}\right)& \text{if }m>0\\ 0&\text{if }(m,z)=(0,0)\\ +\infty&\text{otherwise}\end{cases} \tag{15}\] Note that \((m,z)\mapsto\tilde{f}(x,m,z)\) is lower semi-continuous on \(\mathbb{R}\times\mathbb{R}^{k}\). Under suitable conditions, it can be proved that \((m,z)\mapsto\tilde{f}(x,m,z)\) is convex on \(\mathbb{R}\times\mathbb{R}^{k}\). We also define the space \(\mathbf{K}\), \[\mathbf{K}=\Big{\{}(m,z)\,\Big{|}\,\frac{\partial m}{\partial t}(t,x)-\nu \Delta m(t,x)+\operatorname{div}z(t,x)=0,m(0,x)=m_{0}(x),m(T,x)=m_{T}(x),m \geq 0\Big{\}} \tag{16}\] With all these definitions, we are ready to present the primal problem: \[\inf_{(m,z)\in\mathbf{K}}\mathcal{B}(m,z)=\inf_{(m,z)\in\mathbf{ K}}\int_{\mathcal{Q}_{T}}\tilde{f}\big{(}x,m(t,x),z(t,x)\big{)}\mathrm{d}x \mathrm{d}t \tag{17}\] If problem (17) has a unique optimal solution \((m^{*},z^{*})\) and problem (14) has a unique optimal control \(v^{*}\), then the following connection holds: \(v^{*}(t,x)=z^{*}(t,x)/m^{*}(t,x)\) if \(m^{*}(t,x)>0\), and \(v^{*}(t,x)=0\) if \(m^{*}(t,x)=0\).

**Dual problem.** We now introduce a dual optimization problem. We define the following functionals: \[\mathcal{A}(u)=\inf_{m\geq 0}\int_{\mathcal{Q}_{T}}m(t,x)\Big{(} \frac{\partial u}{\partial t}(t,x)+\nu\Delta u(t,x)-H\big{(}x,m(t,x),\nabla u (t,x)\big{)}\Big{)}\mathrm{d}x\,\mathrm{d}t \tag{18}\] \[\mathcal{F}(u)=\int_{\mathcal{Q}}\left(m_{T}(x)u(T,x)-m_{0}(x)u(0, x)\right)\mathrm{d}x \tag{19}\] \[\mathcal{G}(\mathfrak{a},\mathfrak{b})=-\inf_{m\geq 0}\int_{ \mathcal{Q}_{T}}m(t,x)\Big{(}\mathfrak{a}(t,x)-H\big{(}x,m(t,x),\mathfrak{b} (t,x)\big{)}\Big{)}\mathrm{d}x\,\mathrm{d}t. \tag{20}\] Note that if we define the linear differential operator \(\Lambda u=\big{(}\frac{\partial u}{\partial t}+\nu\Delta u,\nabla u\big{)}\), then \(\mathcal{A}(u)=-\mathcal{G}(\Lambda u)\). Consider the following problem: \[\inf_{u}\mathcal{F}(u)+\mathcal{G}(\Lambda u). \tag{21}\] Based on the Fenchel-Rockafellar duality theorem (see Section 31, Theorem 31.1 in [53]), we expect problems (17) and (21) to be in duality, meaning: \[\inf_{(m,z)\in\mathbf{K}}\mathcal{B}(m,z)=\sup_{u}\big{\{}\mathcal{A}(u)-\mathcal{F}(u)\big{\}}=-\inf_{u}\big{\{}\mathcal{F}(u)+\mathcal{G}(\Lambda u)\big{\}} \tag{22}\] Note that this primal-dual relationship also plays an important role in demonstrating the uniqueness and existence of solutions to MFG and MFC PDE systems, see e.g. [43, 20, 4]. Here, we expect a similar result to hold for MFOT under suitable conditions. The rigorous definition of the two problems and the analysis of this duality relationship are left for future work. For now, we proceed formally. We can at least formally establish a connection between the primal problem, the dual problem, and the optimal control in the following way.
Let \(u^{*}\) be the optimal solution to the dual problem (21) and let \((m^{*},z^{*})\) be the optimal solution to the primal problem (17). Then the optimal control for the original problem (14) is given by: \[v^{*}(t,x)=\partial_{p}H\big{(}x,m^{*}(t,x),\nabla u^{*}(t,x)\big{)}.\] We notice that \((u^{*},m^{*})\) forms a solution to the MFOT PDE system (12). This fact suggests that we can work on the dual problem (21) directly to solve the MFOT problem. Under suitable assumptions, it can be shown that the dual problem (21) is a strongly convex, unconstrained optimization problem over a function space, which motivates the use of classical algorithms from convex optimization. However, the presence of the infinite-dimensional linear operator \(\Lambda\) makes the problem hard to solve efficiently in general. Fortunately, the structure of the objective as a sum of two convex functionals makes the problem amenable to algorithms based on splitting schemes, such as the Alternating Direction Method of Multipliers (ADMM) [17].

```
Data: Initial guess \(\big{(}u^{(0)},q^{(0)},\lambda^{(0)}\big{)}\); number of iterations \(N\); hyperparameter \(r>0\)
Result: Functions \(\big{(}u^{(N)},q^{(N)},\lambda^{(N)}\big{)}\) that are close to the saddle point of \(\mathcal{L}_{r}\) defined in (24)
begin
  for \(k=1,\cdots,N\) do
    \(u^{(k)}=\operatorname*{arg\,min}_{u:\mathcal{Q}_{T}\to\mathbb{R}}\mathcal{F}(u)-\langle\lambda^{(k-1)},\Lambda u\rangle+\frac{r}{2}\big{\|}\Lambda u-q^{(k-1)}\big{\|}^{2}\)
    \(q^{(k)}=\operatorname*{arg\,min}_{q:\mathcal{Q}_{T}\to\mathbb{R}^{k+1}}\mathcal{G}(q)+\langle\lambda^{(k-1)},q\rangle+\frac{r}{2}\big{\|}\Lambda u^{(k)}-q\big{\|}^{2}\)
    \(\lambda^{(k)}=\lambda^{(k-1)}-r\big{(}\Lambda u^{(k)}-q^{(k)}\big{)}\)
```
**Algorithm 1** Vanilla ADMM for MFOT

#### 3.3.2 Description of the algorithm

Introducing a new variable \(q\) that will play the role of \(\Lambda u\), we can rewrite problem (21) as the following constrained optimization program: \[\inf_{u,q:\,q=\Lambda u}\mathcal{F}(u)+\mathcal{G}(q). \tag{23}\] The goal is now to find a saddle point of the associated Lagrangian. In fact, for numerical purposes, we will consider an augmented Lagrangian, defined as follows. Let \(r>0\) be a constant and introduce \(\lambda:\mathcal{Q}_{T}\to\mathbb{R}^{k+1}\), the Lagrange multiplier associated with the constraint \(q=\Lambda u\). Let \(\langle\cdot,\cdot\rangle\) denote the inner product on \(L^{2}(\mathcal{Q}_{T})\). We introduce the augmented Lagrangian: \[\mathcal{L}_{r}(u,q,\lambda)=\mathcal{F}(u)+\mathcal{G}(q)- \langle\lambda,\Lambda u-q\rangle+\frac{r}{2}\|\Lambda u-q\|^{2} \tag{24}\] Now, the original MFOT problem is reduced to finding a saddle point of \(\mathcal{L}_{r}\). We state the original ADMM method, which seeks this saddle point via an alternating optimization procedure, in Algorithm 1.
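To fix ideas, here is a toy finite-dimensional analogue of Algorithm 1 (our illustration only, with \(F(u)=\frac{1}{2}\|u-b\|^{2}\), \(G(q)=\mu\|q\|_{1}\) and \(\Lambda\) a matrix, none of which are the MFOT functionals): the \(u\)-step is then a linear solve and the \(q\)-step a soft-thresholding, while the multiplier update is unchanged.

```python
# Toy finite-dimensional analogue of Algorithm 1 (illustrative choices, not the MFOT
# functionals): minimize F(u) + G(Lam @ u) with F(u) = 0.5*||u - b||^2, G(q) = mu*||q||_1.
import numpy as np

def admm_toy(Lam, b, mu=0.1, r=1.0, iters=200):
    n_q, n_u = Lam.shape
    u, q, lam = np.zeros(n_u), np.zeros(n_q), np.zeros(n_q)
    A = np.eye(n_u) + r * Lam.T @ Lam                 # normal matrix of the u-step
    for _ in range(iters):
        # u-step: argmin_u F(u) - <lam, Lam u> + (r/2)||Lam u - q||^2  (quadratic problem)
        u = np.linalg.solve(A, b + Lam.T @ (lam + r * q))
        # q-step: argmin_q G(q) + <lam, q> + (r/2)||Lam u - q||^2  (soft-thresholding)
        v = Lam @ u - lam / r
        q = np.sign(v) * np.maximum(np.abs(v) - mu / r, 0.0)
        # multiplier step
        lam = lam - r * (Lam @ u - q)
    return u, q, lam

rng = np.random.default_rng(0)
Lam_ex, b_ex = rng.standard_normal((8, 5)), rng.standard_normal(5)
u_star, q_star, _ = admm_toy(Lam_ex, b_ex)
```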
This general procedure can be implemented, for example, when the functions \((u,q,\lambda)\) are approximated by their values on a finite-difference grid. Such a procedure has been used for MFG and MFC problems, using finite elements [11, 9] or finite differences [5]. Furthermore, [11] proved the convergence of this method under suitable conditions. However, as already mentioned, approximating functions by their values on a mesh is not feasible in high dimensions. We thus propose a different implementation of the ADMM based on neural network approximations. In Algorithm 1, the objectives in the steps are given by functionals to be minimized over function spaces, which is not tractable in general. We restrict our attention to spaces of parameterized functions that can be expressed as neural networks, denoted by \(\left(u_{\theta},q_{\omega},\lambda_{\psi}\right)\) with parameters \(\theta,\omega,\psi\) respectively. We then follow the strategy introduced with the DGM [57] and already used in Section 3.2 to create computable loss functions that are stochastic approximations of the functionals. Recall the truncated space domain \(\tilde{\mathcal{Q}}\) and the associated time-space domain \(\tilde{\mathcal{Q}}_{T}=[0,T]\times\tilde{\mathcal{Q}}\). Let \(X\sim\mathcal{U}(\tilde{\mathcal{Q}}_{T})\), \(Y\sim\mathcal{U}(\tilde{\mathcal{Q}})\) be two random variables with uniform distribution in the time-space domain and the space domain respectively. Let \(\rho_{X}\), \(\rho_{Y}\) be the values of the uniform densities on \(\tilde{\mathcal{Q}}_{T}\) and \(\tilde{\mathcal{Q}}\) respectively. Here, we overload the notation \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\) to denote the inner product and norm on both \(L^{2}(\tilde{\mathcal{Q}}_{T})\) and \(\mathbb{R}^{d+1}\): \[\mathcal{L}^{(u)}(\theta;\omega,\psi)=\mathcal{L}_{1}(u_{\theta},q_{\omega}, \lambda_{\psi}),\quad\mathcal{L}^{(q)}(\omega;\theta,\psi)=\mathcal{L}_{2}(u_ {\theta},q_{\omega},\lambda_{\psi}),\quad\mathcal{L}^{(\lambda)}(\psi; \theta,\omega,\psi_{old})=\mathcal{L}_{3}(u_{\theta},q_{\omega},\lambda_{\psi _{old}},\lambda_{\psi}),\] where \[\mathcal{L}_{1}(u,q,\lambda) =\frac{1}{\rho_{Y}}\mathbb{E}_{Y}\left[u(T,Y)m_{T}(Y)-u(0,Y)m_{0} (Y)\right]+\frac{1}{\rho_{X}}\mathbb{E}_{X}\left[\frac{r}{2}\|\Lambda u(X)-q( X)\|^{2}-\langle\Lambda u(X),\lambda(X)\rangle\right] \tag{25}\] \[\mathcal{L}_{2}(u,q,\lambda) =\frac{1}{\rho_{X}}\mathbb{E}_{X}\left[\mathcal{G}(q(X))+\langle \lambda(X),q(X)\rangle+\frac{r}{2}\|\Lambda u(X)-q(X)\|^{2}\right]\] \[\mathcal{L}_{3}(u,q,\lambda_{old},\lambda) =\frac{1}{\rho_{X}}\mathbb{E}_{X}\Big{[}\|\lambda_{old}(X)-r\left( \Lambda u(X)-q(X)\right)-\lambda(X)\|^{2}\Big{]}.
\tag{26}\] Here, the subscript \(old\) is used to refer to the previous iteration: the loss for \(\lambda\) involves the previous estimate \(\lambda_{old}\). When using a neural network, this amounts to using the previous neural network parameters \(\psi_{old}\). The loss function aims at mimicking the effect of the direct update in the third step of standard ADMM (Algorithm 1) when \(\lambda\) is approximated by a neural network. The algorithm DeepADMM is presented in Algorithm 2.

```
Data: Initial parameters \(\theta^{(0)},\omega^{(0)},\psi^{(0)}\); number of ADMM iterations \(K\); SGD parameters
Result: Final parameters \(\theta^{(K)},\omega^{(K)},\psi^{(K)}\)
begin
  for \(k=1,\cdots,K\) do
    Compute \(\theta^{(k)}\) using SGD to (approximately) minimize the loss \(\mathcal{L}^{(u)}(\cdot;\omega^{(k-1)},\psi^{(k-1)})\)
    Compute \(\omega^{(k)}\) using SGD to (approximately) minimize the loss \(\mathcal{L}^{(q)}(\cdot;\theta^{(k)},\psi^{(k-1)})\)
    Compute \(\psi^{(k)}\) using SGD to (approximately) minimize the loss \(\mathcal{L}^{(\lambda)}(\cdot;\theta^{(k)},\omega^{(k)},\psi^{(k-1)})\)
```
**Algorithm 2** DeepADMM for MFOT

We make several remarks regarding DeepADMM and the augmented Lagrangian formulation to help the reader better understand this approach. First, compared with Algorithm 1, the updates in Algorithm 2 for the functions \(u\) and \(q\) are straightforward to interpret. Instead of searching for an optimizer over a function space, we reduce the problem to finite dimension through a stochastic approximation of the objective and search in the parameter space instead. The computed stochastic gradient can be considered an unbiased estimator of the gradient of the functional objective, and the variance of this stochastic gradient decreases as the batch size increases. In Appendix B, we discuss the computation of \(\mathcal{G}\) for several typical models.

## 4 Numerical experiments

In this section, we present numerical experiments obtained with the three methods discussed in the previous section. For brevity, we refer to the three methods respectively introduced in sections 3.1, 3.2 and 3.3 as Method 1, Method 2 and Method 3 (and M1, M2, and M3 for short in the plots). We first consider two test cases for which we have explicit solutions (up to solving ODE systems) and which can thus be used to benchmark our algorithms in any dimension. We then consider two test cases that can be viewed as modifications of standard OT with crowd aversion or congestion effects.

### Case 1: Linear Quadratic Problem

The first class of models that we consider has a linear-quadratic structure, which falls in the setting discussed in Example 2.

#### 4.1.1 Description of the problem

In this model, we take: \[b(x,\mu,a)=Ax+Ba,\qquad f(x,\mu,a)=a^{\top}Ra,\qquad\rho_{0}=\mathcal{N}(\bar{ x}_{0};\Sigma_{0}),\qquad\rho_{T}=\mathcal{N}(\bar{x}_{T};\Sigma_{T}),\] where \(A,B,R,\Sigma_{0},\Sigma_{T}\) are (constant) matrices of suitable sizes. The vectors \(\bar{x}_{0}\) and \(\bar{x}_{T}\) correspond to the initial and terminal means. We will consider two settings. In order to have a benchmark solution, we will take \(\sigma=B\). This enables us to use the solution provided by [29, Section 7.1], which boils down to solving a system of ODEs. For the sake of completeness, we provide the details in Appendix A.
#### 4.1.2 Evaluation Metrics

In this model, since we have access to the optimal solution, we can evaluate the solutions learnt by the three proposed methods against the ground-truth solution. We denote by \(v^{*}\) the optimal control and by \(\hat{v}\) a learnt control. As explained below in detail, we use the following metrics: the total cost (namely \(J^{MFOT}(\hat{v})\), with \(J^{MFOT}\) introduced in (3)), the relative error between the achieved cost and the optimal cost (namely, between \(J^{MFOT}(\hat{v})\) and \(J^{MFOT}(v^{*})\)), the deviation from the terminal distribution (i.e., the Wasserstein distance between the achieved terminal distribution and the target terminal distribution \(\rho_{T}\)), and the \(L^{2}\) error between the learnt control \(\hat{v}\) and the optimal control \(v^{*}\), weighted by the population distribution.

**Computation of the control.** The control is parameterized in different ways across the different methods. For Method 1, \(\hat{v}(t,x)=v_{\theta}(t,x)\). For Methods 2 and 3, \(\hat{v}(t,x)=-\frac{1}{2}BR^{-1}\nabla\hat{u}_{\theta}(t,x)\), where \(\hat{u}_{\theta}\) is the neural network that approximates the dual variable, solution to the HJB equation.

**Total cost.** Recall the objective \(J^{MFOT}\) defined in (1). Let \(v\) be a control. In the present linear-quadratic case, we have \[J^{MFOT}(v)=\int_{0}^{T}\int_{\mathcal{Q}}m^{v}(t,x)f(x,m^{v},v)\mathrm{d}x \,\mathrm{d}t=\int_{0}^{T}\int_{\mathcal{Q}}m^{v}(t,x)v(t,x)^{\top}Rv(t,x) \mathrm{d}x\,\mathrm{d}t,\] where \(m^{v}\) is the density of the mean-field distribution driven by \(v\), which satisfies the KFP PDE (12). In order to evaluate \(J^{MFOT}(v)\), we use Monte Carlo simulations. We discretize the time variable \(t\). Let \(N_{T}\) be the number of time steps, each of length \(\Delta t=T/N_{T}\). We consider an equidistant time discretization with time steps \(\{t_{0}=0,t_{1}=\Delta t,\dots,t_{N_{T}}=N_{T}\Delta t\}\). Again, we simulate solutions to the underlying SDE using an Euler-Maruyama scheme similar to the one used in (7). We simulate a family of \(N\) sequences \(((X_{t_{n}}^{i,v})_{n=0,\dots,N_{T}})_{i=1,\dots,N}\) using the following update \[\begin{cases}X_{0}^{i}\sim\rho_{0}\quad\text{ i.i.d.}\\ X_{t_{n+1}}^{i,v}=X_{t_{n}}^{i,v}+(AX_{t_{n}}^{i,v}+Bv(t_{n},X_{t_{n}}^{i,v})) \Delta t+\sigma\sqrt{\Delta t}\Delta W_{n}^{i},\end{cases} \tag{27}\] where \(\{\Delta W_{n}^{i}\}\) are i.i.d. standard Gaussian random variables in \(\mathbb{R}^{d}\). With these sampled sequences, we estimate the objective as \[J^{MFOT}(v)\approx\frac{1}{N}\sum_{i=1}^{N}\sum_{n=0}^{N_{T}-1}v(t_{n},X_{t_{n}}^{i,v}) ^{\top}Rv(t_{n},X_{t_{n}}^{i,v})\Delta t.\]

**Relative Error.** The relative error between \(J^{MFOT}(\hat{v})\) and \(J^{MFOT}(v^{*})\) is defined as \[\frac{|J^{MFOT}(\hat{v})-J^{MFOT}(v^{*})|}{|J^{MFOT}(v^{*})|}.\] The main reason to consider the relative error instead of the absolute error is that the scale of the running cost varies greatly across different problems. Also, we want to stress that even though \(v^{*}\) is the analytical optimal solution, it may happen that \(J^{MFOT}(\hat{v})<J^{MFOT}(v^{*})\) if \(\hat{v}\) does not exactly satisfy the terminal constraint (in contrast with \(v^{*}\)).
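A minimal NumPy sketch of this Monte Carlo evaluation of the total cost, for the LQ dynamics (27); the function names and the sampler for \(\rho_{0}\) are illustrative assumptions, not part of our implementation.

```python
# Minimal NumPy sketch of the Monte Carlo cost evaluation described above, for the
# LQ dynamics (27).  `v` is any callable control (t, X) -> array of shape (N, k);
# A, B, R, sigma are the model matrices and rho0_sampler draws the initial particles.
import numpy as np

def estimate_cost(v, A, B, R, sigma, rho0_sampler, T=1.0, N=300, NT=20, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / NT
    X = rho0_sampler(N)                       # (N, d) i.i.d. samples from rho_0
    total = 0.0
    for n in range(NT):
        t_n = n * dt
        ctrl = v(t_n, X)                      # control evaluated along the simulated paths
        # accumulate the running cost v^T R v over the time grid, averaged over particles
        total += np.einsum('ni,ij,nj->n', ctrl, R, ctrl).mean() * dt
        dW = rng.standard_normal(X.shape)     # i.i.d. Gaussian increments
        X = X + (X @ A.T + ctrl @ B.T) * dt + np.sqrt(dt) * dW @ sigma.T
    return total, X                           # estimated J^MFOT(v) and terminal particles
```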
**Expected \(L^{2}\) error for control.** The expected \(L^{2}\) error between the learnt control \(\hat{v}\) and the ground-truth control \(v^{*}\) is defined as \[d_{L^{2}}(\hat{v},v^{*})=\int_{0}^{T}\int_{\mathcal{Q}}m^{*}(t,x)\|\hat{v}(t, x)-v^{*}(t,x)\|^{2}\mathrm{d}x\,\mathrm{d}t\] where \(m^{*}\) is the density of the optimal mean field associated with \(v^{*}\). For the LQ problem, the optimal mean field \(m^{*}\) is Gaussian for any \(t\in[0,T]\), with mean \(\mu_{t}\) and covariance \(\Sigma_{t}\) given by analytical formulas in Appendix A. We can thus evaluate the \(L^{2}\) error again with Monte Carlo samples for each time step. As above, we discretize the time variable \(t\) with \(N_{T}+1\) points and, for \(t_{n}=n\Delta t,n=0,\ldots,N_{T}\), we generate i.i.d. samples \((X_{t_{n}}^{i,v})_{i=1,\ldots,N}\sim\mathcal{N}(\mu_{t_{n}},\Sigma_{t_{n}})\). We then estimate the \(L^{2}\) error as: \[d_{L^{2}}(\hat{v},v^{*})\approx\frac{1}{N}\sum_{i=1}^{N}\sum_{n=0}^{N_{T}-1}\| \hat{v}(t_{n},X_{t_{n}}^{i,v})-v^{*}(t_{n},X_{t_{n}}^{i,v})\|^{2}\Delta t.\]

**Deviation of distribution.** The deviation of the mean field \(\hat{\rho}_{T}\) from the terminal target distribution \(\rho_{T}\) is quantified by two different metrics: the Wasserstein-2 distance \(\mathcal{W}_{2}(\hat{\rho}_{T},\rho_{T})\) and the \(L^{2}\) distance \(d_{L^{2}}(\hat{m}_{T},m_{T})\). Here, \(\hat{\rho}_{T}\) is the distribution at time \(T\) of the mean field driven by the learnt control \(\hat{v}\), while \(\hat{m}_{T}\) and \(m_{T}\) are the densities of \(\hat{\rho}_{T}\) and \(\rho_{T}\) respectively.

* **Wasserstein-2 distance.** We compute the Wasserstein-2 distance with a method similar to the one discussed in Section 3.1. We simulate \(N\) particles following the dynamics (27), and obtain a collection of \(N\) samples \((X_{T}^{i,\hat{v}})_{i=1,\ldots,N}\), which forms an empirical distribution approximating \(\hat{\rho}_{T}\). We also generate \(N\) samples directly from the target distribution \(\rho_{T}\), denoted by \((Y_{T}^{i})_{i=1,\ldots,N}\). Then, we define the distance matrix \(\mathcal{M}\) by \(\mathcal{M}_{ij}=|X_{T}^{i,\hat{v}}-Y_{T}^{j}|^{2}\), and we recall that the set \(U_{N}\) is defined by (8). We approximate the Wasserstein-2 distance between the two empirical distributions formed by the \(X_{T}^{i,\hat{v}}\) and the \(Y_{T}^{i}\) through the following linear program: \[\mathcal{W}_{2}\left(\hat{\rho}_{T},\rho_{T}\right)\approx\Big{(}\min_{ \mathcal{A}\in U_{N}}\langle\mathcal{A},\mathcal{M}\rangle\Big{)}^{1/2}.\]
* \(L^{2}\) **distance.** We will also use as a metric the \(L^{2}\) distance between \(\hat{m}_{T}\) and \(m_{T}\) on the truncated domain \(\tilde{\mathcal{Q}}\). Since this integral cannot be computed in closed form, we again use a Monte Carlo approach. We uniformly sample \(N\) points in \(\tilde{\mathcal{Q}}\), denoted by \((X_{i})_{i=1,\ldots,N}\). Let \(C(\tilde{\mathcal{Q}})\) denote the inverse of the value of the uniform density, i.e., the volume of \(\tilde{\mathcal{Q}}\). Then we can approximate the \(L^{2}\) distance by: \[\frac{C(\tilde{\mathcal{Q}})}{N}\sum_{i=1}^{N}\|\hat{m}_{T}(X_{i})-m_{T}(X_{i}) \|^{2}.\]
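As an aside, for two samples of equal size with uniform weights the linear program above can be solved as a linear assignment problem; the following sketch assumes SciPy, and the reduction to `linear_sum_assignment` is our illustration rather than the exact routine used in our code.

```python
# Sketch (assuming SciPy) of the empirical Wasserstein-2 estimate used above.  For two
# equal-size samples with uniform weights, an optimal coupling in U_N can be taken to
# be a permutation matrix, so the linear program reduces to a linear assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_empirical(X, Y):
    # X, Y: arrays of shape (N, d) containing the two samples
    diff = X[:, None, :] - Y[None, :, :]
    M = np.sum(diff ** 2, axis=-1)           # squared-distance matrix M_ij
    row, col = linear_sum_assignment(M)      # optimal matching between the samples
    return np.sqrt(M[row, col].mean())       # Wasserstein-2 estimate
```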
#### 4.1.3 Numerical results

In the numerical tests, we take the values given in Table 1 for the parameters of the model, with the time horizon \(T=1.0\). For LQ test 1, in dimension 1, Figure 1 displays the evolution of the density. For Methods 1, 2, and 3, we obtain the learnt control from the neural networks and simulate \(N\) trajectories by Monte Carlo following the dynamics (27), with \(v\) replaced by the learnt control. We then estimate the mean-field distribution using kernel density estimation (KDE). We see that the distributions obtained with the three methods match the ground-truth one obtained with ODEs well. The distributions move towards the right and concentrate around the final mean. Figure 2 shows the evolution of the control. We see that the three methods provide good approximations of the true optimal control, at least in the region where the density is high. In regions where the density is very low, the control is not well approximated, but this is not an issue as far as the optimal behavior of the population is concerned. The first part of Table 2 shows the results obtained for the metrics introduced above. We see that each of the three methods achieves a smaller total cost than the true optimal control. This would not be possible for controls that satisfy the terminal constraint exactly, but it is possible here because the methods satisfy the planning constraint only approximately. The optimal control is well approximated, as shown by the \(L^{2}\) distance to the true optimal control. Furthermore, we see that Methods 2 and 3 have a higher Wasserstein-2 distance between the terminal distribution and the target distribution, but the \(L^{2}\) distance is much lower. As for LQ test 2, in dimension 2, Figure 3 displays the evolution of the density for each of the methods. The densities move from the bottom left corner to the top right corner. Furthermore, the terminal distribution is more concentrated because the terminal variance is smaller than the initial variance. Figure 4 shows the evolution of the first dimension of the control (the second dimension is similar, so we omit it for brevity). The ground-truth control is linear in space for each time step. We see that the three methods manage to learn approximately linear controls, at least in the region where the density is significantly positive. Table 3 shows the results obtained for the metrics introduced above. We see that here again, each of the three methods achieves a smaller total cost than the true optimal control because the terminal constraint is not perfectly satisfied. The optimal control is well approximated, and the terminal distribution is matched with good accuracy.

\begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c|c||} \hline \hline Test & \(d\) & \(A\) & \(B\) & \(\sigma\) & \(R\) & \(\bar{x}_{0}\) & \(\Sigma_{0}\) & \(\bar{x}_{T}\) & \(\Sigma_{T}\) \\ \hline \hline LQ Test 1 & 1 & 1 & 1 & 1 & \(\frac{1}{2}\) & \(0.0\) & \(1\) & \(2.0\) & \(0.5\) \\ \hline LQ Test 2 & 2 & \(I_{d}\) & \(I_{d}\) & \(I_{d}\) & \(\frac{1}{2}I_{d}\) & \([0.0,0.0]\) & \(I_{d}\) & \([2.0,2.0]\) & \(\frac{1}{2}I_{d}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters for the two linear-quadratic test cases

Figure 1: Evolution of the density in the LQ Test case 1. Each plot corresponds to one time step and displays the densities as functions of the space variable, \(x\). The densities shown are those obtained by applying the control learnt by each of the three deep learning methods, as well as the ground-truth density given by the ODE method.

### Case 2: Transport with congestion effects

The second class of models that we consider is inspired by crowd motion and falls in the setting discussed in Example 3.
#### 4.2.1 Description of the problem In this model, intuitively, the cost is higher when moving through a crowded region, i.e., where the density is high. Specifically, we take: \[b(x,\mu,a)=a,\qquad f(x,\mu,a)=R|\ell(x,\mu)|^{\gamma}|a|^{2},\qquad\rho_{0}= \mathcal{N}(\bar{x}_{0};\Sigma_{0}),\qquad\rho_{T}=\mathcal{N}(\bar{x}_{T}; \Sigma_{T}).\] For the function \(\ell\), we take two different models. We consider the following non-local dependence: \[\ell(x,\mu)=c+\rho_{\epsilon}\star\mu(x),\] where \(c>0\) is a constant, \(\rho_{\epsilon}\) is a Gaussian kernel and \(\star\) denotes the convolution. We use Method 1 to solve the MFOT problem with this function \(\ell\). Since it is based on Monte Carlo simulations of trajectories, it is straightforward to compute a convolution with the empirical distribution at a given time step. We also consider a variation with a local dependence. \[\ell(x,\mu)=c+m(x)\] where \(c>0\) is a constant and \(m\) denotes the density of \(\mu\). For this type of model, Methods 2 and 3 are better suited since, in these methods, we directly have access to the approximate density in the form of a neural network. \begin{table} \begin{tabular}{||c|c|c|c|c|c|c|} \hline Test case & Method & Total Cost & Relative Error & \(d_{L^{2}}(\hat{v},v^{*})\) & \(\mathcal{W}_{2}(\hat{\rho}_{T},\rho_{T})\) & \(d_{L^{2}}(\hat{m}_{T},m_{T})\) \\ \hline & ODE \((v^{*})\) & \(2.126\) & - & - & - & - \\ Linear Quadratic & M1 & \(2.099\) & \(1.24\%\) & \(0.021\) & \(0.002\) & \(0.006\) \\ LQ Test 1 & M2 & \(2.096\) & \(1.41\%\) & \(0.003\) & \(0.043\) & \(0.00004\) \\ & M3 & \(2.077\) & \(2.29\%\) & \(0.011\) & \(0.031\) & \(0.001\) \\ \hline \end{tabular} \end{table} Table 2: Comparison of three different methods v.s. the analytical solution on the LQ test case 1. The evaluation metrics are described in Section 4.1.2. Figure 2: Evolution of the control in the LQ Test case 1. Each plot corresponds to one time step and displays the controls as functions of the space variable, \(x\). The controls are: The control learnt by each of the three deep learning methods as well as the ground-truth control given by the ODE method. #### 4.2.2 Numerical results We focus on one test case called "Congestion" below in dimension \(d=1\). In this model, \(\gamma=1\). For the sake of comparison, we also consider the corresponding model with the same choice of parameters except that \(\gamma=0\), i.e., there are no congestion effects in the running cost. The values that we take in the numerical tests are given in Table 4 below. In Figure 5, we present the evolution of the density under the control learnt by each of the three methods for congestion cases 1 and 2. Each row corresponds to one method. We see that, in the case where \(\gamma=0\) (no congestion effect), the mass is transported directly towards the terminal distribution without much change in its shape. In contrast, in the case with \(\gamma=1\), the mass spreads in space and one part starts moving towards the target mean \(\bar{x}_{T}=2\) whereas another part stays behind and catches up at later time steps. This is consistent with the idea that moving in congested regions is more expensive, so some agents would agree to wait until the density decreases before moving forward. Finally, in Figure 6, we present the evolution of the density under the control learnt by each of the three methods for congestion case 3, which is in dimension 5. Each row corresponds to one method. 
\begin{table} \begin{tabular}{||c|c|c|c|c|c|c|} \hline Test case & Method & Total Cost & Relative Error & \(d_{L^{2}}(\hat{v},v^{*})\) & \(\mathcal{W}_{2}(\hat{\rho}_{T},\rho_{T})\) & \(d_{L^{2}}(\hat{m}_{T},m_{T})\) \\ \hline & ODE \((v^{*})\) & \(4.175\) & - & - & - & - \\ Linear Quadratic & M1 & \(4.117\) & \(1.39\%\) & \(0.074\) & \(0.043\) & \(0.007\) \\ LQ Test 2 & M2 & \(3.935\) & \(5.63\%\) & \(0.043\) & \(0.403\) & \(0.00005\) \\ & M3 & \(4.054\) & \(2.89\%\) & \(0.131\) & \(0.561\) & \(0.015\) \\ \hline \end{tabular} \end{table} Table 3: Comparison of three different methods vs. the analytical solution on the LQ test case 2. The evaluation metrics are described in Section 4.1.2.

Figure 3: Evolution of the density in the Linear Quadratic Test case 2. Each column corresponds to one time step, and each row corresponds to one of the methods. Each plot displays the density as a function of the space variable, i.e., \((m(t,x))_{x\in[-4,6]^{2}}\). The first row corresponds to the solution obtained by the ground-truth ODE method. The second, third and fourth rows correspond respectively to methods 1, 2 and 3.

To visualize the density evolution in dimension 5, we plot the marginal distribution of the mean field distribution on the first and second dimensions. We see that, similarly to congestion case 2, the mass spreads in space and gradually moves towards the target mean. Compared with congestion case 2, the difference in the moving pattern and the extent of spreading is due to the difference of parameters in Table 4. With a larger value of \(c\), the behavior of the density would be closer to a direct transport to the terminal distribution, without changes in the shape of the distribution.

\begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c|c||} \hline Test case & \(d\) & \(\gamma\) & \(c\) & \(\sigma\) & \(R\) & \(\bar{x}_{0}\) & \(\Sigma_{0}\) & \(\bar{x}_{T}\) & \(\Sigma_{T}\) \\ \hline Case 1, No congestion & \(1\) & \(0\) & \(0.1\) & \(0.1\) & \(0.5\) & \(0\) & \(0.04\) & \(2\) & \(0.04\) \\ \hline Case 2, Congestion & \(1\) & \(1\) & \(0.1\) & \(0.1\) & \(0.5\) & \(0\) & \(0.04\) & \(2\) & \(0.04\) \\ \hline Case 3, Congestion & \(5\) & \(1\) & \(1\) & \(1\) & \(0.5\) & \(0\) & \(0.1\) & \(2\) & \(0.1\) \\ \hline \end{tabular} \end{table} Table 4: Parameters for the test case with congestion and the benchmark model without congestion effects

Figure 4: Evolution of the control in the Linear Quadratic Test case 2. Each column corresponds to one time step, and each row corresponds to one of the methods. Each plot displays the first dimension of the control as a function of the space variable, i.e., \((v(t,x)_{1})_{x\in[-4,6]^{2}}\). The first row corresponds to the solution obtained by the ground-truth ODE method. The second, third and fourth rows correspond respectively to methods 1, 2 and 3.

### Remarks on the choice of hyperparameters

Each method has several hyperparameters, including the architecture of the neural networks. We provide below some remarks about the choice of hyperparameters in our implementation.

**Method 1.** In our implementation, we choose \(G(r)=C_{W}r\) where \(C_{W}\) is a hyperparameter that we adjust dynamically. We increase the constant \(C_{W}\) when we expect a higher running cost (for instance, in a higher dimension) in order to give enough importance to the penalty. The coefficient \(\alpha\) of regularization for the computation of the Wasserstein distance is also a hyperparameter that we adjust dynamically using the following heuristics. We start with
a given value for \(\alpha\) and, when the estimated Wasserstein distance is small enough, we reduce the value of \(\alpha\). The idea is that, as long as the terminal distribution does not match well enough the target distribution, we need a high level of regularization in order to estimate efficiently the Wasserstein distance between them. As the two distributions get closer, we can decrease the degree of regularization in order to have a more accurate estimation of the Wasserstein distance. The way we adjust \(C_{W}\) also depends on the dimension of the state variable. There is also a computational time aspect to take into account: as \(\alpha\) becomes smaller, the computations take more time (see 3.1.2 for more details). For the neural network, we take a feedforward fully connected neural network with 6 layers of 60 neurons each. The other hyperparameters are the number of particles \(N\) and the number of time steps \(N_{T}\). We take \(N=300\) and \(N_{T}=20\). **Method 2.** In the second method, no time or space discretization is needed, and the density is directly approximated by a neural network, so we do not need to use a finite number of particles. However, we need to choose the values of the weights \(C_{0}^{(\mathrm{KFP})},\,C_{T}^{(\mathrm{KFP})},\,C^{(\mathrm{KFP})},\,\)and \(C^{(\mathrm{HJB})}\) in the loss function. We used \(C_{0}^{(\mathrm{KFP})}=20,\,C_{T}^{(\mathrm{KFP})}=50,\,C^{(\mathrm{KFP})}=20, \,C^{(\mathrm{HJB})}=1\). As for the neural network, we used the architecture proposed in the DGM article [57], with 2 layers and a width equal to 40. During the training, at each iteration of SGD, we use a minibatch of 500 points in time and space, and 500 points in space for the initial and terminal conditions. **Method 3.** The main hyperparameter in this method is \(r\), which is used in the definition of the augmented Lagrangian (24). For the experiments, we select \(r=0.1\). Even though, in theory, the convergence of ADMM is independent of the choice of \(r\), in practice, we often find that a large \(r\) value could potentially increase numerical instability and lead the algorithm to diverge. Similarly, a small \(r\) value could slow down the convergence. As for the neural networks, we use the following architectures. For both \(u_{\theta}\), \(q_{\omega}\), and \(\lambda_{\psi}\), in general, we use a fully connected neural network with residual connections, sigmoid activation function, and appropriate output dimension. We use \(6\) layers and \(100\) neurons per layer. For LQ test cases, we further consider an extra quadratic correction in addition to the neural networks: the output of \(u_{\theta}\) is the sum of neural network output and a quadratic function with trainable weights. To effectively model the mean field density, a sigmoid activation function is applied to the first dimension of the output of the neural network \(\lambda_{\psi}\), and then the result is multiplied by a constant \(C\). In this way, the first dimension of \(\lambda_{\psi}\) takes values in \((0,C)\). In the experiments, we take \(C=1\) for the LQ test cases and \(C=5\) for the congestion test cases. During training, at each iteration of SGD, we use a minibatch of 512 points in time and space, and 512 points in space for the initial and terminal conditions. 
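For illustration, here is a minimal PyTorch-style sketch of the output scaling just described for \(\lambda_{\psi}\) (the first output component is passed through a sigmoid and multiplied by the constant \(C\), so that it takes values in \((0,C)\)); the class name and layer sizes are illustrative assumptions only.

```python
# Sketch (PyTorch assumed) of the output scaling described for Method 3.
# Layer sizes and the class name are illustrative, not the exact architecture used.
import torch
import torch.nn as nn

class ScaledOutputNet(nn.Module):
    def __init__(self, in_dim, out_dim, C=1.0, width=100, depth=6):
        super().__init__()
        self.C = C
        self.inp = nn.Linear(in_dim, width)
        self.hidden = nn.ModuleList([nn.Linear(width, width) for _ in range(depth)])
        self.out = nn.Linear(width, out_dim)

    def forward(self, z):
        h = torch.sigmoid(self.inp(z))
        for layer in self.hidden:
            h = h + torch.sigmoid(layer(h))          # residual connections
        y = self.out(h)
        first = self.C * torch.sigmoid(y[..., :1])   # first component constrained to (0, C)
        return torch.cat([first, y[..., 1:]], dim=-1)
```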
Figure 5: Visualization of the mean field density \(\hat{m}(t,x)\) in Congestion test Cases 1 and 2

## 5 Conclusion and future directions

In this work, we have proposed three numerical methods based on deep learning for mean field optimal transport problems. The three methods can tackle a larger class of problems than deep learning methods proposed previously, which mostly focused on the Schrödinger bridge problem or MFGs with a specific structure. The first method replaces the terminal constraint with a penalty and then directly learns the optimal control using Monte Carlo trajectories. The second method solves a PDE system which is obtained as the optimality conditions for the MFOT problem. The third method relies on an augmented Lagrangian approach for the variational formulation of the problem. The numerical results show that the three methods match the analytical solution on an LQ problem, and that they are able to handle non-trivial mean field interactions modeling congestion effects. From here, we can envision several research directions. First of all, the theoretical analysis of the MFOT problem remains to be tackled. For example, the existence and uniqueness of the solution to the PDE system have been proved only in relatively specific cases, see e.g. [1, 19, 50, 34]. It would be interesting to extend the analysis to more general forms of dynamics and cost functions. From the numerical point of view, it would be interesting to scale up the methods proposed in this work to higher dimensions, and to explore other deep learning methods. The numerical analysis and the convergence proof of the proposed methods also remain to be investigated in future work.
2309.17142
Stirling complexes
In this paper we study natural reconfiguration spaces associated to the problem of distributing a fixed number of resources to labeled nodes of a tree network, so that no node is left empty. These spaces turn out to be cubical complexes, which can be thought of as higher-dimensional geometric extensions of the combinatorial Stirling problem of partitioning a set of named objects into non-empty labeled parts. As our main result, we prove that these Stirling complexes are always homotopy equivalent to wedges of spheres of the same dimension. Furthermore, we provide several combinatorial formulae to count these spheres. Somewhat surprisingly, the homotopy type of the Stirling complexes turns out to depend only on the number of resources and the number of the labeled nodes, not on the actual structure of the tree network.
Dmitry N. Kozlov
2023-09-29T11:14:12Z
http://arxiv.org/abs/2309.17142v1
# Stirling complexes

###### Abstract.

In this paper we study natural reconfiguration spaces associated to the problem of distributing a fixed number of resources to labeled nodes of a tree network, so that no node is left empty. These spaces turn out to be cubical complexes, which can be thought of as higher-dimensional geometric extensions of the combinatorial Stirling problem of partitioning a set of named objects into non-empty labeled parts. As our main result, we prove that these Stirling complexes are always homotopy equivalent to wedges of spheres of the same dimension. Furthermore, we provide several combinatorial formulae to count these spheres. Somewhat surprisingly, the homotopy type of the Stirling complexes turns out to depend only on the number of resources and the number of the labeled nodes, not on the actual structure of the tree network.

## 1. Stirling complexes

### Motivation

Consider the situation where \(n\) unique resources need to be distributed among \(m\) locations. Clearly, subject to the only condition that \(n\geqslant m\), this can be done in many different ways. Specifically, the number of solutions is equal to \(m!\genfrac{\{}{\}}{0.0pt}{}{n}{m}\), where \(\genfrac{\{}{\}}{0.0pt}{}{n}{m}\) is the _Stirling number of the second kind_, which is a classical combinatorial function, counting the number of ways \(n\) objects can be partitioned into \(m\) non-empty groups, see [10]. Imagine furthermore that the locations to which the resources are distributed are connected by a tree network, and that each resource can be shifted from its location to a neighboring one. Simultaneous multiple shifts of different resources are allowed, as long as at any point of this shifting procedure there remain some resources, which are not being moved, in each node. We would like to model this situation by introducing a higher-dimensional parameter space which encodes the interplay of such shifts. In what follows we introduce a family of combinatorial cubical complexes, which fulfill this task. We shall call these complexes the _Stirling complexes_. In recent years topology has increasingly been used in applications, most notably in data analysis, see [1] and the references therein. The idea of using higher-dimensional cell complexes to record transformations of combinatorial objects has been a further major thread in the tapestry of applied topology. For instance, a family of prodsimplicial complexes has been constructed in [1], see also [1, 2, 3], to find topological obstructions to graph colorings, a notoriously hard problem. Another example is provided by the so-called protocol complexes, which have been introduced as a part of the topological approach to questions in theoretical distributed computing, see [13] and the numerous references therein. Ideally, such constructions provide deeper insight into the original combinatorial questions, yielding at the same time interesting, often highly symmetric families of combinatorial cell complexes. In what follows, we shall use standard facts and terminology of graph theory, as well as algebraic topology. If the need arises, the reader is invited to consult [Har] for graph theory, and [FFG, Fu, GH, Hat, Ko08, Ko20, Mu] for algebraic topology.

### Definition of the Stirling complexes

Let \(m\) be an arbitrary integer, \(m\geqslant 2\), and let \(T\) be an arbitrary tree on \(m\) vertices, labeled with numbers \(1\) through \(m\). This tree models our network. Assume furthermore we have \(n\geqslant m\).
We can view \(T\) as a \(1\)-dimensional simplicial complex, which leads us to consider the cubical complex \(T^{n}\). Let us make the following observations about this complex.

* The cubes of \(T^{n}\) are indexed by the \(n\)-tuples \(c=(c_{1},\ldots,c_{n})\), where each \(c_{i}\) is either a vertex or an edge of \(T\).
* The dimension of \(c\) is equal to the number of \(c_{i}\)'s which are edges. Accordingly, the vertices of \(T^{n}\) are indexed by the \(n\)-tuples of the vertices of \(T\), the dimension of \(T^{n}\) is equal to \(n\), and the top-dimensional cubes are indexed by the \(n\)-tuples of the edges.
* The boundary cubes of \(c\) are obtained by replacing edges in the indexing \(n\)-tuple with adjacent vertices. The number of replaced edges is precisely the codimension of the corresponding boundary cube.

We are now ready to define our main objects of study.

**Definition 1.1**.: _Given a tree \(T\) with \(m\geqslant 2\) vertices, and a positive integer \(n\), the_ **Stirling complex** \(\mathcal{S}tr(T,n)\) _is the subcomplex of \(T^{n}\) consisting of all \(n\)-tuples \(c=(c_{1},\ldots,c_{n})\), such that each vertex of \(T\) occurs as an entry in \(c\) at least once._

Since the condition of Definition 1.1 is preserved by taking the boundary, the Stirling complexes are well-defined. The following facts hold for Stirling complexes.

* If \(n<m\), the condition in Definition 1.1 cannot be fulfilled, so \(\mathcal{S}tr(T,n)\) is empty in this case.
* The complex \(\mathcal{S}tr(T,m)\) consists of \(m!\) vertices, indexed by all permutations of the set \([m]=\{1,\ldots,m\}\).
* In general, the vertices of \(\mathcal{S}tr(T,n)\) are indexed by all ways to partition the set \([n]\) into \(m\) non-empty labeled parts. Accordingly, the number of vertices of \(\mathcal{S}tr(T,n)\) is equal to \(m!\genfrac{\{}{\}}{0.0pt}{}{n}{m}\).
* The dimension of \(\mathcal{S}tr(T,n)\) is equal to \(n-m\), since this is the maximal number of resources which can be assigned to the edges of \(T\).

For each \(0\leqslant d\leqslant n-m\), the Stirling complex \(\mathcal{S}tr(T,n)\) has \(\binom{n}{d}(m-1)^{d}m!\genfrac{\{}{\}}{0.0pt}{}{n-d}{m}\) cubes of dimension \(d\). To see this, first choose \(d\) resources among \(n\), then assign each resource to one of the \(m-1\) edges, and then finally distribute the rest of the resources to the nodes, so that no node is left empty. This gives us the following formula for the Euler characteristic: \[\chi(\mathcal{S}tr(T,n))=\sum_{d=0}^{n-m}(-1)^{d}\binom{n}{d}(m-1)^{d}m!\genfrac{\{}{\}}{0.0pt}{}{n-d}{m}. \tag{1.1}\] In what follows, we shall derive a better formula for \(\chi(\mathcal{S}tr(T,n))\).

### Examples

To acquaint ourselves with the Stirling complexes, let us consider a few further examples.

_Example 1_.: The first interesting example is \(\mathcal{S}\mathrm{tr}(T,m+1)\). The dimension of this Stirling complex is \(1\), so it is a graph. The numerical data of this graph is the following.

* The number of vertices of \(\mathcal{S}\mathrm{tr}(T,m+1)\) is \[m!\genfrac{\{}{\}}{0.0pt}{}{m+1}{m}=m!\binom{m+1}{2}=\frac{m}{2}(m+1)!.\] The vertices of \(\mathcal{S}\mathrm{tr}(T,m+1)\) are indexed by the \((m+1)\)-tuples of the vertices of \(T\), with one vertex occurring twice and all other vertices occurring exactly once.
* As a graph \(\mathcal{S}\mathrm{tr}(T,m+1)\) has \((m-1)(m+1)!\) edges; the edges are indexed by \((m+1)\)-tuples consisting of one edge and \(m\) vertices of \(T\), with each vertex occurring exactly once.
Accordingly, the Euler characteristic of this Stirling complex is given by \[\chi(\mathcal{S}\mathrm{tr}(T,m+1))=-\frac{1}{2}(m-2)(m+1)!.\] It is easy to see using a direct argument that \(\mathcal{S}\mathrm{tr}(T,m+1)\) is always connected. Therefore it is homotopy equivalent to a wedge of \(\frac{1}{2}(m-2)(m+1)!+1\) circles. Consider now the special case when \(m=4\). Let \(T_{1}\) be the tree with one vertex of degree \(3\) and \(3\) leaves. Let \(T_{2}\) be the string with \(3\) edges: it has \(2\) vertices of degree \(2\) and \(2\) leaves. Both \(\mathcal{S}\mathrm{tr}(T_{1},5)\) and \(\mathcal{S}\mathrm{tr}(T_{2},5)\) are connected and have \(240\) vertices and \(360\) edges. However, these two graphs are different: \(\mathcal{S}\mathrm{tr}(T_{1},5)\) has \(60\) vertices with valency \(6\), and the rest of the vertices with valency \(2\), whereas all vertices of \(\mathcal{S}\mathrm{tr}(T_{2},5)\) have valency \(2\) or \(4\). We see therefore that, while the topology of \(\mathcal{S}\mathrm{tr}(T_{1},5)\) and \(\mathcal{S}\mathrm{tr}(T_{2},5)\) is the same, the spaces themselves depend on the actual tree structure of \(T_{1}\) and \(T_{2}\).

_Example 2_.: Next, consider the cubical complexes \(\mathcal{S}\mathrm{tr}(T,m+2)\), for \(m\geqslant 2\). These are \(2\)-dimensional. The number of vertices is given by \[f_{0}\coloneqq m!\genfrac{\{}{\}}{0.0pt}{}{m+2}{m}=m!\frac{1}{24}m(m+1)(m+2)(3 m+1)=m(3m+1)\frac{(m+2)!}{24}.\] The number of edges is given by \[f_{1}\coloneqq(m+2)(m-1)\frac{m}{2}(m+1)!=12m(m-1)\frac{(m+2)!}{24}.\] Finally, the number of squares is given by \[f_{2}\coloneqq\binom{m+2}{2}(m-1)^{2}m!=12(m-1)^{2}\frac{(m+2)!}{24}.\] So, \[\chi(\mathcal{S}\mathrm{tr}(T,m+2))=f_{0}+f_{2}-f_{1}=(3m^{2}-11m+12)\frac{(m+ 2)!}{24}.\]

_Example 3_.: Let us now consider small values of \(m\). Set \(m:=2\), so \(T\) is just an edge. The complex \(\mathcal{S}\mathrm{tr}(T,n)\) is a cubical subdivision of the \((n-2)\)-dimensional sphere. \(\mathcal{S}\mathrm{tr}(T,3)\) is a hexagon. \(\mathcal{S}\mathrm{tr}(T,4)\) is a rhombic dodecahedron, whose \(f\)-vector is \((14,24,12)\). In general the \(f\)-vector of \(\mathcal{S}tr(T,n)\), when \(T\) is a single edge, is \((f_{0},\ldots,f_{n-2})\), where \[f_{k}=\binom{n}{k}(2^{n-k}-2),\text{ for }k=0,\ldots,n-2.\] The cubical complex \(\mathcal{S}tr(T,n)\) can be obtained by starting with an \(n\)-cube \(K\) and then deleting two opposite vertices \(a\) and \(b\), together with all smaller cubes in \(K\) containing \(a\) or \(b\). The author is not aware whether there exists some established terminology for these complexes, beyond the cases \(n=3\) and \(n=4\).

## 2. The topology of the Stirling complexes

### The formulation of the main theorem

Somewhat surprisingly, our main theorem implies that the homotopy type of the Stirling complexes \(\mathcal{S}tr(T,n)\) only depends on \(n\) and on the number of vertices in \(T\), not on the actual tree structure.

**Theorem 2.1**.: _Assume \(T\) is a tree with \(m\) vertices, \(m\geqslant 2\), and \(n\) is an integer, \(n\geqslant m\). The cubical complex \(\mathcal{S}tr(T,n)\) is homotopy equivalent to a wedge of \((n-m)\)-dimensional spheres._

_Let \(f(m,n)\) denote the number of these spheres. Then \(f(m,n)\) is given by the following formula_ \[f(m,n)=(m-1)^{n}-\binom{m}{1}(m-2)^{n}+\binom{m}{2}(m-3)^{n}+\ldots\\ +(-1)^{m-1}\binom{m}{m-3}2^{n}+(-1)^{m}\binom{m}{m-2}.
\tag{2.1}\] In particular, we have \(f(2,n)=1\), confirming our observation that \(\mathcal{S}tr(T,n)\) is a sphere in this case. Further values of \(f(-,-)\) are \[f(3,n) =2^{n}-3,\] \[f(4,n) =3^{n}-4\cdot 2^{n}+6.\] Table 2.1 gives the values of \(f(m,n)\) for small \(m\) and \(n\).

Figure 1. Examples of \(\mathcal{S}tr(T,n)\), when \(T\) is an edge.

It is interesting to think about the implications of Theorem 2.1 for the original problem of resource distribution. Clearly, the fact that \(\mathcal{S}tr(T,n)\) is connected, when \(n>m\), means that, starting from any distribution one can get to any other one by moving the resources. When \(n>m+1\), the space \(\mathcal{S}tr(T,n)\) is simply connected. This means that when two distributions are fixed, any two redistribution schemes from the first distribution to the second one are homotopic, i.e., there is a simultaneous redistribution scheme connecting the two. Even higher connectivity of \(\mathcal{S}tr(T,n)\) means the presence of these higher-dimensional redistribution schemes. Finally, the fact that the homotopy of \(\mathcal{S}tr(T,n)\) is not trivial in the top dimension means that in this dimension there are a number of fundamentally different higher-dimensional redistribution schemes. The number \(f(m,n)\) tells us, in a certain sense, just how many of these different schemes there are.

Let us make a few comments on the numerical side. First, by the Euler–Poincaré formula, Equation (1.1) could be used instead of Equation (2.1), although the latter is clearly simpler. Second, let \(\mathsf{SF}(m,n)\) denote the number of surjective functions from \([n]\) to \([m]\). We have \(\mathsf{SF}(m,n)=m!\genfrac{\{}{\}}{0.0pt}{}{n}{m}\). We can then rewrite Equation (2.1) as follows.

**Proposition 2.2**.: _For all \(n\geqslant m\geqslant 2\), we have_ \[f(m,n)=\mathsf{SF}(m-1,n)-\mathsf{SF}(m-2,n)+\cdots+(-1)^{m}\mathsf{SF}(1,n). \tag{2.2}\]

**Proof.** As a simple corollary of the principle of inclusion-exclusion we have the following well-known formula \[\mathsf{SF}(a,b)=a^{b}-\binom{a}{a-1}(a-1)^{b}+\cdots+(-1)^{a-1}\binom{a}{1}. \tag{2.3}\] Substituting the right hand side of Equation (2.3) into Equation (2.2), and using the Pascal triangle addition rule for the binomial coefficients, shows that it is equivalent to Equation (2.1).

Finally, for future reference, we record the following fact.

**Proposition 2.3**.: _We have_ \[\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^{m}=m!-1.\]

**Proof.** This follows from the following well-known polynomial identity \[m!=\sum_{k=0}^{m}\binom{m}{k}(-1)^{k}(x-k)^{m},\] where \(x\) is a variable, by substituting \(x:=m+1\). Proposition 2.3 shows that Equation (2.1) holds for \(m=n\).

\begin{table} \begin{tabular}{c|c c c c c c} \(n\setminus m\) & 2 & 3 & 4 & 5 & 6 & \(\ldots\) \\ \hline 2 & 1 & & & & & \\ 3 & 1 & 5 & & & & \\ 4 & 1 & 13 & 23 & & & \\ 5 & 1 & 29 & 121 & 119 & & \\ 6 & 1 & 61 & 479 & 1081 & 719 & \\ \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ldots\) & \(\ddots\) & \\ \end{tabular} \end{table} Table 2.1. The values of \(f(m,n)\).
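The counting formulas above are easy to check numerically; the following short script (ours, not part of the paper) computes \(f(m,n)\) via (2.1) and via Proposition 2.2 and reproduces the entries of Table 2.1.

```python
# Numerical check of formulas (2.1) and (2.2), using inclusion-exclusion (2.3)
# for the number of surjections SF(m, n).
from math import comb

def surjections(m, n):
    # number of surjective functions [n] -> [m]
    return sum((-1) ** j * comb(m, j) * (m - j) ** n for j in range(m + 1))

def f_via_2_1(m, n):
    # right-hand side of (2.1)
    return sum((-1) ** k * comb(m, k) * (m - 1 - k) ** n for k in range(m - 1))

def f_via_2_2(m, n):
    # alternating sum of Proposition 2.2
    return sum((-1) ** i * surjections(m - 1 - i, n) for i in range(m - 1))

for m in range(2, 7):
    for n in range(m, 9):
        assert f_via_2_1(m, n) == f_via_2_2(m, n)

print(f_via_2_1(4, 5), f_via_2_1(6, 6))   # 121 and 719, as in Table 2.1
```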
### Relaxing the occupancy requirement

Our proof of Theorem 2.1 proceeds by induction. As often happens in such situations, it is opportune to deal with a more general class of complexes. In our case, we relax the requirement that each node must have at least one allocated resource.

**Definition 2.4**.: _For any cell \(c\) of \(T^{n}\), we define_ \[\operatorname{supp}c=\{v\in V(T)\,|\,\exists k\in[n],\text{ such that }c_{k}=v\}\subseteq V(T).\] _Let \(S\) be an arbitrary subset of the vertex set of \(T\). We define \(\operatorname{\mathcal{S}tr}(T,S,n)\) to be the subcomplex of \(T^{n}\), consisting of all cells \(c\) whose support contains \(S\)._

Note that whenever \(b,c\in T^{n}\) are cubes, such that \(b\subseteq c\), we have \(\operatorname{supp}c\subseteq\operatorname{supp}b\). In other words, the support of a cell \(c\) either stays the same or increases when taking the boundary. This implies that the cubical complex \(\operatorname{\mathcal{S}tr}(T,S,n)\) is well-defined. Extreme values for \(S\) give us two special cases:

* for \(S=V(T)\), we have \(\operatorname{\mathcal{S}tr}(T,S,n)=\operatorname{\mathcal{S}tr}(T,n)\);
* for \(S=\emptyset\), we have \(\operatorname{\mathcal{S}tr}(T,S,n)=T^{n}\), which is contractible as a topological space.

Rather than attacking Theorem 2.1 directly, we shall prove the following, more general result.

**Theorem 2.5**.: _The complex \(\operatorname{\mathcal{S}tr}(T,S,n)\) is homotopy equivalent to a wedge of \((n-|S|)\)-dimensional spheres. The number of spheres is \(f(|S|,n)\)._

Clearly, Theorem 2.1 is a special case of Theorem 2.5, where \(S=V(T)\).

## 3. Homotopy colimits

### The diagrams of topological spaces

Our strategy to prove Theorem 2.5 is to decompose the spaces \(\operatorname{\mathcal{S}tr}(T,S,n)\) into simpler pieces and then to manipulate this decomposition, while preserving the homotopy type of the total space. Although there are different ways to formulate our argument, we find it handy to phrase it using the language of homotopy colimits. Let us introduce the corresponding terminology, see also [BoK, Hat, V]. We assume that the reader is familiar with basic category theory, see [ML, Mi]. Recall that, given a poset \(P\), we can always view \(P\) as a category so that

* the objects of that category are precisely the elements of \(P\);
* for any two elements \(p,q\in P\), such that \(p\geq q\), there exists a _unique_ morphism from \(p\) to \(q\).

The composition rule in this category is clearly uniquely defined, since there is at most one morphism between any two objects. Recall that \(\operatorname{\mathbf{Top}}\) denotes the category of topological spaces and continuous maps.

**Definition 3.1**.: _Assume we are given a poset \(P\), and we view it as a category. A functor from \(P\) to \(\operatorname{\mathbf{Top}}\) is called a **diagram of topological spaces over \(P\)**._

Specifically, a diagram \(\mathcal{D}\) is a collection of topological spaces \(\mathcal{D}(p)\), where \(p\in P\), together with continuous maps \(\mathcal{D}_{p,q}:\mathcal{D}(p)\to\mathcal{D}(q)\), where \(p>q\). These maps are subject to the condition \(\mathcal{D}_{q,r}\circ\mathcal{D}_{p,q}=\mathcal{D}_{p,r}\), whenever \(p>q>r\).

### Homotopy colimits of diagrams over \(P^{T}\)

Let \(T\) be an arbitrary tree with \(m\) vertices, where \(m\geqslant 2\). We assume that the vertices are indexed by the set \([m]=\{1,\ldots,m\}\). A poset \(P^{T}\) is defined as follows:

* the elements of \(P^{T}\) are indexed by the vertices and the edges of \(T\);
* the partial order on \(P^{T}\) is given by saying that each edge is larger than its adjacent vertices.

This poset has \(2m-1\) elements.
The elements indexed by the vertices are minimal, while the elements indexed by the edges are maximal, and each one is larger than exactly \(2\) minimal elements. A diagram \(\mathcal{D}\) of topological spaces over \(P^{T}\) is then given by the following data, subject to no further conditions:

* spaces \(\mathcal{D}(v)\) for all vertices of \(T\);
* spaces \(\mathcal{D}(e)\) for all edges of \(T\);
* continuous maps \(\mathcal{D}_{e,v}:\mathcal{D}(e)\to\mathcal{D}(v)\), whenever \(v\) is a vertex adjacent to the edge \(e\).

**Definition 3.2**.: _Assume \(\mathcal{D}\) is a diagram of topological spaces over a poset \(P^{T}\). We define the_ **homotopy colimit** _of \(\mathcal{D}\), denoted \(\mathtt{hocolim}\mathcal{D}\), as the quotient space_ \[\mathtt{hocolim}\mathcal{D}=\left(\coprod_{v\in V(T)}\mathcal{D}(v)\,\sqcup\coprod_{e\in E(T)}(\mathcal{D}(e)\times[0,1])\right)/\sim,\] _where the equivalence relation \(\sim\) is generated by \((x,0)\sim\mathcal{D}_{e,v}(x)\), and \((x,1)\sim\mathcal{D}_{e,w}(x)\), whenever \(x\in\mathcal{D}(e)\), \(e=(v,w)\), \(v<w\)._

Let us mention that the notion of homotopy colimit can be defined more generally, including homotopy colimits of diagrams of topological spaces over arbitrary posets. Here, we restrict ourselves to Definition 3.2, which will be sufficient for our purposes.

### Homotopy independence of the homotopy colimits of diagrams of CW complexes over \(P^{T}\)

From now on, we assume that the spaces \(\mathcal{D}(p)\) are CW complexes, for all \(p\in P^{T}\), and the maps \(\mathcal{D}_{e,v}\) are cellular. The next proposition says that changing these maps up to homotopy does not change the homotopy type of the homotopy colimit.

**Proposition 3.3**.: _Assume \(\mathcal{D}\) and \(\mathcal{E}\) are diagrams of CW complexes over \(P^{T}\), such that_

1. \(\mathcal{D}(p)=\mathcal{E}(p)\)_, for all_ \(p\in P^{T}\)_;_
2. _the maps_ \(\mathcal{D}_{e,v}\) _and_ \(\mathcal{E}_{e,v}\) _are homotopic, whenever_ \(e\) _is an edge of_ \(T\)_, and_ \(v\) _is a vertex adjacent to_ \(e\)_._

_Then \(\mathtt{hocolim}\mathcal{D}\) and \(\mathtt{hocolim}\mathcal{E}\) are homotopy equivalent._

**Proof.** Since \(T\) is finite, it is enough to consider the case where \(\mathcal{D}_{e,v}\) and \(\mathcal{E}_{e,v}\) coincide, for all but one single instance of an edge \(e\) and a vertex \(v\). Decompose the tree \(T\) into a union of trees \(T^{\prime}\) and \(T^{\prime\prime}\), such that the intersection of \(T^{\prime}\) and \(T^{\prime\prime}\) is the vertex \(v\), \(v\) is a leaf of \(T^{\prime}\), and \(T^{\prime}\) contains the edge \(e\), see Figure 3.1. Let \(\mathcal{D}^{\prime}\) be the diagram of CW complexes on \(P^{T^{\prime}}\), which is a restriction of \(\mathcal{D}\) with a slight change at \(v\). Specifically, it is defined as follows:

* for any vertex \(w\in V(T^{\prime})\), we have \(\mathcal{D}^{\prime}(w):=\left\{\begin{array}{ll}\mathcal{D}(w),&\text{ if }w\neq v;\\ \mathcal{D}(e),&\text{ otherwise.}\end{array}\right.\)
* \(\mathcal{D}^{\prime}(r)=\mathcal{D}(r)\), for all \(r\in E(T^{\prime})\);
* for any edge \(r\in E(T^{\prime})\) and an adjacent vertex \(w\), we have \[\mathcal{D}^{\prime}_{r,w}=\left\{\begin{aligned} \mathcal{D}_{r,w},& \quad\text{if }(r,w)\neq(e,v);\\ \operatorname{id}_{\mathcal{D}(e)},&\quad\text{otherwise.} \end{aligned}\right.\]

Let \(\mathcal{D}^{\prime\prime}\) be the restriction of \(\mathcal{D}\) to \(P^{T^{\prime\prime}}\).
Set \(X:=\operatorname{\mathtt{hocolim}}\mathcal{D}^{\prime}\), \(Y:=\operatorname{\mathtt{hocolim}}\mathcal{D}^{\prime\prime}\), \(A:=\mathcal{D}^{\prime}(v)=\mathcal{D}(e)\). Note that \(X\) and \(Y\) are CW complexes, and \(A\) is a CW subcomplex of \(X\). Set \(f:=\mathcal{D}_{e,v}\) and \(g:=\mathcal{E}_{e,v}\). Clearly, \(\operatorname{\mathtt{hocolim}}\mathcal{D}\) is obtained from \(Y\) by attaching \(X\) over \(f\), whereas \(\operatorname{\mathtt{hocolim}}\mathcal{E}\) is obtained from \(Y\) by attaching \(X\) over \(g\). We assumed that \(f\) is homotopic to \(g\). It is then a general fact, see e.g. [Hat], that the homotopy type of the adjunction space does not change when the attachment map is replaced by a homotopic one. This implies that \(\operatorname{\mathtt{hocolim}}\mathcal{D}\) and \(\operatorname{\mathtt{hocolim}}\mathcal{E}\) are homotopy equivalent. ### Special homotopy colimits As above, let \(T\) be an arbitrary tree with at least \(2\) vertices. Let us fix a nonempty subset \(S\subseteq V(T)\). Assume we have a diagram \(\mathcal{D}\) of CW complexes over \(P^{T}\) satisfying the following conditions: * \(\mathcal{D}(v)\) are single points, for all \(v\in S\); * \(\mathcal{D}(e)=X\), for all \(e\in E(T)\), and \(\mathcal{D}(v)=X\), for any \(v\notin S\), where \(X\) is some fixed CW complex; * the maps \(\mathcal{D}_{e,v}\) are identity maps, for all \(v\notin S\). **Proposition 3.4**.: _Under the conditions above, the homotopy colimit of \(\mathcal{D}\) is homotopy equivalent to the wedge of \(|S|-1\) copies of \(\operatorname{\mathtt{susp}}X\)._ **Proof.** The proof is by induction on the number of vertices of \(T\). The induction base is when \(m=2\). We have two cases. **Case 1.** If \(|S|=1\), then \(\operatorname{\mathtt{hocolim}}\mathcal{D}\) is a cone over \(X\), hence contractible. **Case 2.** If \(|S|=2\), then \(\operatorname{\mathtt{hocolim}}\mathcal{D}\) is obtained by taking a cylinder over \(X\) and shrinking each of the end copies of \(X\) to a point. This is precisely the suspension space \(\operatorname{\mathtt{susp}}X\). From now on we can assume \(m\geqslant 3\). We break our argument into the following cases. **Case 1.** Assume there exists an internal vertex \(v\in T\), such that \(v\in S\). Let \(e_{1},\dots,e_{k}\) be the edges adjacent to \(v\), \(k\geqslant 2\). Figure 3.1. Decomposition of the tree \(T\). Cutting \(T\) at \(v\) will decompose \(T\) into the trees \(T_{1},\ldots,T_{k}\), \(v\) is a leaf in each of them, and \(e_{i}\) is adjacent to \(v\) in \(T_{i}\), for all \(i=1,\ldots,k\); see Figure 3.2.
Since each \(T_{i}\) has fewer vertices than \(T\), we know by the induction assumption that \(\operatorname{\mathsf{hocolim}}\mathcal{D}_{i}\) is homotopy equivalent to a wedge of \(|S_{i}|-1\) copies of \(\operatorname{\mathsf{susp}}X\). Accordingly, (3.1) implies that \(\operatorname{\mathsf{hocolim}}\mathcal{D}\) is homotopy equivalent to a wedge of \(|S|-1\) copies of \(\operatorname{\mathsf{susp}}X\). **Case 2.** All the vertices in \(S\) are leaves of \(T\), and there exists a further leaf \(w\notin S\). Assume \(w\) is connected to the vertex \(u\). Since \(m\geqslant 3\) and all the vertices in \(S\) are leaves, we must have \(u\notin S\). Let \(T^{\prime}\) be the tree obtained from \(T\) by deleting \(w\) and the adjacent edge. Let \(\mathcal{D}^{\prime}\) be the restriction of \(\mathcal{D}\) to \(T^{\prime}\). By the induction assumption, \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) is homotopy equivalent to a wedge of \(|S|-1\) copies of \(\operatorname{\mathsf{susp}}X\). The space \(\operatorname{\mathsf{hocolim}}\mathcal{D}\) is obtained from \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) by attaching a cylinder with base \(X\) at one of its ends. Clearly, \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) is a strong deformation retract of \(\operatorname{\mathsf{hocolim}}\mathcal{D}\), so the latter is also homotopy equivalent to a wedge of \(|S|-1\) copies of \(\operatorname{\mathsf{susp}}X\). **Case 3.** The set \(S\) is precisely the set of all leaves of \(T\). Since \(m\geqslant 3\), we have at least \(3\) leaves. Fix \(v\in S\). Say \(v\) is connected to \(w\) by an edge. We have \(w\notin S\). Let \(T^{\prime}\) be the tree obtained from \(T\) by deleting \(v\), and let \(\mathcal{D}^{\prime}\) be the restriction of \(\mathcal{D}\) to \(T^{\prime}\). The topological space \(\operatorname{\mathsf{hocolim}}\mathcal{D}\) is obtained from \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) by attaching a cone over \(X=\mathcal{D}(w)\). Let \(u\in S\) be any other leaf of \(T\), \(u\neq v\). There is a unique path inside \(T^{\prime}\) connecting \(w\) with \(u\). The homotopy colimit of the restriction of \(\mathcal{D}\) to that path is a cone with apex at \(\mathcal{D}(u)\) and base at \(\mathcal{D}(w)\). This cone lies inside \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\), therefore the inclusion map \(\mathcal{D}(w)\hookrightarrow\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) is null-homotopic. It follows that, up to homotopy equivalence, attaching a cone over \(\mathcal{D}(w)\) to \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) is the same as wedging \(\operatorname{\mathsf{hocolim}}\mathcal{D}^{\prime}\) with \(\operatorname{\mathsf{susp}}X\). The result now follows by induction. Figure 3.2. Cutting the tree \(T\) at \(v\). Let us now consider slightly more general diagrams. These satisfy the same conditions outside of \(S\); however, for any \(v\in S\), the spaces \(\mathcal{D}(v)\) are now arbitrary connected CW complexes, and each \(\mathcal{D}_{e,v}\) maps everything to some point in \(\mathcal{D}(v)\), whenever \(e\) is an adjacent edge. In this case, Proposition 3.4 can be generalized as follows.
**Proposition 3.5**.: _Under the conditions above, the homotopy colimit of \(\mathcal{D}\) is homotopy equivalent to the wedge_ \[\vee_{v\in S}\mathcal{D}(v)\vee_{\Omega}\operatorname{\mathtt{susp}}X,\] _where \(\Omega\) is an index set of cardinality \(|S|-1\)._ To apply this machinery to Stirling complexes, consider the following diagram \(\mathcal{D}\) over \(P^{T}\): for every vertex \(v\in V(T)\), let \(\mathcal{D}(v)\) be the subcomplex of \(\mathcal{S}\mathrm{tr}(T,S,n)\) consisting of all cells \(c\) with \(c_{n}=v\); for every edge \(e\in E(T)\), set \(\mathcal{D}(e):=\mathcal{S}\mathrm{tr}(T,S,n-1)\); and, whenever \(v\) is adjacent to \(e\), let \(\mathcal{D}_{e,v}:\mathcal{D}(e)\to\mathcal{D}(v)\) be the map induced by setting \(c_{n}:=v\). **Proposition 4.1**.: _The homotopy colimit \(\operatorname{\mathtt{hocolim}}\mathcal{D}\) is homeomorphic to the cubical complex \(\mathcal{S}\mathrm{tr}(T,S,n)\)._ **Proof.** Whenever \(e=(v,w)\) is an edge of \(T\), let \(B_{e}\) denote the subcomplex of \(\mathcal{S}\mathrm{tr}(T,S,n)\) consisting of all cells \(c\in\mathcal{S}\mathrm{tr}(T,S,n)\) such that one of the following holds: 1. \(c_{n}=e\); 2. \(c_{n}=v\), and there exists \(1\leqslant k\leqslant n-1\), such that \(c_{k}=v\); 3. \(c_{n}=w\), and there exists \(1\leqslant k\leqslant n-1\), such that \(c_{k}=w\). It is easy to see that this set of cells is closed under taking the boundary, hence the subcomplex \(B_{e}\) is well-defined. Furthermore, the complex \(\mathcal{S}\mathrm{tr}(T,S,n)\) is the union of the subcomplexes \(\mathcal{D}(v)\), for \(v\in V(T)\), and \(B_{e}\), for \(e\in E(T)\). To see this, just take any cube \((c_{1},\ldots,c_{n})\) and sort it according to the value of \(c_{n}\). Recording the value of \(c_{n}\) separately, we can see that, as a cubical complex, each \(B_{e}\) is isomorphic to the direct product of \(\mathcal{S}\mathrm{tr}(T,S,n-1)\) with the closed interval \([0,1]\). This can be seen as a cylinder with base \(\mathcal{S}\mathrm{tr}(T,S,n-1)\). The entire complex \(\mathcal{S}\mathrm{tr}(T,S,n)\) is obtained by taking the disjoint union of \(\mathcal{D}(v)\), for \(v\in V(T)\), and connecting them by these cylinders. For each cylinder \(B_{e}\), \(e=(v,w)\), its bases are identified with corresponding subcomplexes of \(\mathcal{D}(v)\) and \(\mathcal{D}(w)\) by assigning \(c_{n}\coloneqq v\) or \(c_{n}\coloneqq w\). These are precisely the maps \(\mathcal{D}_{e,v}\) and \(\mathcal{D}_{e,w}\). Comparing this gluing procedure with the definition of \(\operatorname{\mathtt{hocolim}}\mathcal{D}\), we see that we obtain a homeomorphic space. ### The proof of the main theorem We are now ready to show our main result. **Proof of Theorem 2.5.** First, when \(|S|=n\), the complex \(\mathcal{S}\mathrm{tr}(T,S,n)\) is a disjoint union of \(n!\) points. This can be viewed as a wedge of \(n!-1\) copies of a \(0\)-dimensional sphere, so the result follows from Proposition 2.3. Assume from now on that \(n\geqslant|S|+1\). By Proposition 4.1 we can replace \(\mathcal{S}\mathrm{tr}(T,S,n)\) by \(\operatorname{\mathtt{hocolim}}\mathcal{D}\). Consider now a map \(\mathcal{D}_{e,v}:\mathcal{D}(e)\to\mathcal{D}(v)\).
By induction, we know that \(\mathcal{D}(e)=\mathcal{S}\mathrm{tr}(T,S,n-1)\) is homotopy equivalent to a wedge of spheres of dimension \(n-1-|S|\). We make \(2\) observations. 1. If \(v\notin S\), the cubical complex \(\mathcal{D}(v)\) is isomorphic to \(\mathcal{S}\mathrm{tr}(T,S,n-1)\), and the map \(\mathcal{D}_{e,v}\) is the identity map. 2. If \(v\in S\), the cubical complex \(\mathcal{D}(v)\) is isomorphic to \(\mathcal{S}\mathrm{tr}(T,S\setminus v,n-1)\). This is because we know that \(c_{n}=v\), so there is no need to request that \(v\) is occupied by some other resource. By the induction assumption, the space \(\mathcal{S}\mathrm{tr}(T,S\setminus v,n-1)\) is homotopy equivalent to a wedge of spheres of dimension \(n-1-(|S|-1)=n-|S|\). In particular, it is \((n-|S|-1)\)-connected. Therefore, the map \(\mathcal{D}_{e,v}\) is homotopic to a trivial map, which takes everything to a point. We now apply Proposition 3.3 to shift our consideration to the diagram \(\mathcal{D}^{\prime}\), which is obtained from \(\mathcal{D}\) by replacing the maps \(\mathcal{D}_{e,v}\) by trivial ones, whenever \(v\in S\). This diagram has the same homotopy type as \(\mathcal{D}\). On the other hand, it now satisfies the conditions of Proposition 3.5, where the connectivity of the spaces \(\mathcal{D}(v)\) is a consequence of the fact that \(n\geqslant|S|+1\). It follows from that proposition that \[\operatorname{\mathtt{hocolim}}\mathcal{D}\simeq\vee_{v\in S}\mathcal{D}(v)\vee_{\Omega}\operatorname{\mathtt{susp}}\mathcal{S}\mathrm{tr}(T,S,n-1),\] where \(|\Omega|=|S|-1\). Counting spheres on both sides, we obtain the recursive formula \[f(|S|,n)=(|S|-1)f(|S|,n-1)+|S|f(|S|-1,n-1).\] The validity of the formula Equation (2.1) now follows from Proposition 4.3. **Remark 4.2**.: _After this paper was submitted for publication, a shorter proof of Theorem 2.5 was found by one of the referees. It is included in the appendix._ **Proposition 4.3**.: _Let \(\Gamma=\{(m,n)\in\mathbb{Z}\times\mathbb{Z}\,|\,n\geqslant m\geqslant 2\}\). Assume we have a function \(f:\Gamma\to\mathbb{Z}\), which satisfies the following:_ 1. _for all_ \(n>m\geqslant 3\) _we have the recursive formula_ (4.1) \[f(m,n)=(m-1)f(m,n-1)+mf(m-1,n-1);\] 2. _we have the boundary conditions_ \(f(2,n)=1\)_,_ \(f(m,m)=m!-1\)_._ _Then for all \((m,n)\in\Gamma\), the value \(f(m,n)\) is given by Equation (2.1), which we rewrite as_ \[f(m,n)=\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^{n}. \tag{4.2}\] Proof.: Clearly, the recursive rule Equation (4.1) together with the boundary conditions defines the values of the function \(f(-,-)\) uniquely. Therefore, to show that \(f\) is given by the formula Equation (4.2) we just need to check that this formula satisfies our boundary conditions and the recursion. Substituting \(m=2\) into Equation (4.2) immediately yields \(1\) on the right hand side, as there is only one summand, with \(\alpha=1\). The case \(m=n\) follows from Proposition 2.3. To show that Equation (2.1) satisfies the recursion Equation (4.1) we need to check that \[\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^{n}=(m-1)\sum_{\alpha=1}^{m-1}(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha^{n-1}+m\sum_{\alpha=1}^{m-2}(-1)^{m+\alpha}\binom{m-1}{\alpha+1}\alpha^{n-1}. \tag{4.3}\] We do that simply by comparing the coefficients of \(\alpha^{n-1}\) on each side of Equation (4.3). For \(\alpha=m-1\), the coefficient on each side is \(m-1\).
For \(\alpha=1,\ldots,m-2\), we need to show that \[(-1)^{m+\alpha+1}\binom{m}{\alpha+1}\alpha=(m-1)(-1)^{m+\alpha+1}\binom{m}{\alpha+1}+m(-1)^{m+\alpha}\binom{m-1}{\alpha+1}.\] This follows from the formula \[(m-\alpha-1)\binom{m}{\alpha+1}=m\binom{m-1}{\alpha+1}.\qed\] We finish with an open question. **Open Question 1**.: _Let \(T\) be a tree with a single internal vertex of valency \(r\), where \(r\geqslant 2\), and let \(n\) be any integer, \(n\geqslant r+1\). The symmetric group \(S_{r}\) acts on \(T\) by permuting its \(r\) leaves. This induces an \(S_{r}\)-action on the Stirling complex \(\operatorname{\mathsf{Str}}(T,n)\), and hence also an \(S_{r}\)-action on \(H_{n-r-1}(\operatorname{\mathsf{Str}}(T,n);\mathbb{R})\). It would be interesting to decompose this representation of \(S_{r}\) into irreducible ones._ ## Appendix: Stirling Complexes via the Wedge Lemma by Roy Meshulam 1 Footnote 1: Department of Mathematics, Technion, Haifa 32000, Israel. e-mail: [email protected]. Supported by ISF grant 686/20. In this appendix we prove a generalization of Theorem 2.5. Let \(X=X_{0}\) be a finite simplicial complex, and let \(S=\{X_{i}\}_{i\in[m]}\) be a family of subcomplexes of \(X\). For \(n\geq m\) let \[A_{m,n}=\{(i_{1},\ldots,i_{n})\in(\{0\}\cup[m])^{n}:\{i_{1},\ldots,i_{n}\}\supset[m]\}.\] Slightly extending the setup considered in Definition 2.4, we define the _Stirling Complex_ associated with the triple \((X,S,n)\) by \[\mathcal{S}\mathrm{tr}(X,S,n)=\bigcup_{(i_{1},\ldots,i_{n})\in A_{m,n}}X_{i_{1}}\times\cdots\times X_{i_{n}}.\] Let \(S^{k}\) denote the \(k\)-sphere. Theorem 2.5 asserts that if \(T\) is a finite tree and \(S\) is a set of \(m\geq 2\) distinct vertices of \(T\), then \[\mathcal{S}\mathrm{tr}(T,S,n)\simeq\bigvee_{i=1}^{f(m,n)}S^{n-m}.\] Here we give a simple proof of a generalization of Theorem 2.5 that perhaps clarifies why the homotopy type of \(\mathcal{S}\mathrm{tr}(T,S,n)\) does not depend on the structure of \(T\). **Theorem A.1**.: _Let \(X\) be a finite contractible complex and let \(S=\{X_{i}\}_{i=1}^{m}\) be a family of \(m\geq 2\) pairwise disjoint contractible subcomplexes of \(X\). Then_ \[\mathcal{S}\mathrm{tr}(X,S,n)\simeq\bigvee_{i=1}^{f(m,n)}S^{n-m}.\] The main tool in the proof of Theorem A.1 is the Wedge Lemma of Ziegler and Živaljević (Lemma 1.8 in [ZZ]). The version below appears in [HRW]. For a poset \((P,\prec)\) and \(p\in P\) let \(P_{\prec p}=\{q\in P:q\prec p\}\). Let \(\Delta(P)\) denote the order complex of \(P\). Let \(Y\) be a regular CW-complex and let \(\{Y_{i}\}_{i=1}^{m}\) be subcomplexes of \(Y\) such that \(\bigcup_{i=1}^{m}Y_{i}=Y\). Let \((P,\prec)\) be the poset whose elements index all distinct partial intersections \(\bigcap_{i\in J}Y_{i}\), where \(\emptyset\neq J\subset[m]\). Let \(U_{p}\) denote the partial intersection indexed by \(p\in P\), and let \(\prec\) denote reverse inclusion, i.e. \(p\prec q\) if \(U_{q}\subsetneq U_{p}\). **Wedge Lemma [ZZ, HRW].** Suppose that for any \(p\in P\) there exists a \(c_{p}\in U_{p}\) such that the inclusion \(\bigcup_{q\succ p}U_{q}\hookrightarrow U_{p}\) is homotopic to the constant map to \(c_{p}\). Then (A.1) \[Y\simeq\bigvee_{p\in P}\Delta(P_{\prec p})*U_{p}.\] **Proof of Theorem A.1.** If \(m=n\) then \(\mathcal{S}\mathrm{tr}(X,S,n)\) is a union of \(m!\) disjoint contractible sets and hence homotopy equivalent to \(\bigvee_{i=1}^{m!-1}S^{0}\). Suppose \(n>m\geq 2\).
In view of the recursion (4.1), it suffices, as in the proof of Theorem 2.5, to establish the following homotopy decomposition: (A.2) \[\mathcal{S}\mathrm{tr}(X,S,n)\simeq\bigvee_{i=1}^{m}\mathcal{S}\mathrm{tr}(X,S\setminus\{X_{i}\},n-1)\vee\bigvee_{i=1}^{m-1}S^{0}*\mathcal{S}\mathrm{tr}(X,S,n-1).\] We proceed with the proof of (A.2). For \(1\leq i\leq m\) let \[Y_{i}=\big(X_{i}\times\mathcal{S}\mathrm{tr}(X,S\setminus\{X_{i}\},n-1)\big)\cup\big(X\times\mathcal{S}\mathrm{tr}(X,S,n-1)\big).\] Then \(\bigcup_{i=1}^{m}Y_{i}=\mathcal{S}\mathrm{tr}(X,S,n)\). Next note that (A.3) \[\mathcal{S}\mathrm{tr}(X,S,n-1)\subset\mathcal{S}\mathrm{tr}(X,S\setminus\{X_{i}\},n-1).\] As \(X_{i}\subset X\) are both contractible, it follows that \(X_{i}\) is a deformation retract of \(X\). Together with (A.3) it follows that \(X_{i}\times\mathcal{S}\mathrm{tr}(X,S\setminus\{X_{i}\},n-1)\) is a deformation retract of \(Y_{i}\). Therefore (A.4) \[Y_{i}\simeq\mathcal{S}\mathrm{tr}(X,S\setminus\{X_{i}\},n-1).\] Let \(Z=X\times\mathcal{S}\mathrm{tr}(X,S,n-1)\). Then for any \(1\leq i\neq j\leq m\) (A.5) \[Y_{i}\cap Y_{j}=\bigcap_{k=1}^{m}Y_{k}=Z\simeq\mathcal{S}\mathrm{tr}(X,S,n-1).\] Eq. (A.5) implies that the intersection poset \((P,\prec)\) of the cover \(\{Y_{i}\}_{i=1}^{m}\) is \(P=[m]\cup\{\widehat{1}\}\), where \(i\in[m]\) represents \(Y_{i}\), \(\widehat{1}\) represents \(Z\), \([m]\) is an antichain and \(i\prec\widehat{1}\) for all \(i\in[m]\). Note that \(\Delta(P_{\prec i})=\emptyset\) for all \(i\in[m]\) and \(\Delta(P_{\prec\widehat{1}})\) is the discrete space \([m]\). By induction, \(Y_{i}\) is homotopy equivalent to a wedge of \((n-m)\)-spheres and \(Z\) is homotopy equivalent to a wedge of \((n-m-1)\)-spheres. Hence the inclusion \(Z\hookrightarrow Y_{i}\) is null homotopic. Using the Wedge Lemma together with (A.4) and (A.5), it follows that \[\begin{aligned}\mathcal{S}\mathrm{tr}(X,S,n)&\simeq\left(\bigvee_{i\in[m]}\Delta(P_{\prec i})*Y_{i}\right)\vee\big(\Delta(P_{\prec\widehat{1}})*Z\big)=\left(\bigvee_{i\in[m]}Y_{i}\right)\vee([m]*Z)\\ &\simeq\bigvee_{i\in[m]}\mathcal{S}\mathrm{tr}(X,S\setminus\{X_{i}\},n-1)\vee\bigvee_{i=1}^{m-1}S^{0}*\mathcal{S}\mathrm{tr}(X,S,n-1).\end{aligned}\] This completes the proof of (A.2) and hence of Theorem A.1.
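As an aside for the computationally inclined reader, the closed formula and the recursion can also be checked numerically. The short Python script below is not part of either proof; it simply verifies, for small values of \(m\) and \(n\), that the closed form of Equation (4.2) (with exponent \(n\), as used in Equation (4.3)) satisfies the recursion (4.1) and the boundary conditions \(f(2,n)=1\) and \(f(m,m)=m!-1\).

```python
from math import comb, factorial

def f_closed(m, n):
    # Closed form: sum_{a=1}^{m-1} (-1)^(m+a+1) * C(m, a+1) * a^n
    return sum((-1) ** (m + a + 1) * comb(m, a + 1) * a ** n for a in range(1, m))

# Boundary conditions: f(2, n) = 1 and f(m, m) = m! - 1
assert all(f_closed(2, n) == 1 for n in range(2, 10))
assert all(f_closed(m, m) == factorial(m) - 1 for m in range(2, 8))

# Recursion (4.1): f(m, n) = (m - 1) f(m, n - 1) + m f(m - 1, n - 1)
for m in range(3, 8):
    for n in range(m + 1, 12):
        assert f_closed(m, n) == (m - 1) * f_closed(m, n - 1) + m * f_closed(m - 1, n - 1)

print("recursion and boundary conditions verified")
```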
2309.08062
Using quantitative magneto-optical imaging to reveal why the ac susceptibility of superconducting films is history-independent
Measurements of the temperature-dependent ac magnetic susceptibility of superconducting films reveal reversible responses, i.e., irrespective of the magnetic and thermal history of the sample. This experimental fact is observed even in the presence of stochastic and certainly irreversible magnetic flux avalanches which, in principle, should randomly affect the results. In this work, we explain such an apparent contradiction by exploring the spatial resolution of magneto-optical imaging. To achieve this, we successfully compare standard frequency-independent first harmonic ac magnetic susceptibility results for a superconducting thin film with those obtained by ac-emulating magneto-optical imaging (acMOI). A quantitative analysis also provides information regarding flux avalanches, reveals the presence of a vortex-antivortex annihilation zone in the region in which a smooth flux front interacts with pre-established avalanches, and demonstrates that the major impact on the flux distribution within the superconductor happens during the first ac cycle. Our results establish acMOI as a reliable approach for studying frequency-independent ac field effects in superconducting thin films while capturing local aspects of flux dynamics, otherwise inaccessible via global magnetometry techniques.
Davi A. D. Chaves, J. C. Corsaletti Filho, E. A. Abbey, D. Bosworth, Z. H. Barber, M. G. Blamire, T. H. Johansen, A. V. Silhanek, W. A. Ortiz, M. Motta
2023-09-14T23:23:25Z
http://arxiv.org/abs/2309.08062v1
Using quantitative magneto-optical imaging to reveal why the ac susceptibility of superconducting films is history-independent ###### Abstract Measurements of the temperature-dependent ac magnetic susceptibility of superconducting films reveal reversible responses, i.e., irrespective of the magnetic and thermal history of the sample. This experimental fact is observed even in the presence of stochastic and certainly irreversible magnetic flux avalanches which, in principle, should randomly affect the results. In this work, we explain such an apparent contradiction by exploring the spatial resolution of magneto-optical imaging. To achieve this, we successfully compare standard frequency-independent first harmonic ac magnetic susceptibility results for a superconducting thin film with those obtained by ac-emulating magneto-optical imaging (acMOI). A quantitative analysis also provides information regarding flux avalanches, reveals the presence of a vortex-antivortex annihilation zone in the region in which a smooth flux front interacts with pre-established avalanches, and demonstrates that the major impact on the flux distribution within the superconductor happens during the first ac cycle. Our results establish acMOI as a reliable approach for studying frequency-independent ac field effects in superconducting thin films while capturing local aspects of flux dynamics, otherwise inaccessible via global magnetometry techniques. ## I Introduction The last years have seen superconducting materials be positioned as a vital part of an ongoing quantum revolution [1; 2; 3; 4; 5] and serving as a fertile playground for the development of several nanoscale technological applications [6; 7; 8; 9; 10; 11; 12; 13]. Particularly, understanding, controlling, and exploring the interaction of different superconductors with distinct properties and structures with a low-frequency ac magnetic field has been an active research topic [14; 15; 16; 17; 18; 19; 20; 21; 22]. In a type-II superconductor, it may be energetically favorable for the sample to allow flux penetration in the form of vortices [23; 24]. For a given direction of the applied magnetic field, vortices may either be of positive or negative polarity, the latter being commonly referred to as antivortices. Whereas vortices with the same polarity interact repulsively [25], vortices and antivortices attract each other, which eventually leads to mutual annihilation when two such entities come in close proximity [26; 27]. On a mesoscopic scale, ordinary flux distributions in type-II specimens are described by critical state models [28; 29]. In this case, the magnetic field gradually penetrates toward the center of the sample as a smooth flux front originating from the edges of the material, a consequence of vortex motion being hampered by pinning centers [30; 31]. The exact distribution profile depends on sample geometry and its magnetic history [32; 33; 34]. Moreover, the depth of the flux front penetration is tied to the sample critical current density \(J_{c}\), as further penetration indicates a lower magnetic shielding capability [32]. In short, the actual flux distribution usually depends on external thermodynamic parameters such as the temperature, \(T\), and the applied magnetic field, \(H\), i.e., \(J_{c}=J_{c}(T,H)\). The inevitable vortex displacement during flux penetration represents an energy dissipating process [24]. 
Then, if the superconductor is not able to swiftly assimilate the heat generated by moving vortices in order to accommodate for further vortex movement, a thermomagnetic instability may be triggered. In a given interval of magnetic fields and temperatures, these events lead to the onset of a positive feedback process in which superconducting properties are locally suppressed, allowing for abrupt flux penetration known as flux avalanches [35; 36; 37]. In thin films, flux avalanches take on remarkable dendritic patterns as they propagate through the material with velocities up to the scale of hundreds of km/s [38; 39; 40; 41; 42; 43; 44; 45]. The abrupt flux penetration during a flux avalanche event results in well-known flux jumps in the global magnetization hysteresis loop of superconductors [31; 46; 47; 48; 49]. Another signature of avalanches in the magnetic properties of superconducting materials is a paramagnetic reentrance observed in the temperature dependence of the first harmonic ac magnetic susceptibility, \(\chi^{\prime}_{\rm ac}(T)+i\chi^{\prime\prime}_{\rm ac}(T)\) [50; 51; 52]. The in-phase component \(\chi^{\prime}_{\rm ac}\) is related to an inductive response and measures the superconductor's ability to shield magnetic flux [53; 54]. The so-called paramagnetic reentrance is observed as a decrease in \(|\chi^{\prime}_{\rm ac}|\) for temperatures lower than the superconducting critical temperature \(T_{c}\). In turn, the out-of-phase component \(\chi^{\prime\prime}_{\rm ac}\) gauges the energy losses related to flux motion in type-II superconductors [53; 54]. Hence, an increase in \(\chi^{\prime\prime}_{\rm ac}\) accompanying the decrease in \(|\chi^{\prime}_{\rm ac}|\) reveals the occurrence of flux avalanches. Although ac susceptibility studies are a ubiquitous approach for characterizing the magnetic dynamics of superconducting systems [19; 53; 54; 55; 56; 57; 58; 59; 60; 61], a technique with the micrometric spatial resolution of magneto-optical imaging has remained little explored in this effort. In this work, we investigate the effects of ac magnetic fields in a 100-nm-thick amorphous MoSi (a-MoSi) film by employing ac-emulating magneto-optical imaging (acMOI). Comparing the results with \(\chi_{\rm ac}(T)\) measurements obtained by conventional global ac magnetometry, we demonstrate that acMOI is a reliable technique for the quantitative study of the ac magnetic susceptibility of superconductors. Moreover, as magneto-optical imaging allows us to spatially resolve individual flux avalanches, acMOI is used to explain an observed thermomagnetic history-independent paramagnetic reentrance in \(\chi_{\rm ac}(T)\) for the a-MoSi film. Quantitative acMOI also allows us to visualize how an incoming smooth flux front overrides the flux distribution of pre-established avalanches, revealing a vortex-antivortex annihilation zone separating regions permeated by magnetic flux with opposing polarities.
This paper is organized as follows: Section II details the experimental methods used to fabricate and investigate the a-MoSi thin film; Section III describes typical \(\chi_{\rm ac}(T)\) measurements conducted in a standard ac magnetometer, demonstrating the history-independent paramagnetic reentrance in the investigated sample; Section IV qualitatively explores the nature of ac susceptibility measurements using acMOI both in the smooth penetration and avalanche regimes and quantifies the magnetic imprint of individual avalanches; Section V demonstrates how acMOI may be used to quantitatively gauge \(\chi_{\rm ac}(T)\) for superconducting samples; Section VI further explores the spatial resolution of MOI to investigate how an incoming flux front interacts with an already established avalanche region; finally, Section VII summarizes the results and outlines perspectives on the use of acMOI. ## II Experimental details A square a-MoSi film with lateral size of 2.5 mm and thickness of 100 nm was deposited onto a silicon substrate at 77 K by dc magnetron sputtering at a pressure of 1.2 Pa in dynamical equilibrium under argon flow, similarly to the protocol described in Ref. [62]. Amorphous MoSi films typically present critical temperatures above 7 K, low intrinsic pinning, and correspondingly low critical current densities [63; 64]. Application-wise, a-MoSi is a prominent material choice for superconducting nanowire single-photon detectors [65; 66; 67]. The complex ac magnetic susceptibility of the a-MoSi sample was investigated as a function of temperature using standard global magnetometry which captures the magnetic behavior of the sample as a whole. A SQUID-based magnetometer model MPMS-5S from Quantum Design (MPMS) was employed to measure both the in-phase (\(\chi^{\prime}_{\rm ac}\)) and out-of-phase (\(\chi^{\prime\prime}_{\rm ac}\)) components of \(\chi_{\rm ac}(T)\). Probe ac magnetic fields of frequencies \(f=0.05\) Hz or 1 Hz and amplitudes \(h\) varying from 0.1 Oe to 3.8 Oe were applied perpendicularly to the plane of the film during the experiments. Before measuring, the magnetic history of the dc field-generating superconducting coil of the MPMS was erased, and all measurements were performed under remanent magnetic field, \(H_{\rm rem}\lesssim 1\) Oe. In other words, no external dc field was intentionally applied to the sample. The magneto-optical imaging technique allows us to locally resolve the magnetic flux distribution within the sample on the micrometric scale [30]. By placing a Bi-doped yttrium iron garnet--a Faraday-active material [68]--directly on top of our superconducting film, MOI allows inspection of the deviation of the polarization angle of light in the presence of a magnetic field due to the Faraday effect. Thus, we are able to detect subtle nuances in the local field induced in the investigated material as a variation in the intensity captured by a CCD camera. We perform a pixel-by-pixel calibration procedure implemented on MATLAB to obtain quantitative information from magneto-optical images [69]. In other words, we extract the out-of-plane magnetic flux density \(B(x,y)\) from the intensity data \(I(x,y)\), where \((x,y)\) defines the position of a given pixel within the image. Possible drifts in sample position relative to the sensors due to thermal dilation of the cold finger in the experimental setup are corrected within a precision of two pixels using the StackReg plugin [70] with ImageJ [71]. 
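Since the pixel-by-pixel \(I\)-to-\(B\) conversion is central to the quantitative results below, a minimal Python sketch of the idea is given here. It is not the routine used in the paper (that analysis was implemented in MATLAB following Ref. [69]); the polynomial degree, array names, and calibration fields are illustrative assumptions. Frames recorded above \(T_{c}\) at known applied fields, where \(B=\mu_{0}H\), are used to fit an empirical polynomial per pixel, which then converts measured intensities into flux density maps.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def calibrate_pixelwise(I_cal, H_cal, degree=2):
    """Pixel-by-pixel calibration from frames recorded above Tc at known applied fields,
    where B = mu0 * H. Returns polynomial coefficients mapping intensity to flux density.
    I_cal: (num_frames, ny, nx) intensities; H_cal: applied fields in A/m."""
    num_frames, ny, nx = I_cal.shape
    B_cal = MU0 * np.asarray(H_cal, dtype=float)
    flat = I_cal.reshape(num_frames, ny * nx)
    coeffs = np.empty((degree + 1, ny * nx))
    for p in range(ny * nx):                      # independent fit for every pixel
        coeffs[:, p] = np.polyfit(flat[:, p], B_cal, degree)
    return coeffs.reshape(degree + 1, ny, nx)

def intensity_to_field(I, coeffs):
    """Convert a measured intensity image I(x, y) into a flux density map B(x, y)."""
    degree = coeffs.shape[0] - 1
    B = np.zeros_like(I, dtype=float)
    for k in range(degree + 1):
        B += coeffs[k] * I ** (degree - k)        # np.polyfit returns the highest power first
    return B
```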
As a consequence of its lower \(J_{c}\), a-MoSi also presents an intrinsic advantage for quantitative MO studies, as it inhibits unwanted magnetic domain wall switching in the garnet layer [72], which could otherwise compromise the \(I\)-to-\(B\) transformation. ## III Ac susceptibility: MPMS measurements Typical temperature-dependent ac susceptibility results for superconducting films are illustrated in Fig. 1 for the 100-nm-thick a-MoSi sample. The curves depicted are obtained using the MPMS and show both \(\chi^{\prime}_{\rm ac}\) and \(\chi^{\prime\prime}_{\rm ac}\) normalized by the Meissner state plateau \(\chi_{0}\) of the \(\chi^{\prime}_{\rm ac}\) measurement conducted with the lowest \(h\) and \(T\). Figure 1(a) highlights the effects of \(h\) on \(\chi_{\rm ac}(T)\). In all measurements, the sample is first subjected to zero-field cooling (ZFC) down to 2 K. Then, \(\chi_{\rm ac}(T)\) is measured as the temperature is increased using a probe field with \(f=1\) Hz and varying \(h\) values from 0.1 Oe to 3.5 Oe. As we demonstrate in Appendix A, the choice of \(f\) has almost no impact on the \(\chi_{\rm ac}(T)\) behavior in the frequency range explored in this work. For the smallest field amplitude (black points), we observe a near constant \(\chi_{\rm ac}^{\prime}\) close to \(-1\) at low temperatures. This is a signature of superconductors' perfect diamagnetism, showing that the sample initially shields its interior from magnetic flux very efficiently. Then, as the sample approaches its critical temperature (\(T_{c}\)), a sharp increase in \(\chi_{\rm ac}^{\prime}\) toward zero is observed, as the film is no longer shielded from flux penetration. Signatures of the superconducting-normal state transition are also found in the out-of-phase component, as a peak in \(\chi_{\rm ac}^{\prime\prime}\) accompanies this increase in \(\chi_{\rm ac}^{\prime}\). Thus, the dissipative motion of vortices entering the sample is consistently captured by \(\chi_{\rm ac}^{\prime\prime}\), which is greatly enhanced during flux penetration. Therefore, for the a-MoSi sample, we define \(T_{c}=7.30\pm 0.05\) K as the first experimental point for which both \(\chi_{\rm ac}^{\prime}\) and \(\chi_{\rm ac}^{\prime\prime}\) depart from zero in \(\chi_{\rm ac}(T)\) measurements. If \(h\) is now increased to 0.5 Oe (red points), a very similar behavior is observed in Fig. 1(a). However, flux exclusion becomes less complete as the temperature and field are increased, in accordance with critical state models. Therefore, although \(T_{c}\) is unchanged, the onset of the superconducting-normal state transition occurs for lower temperatures, as \(T\) is increased from 2 K. This trend continues as \(h\) is further increased to 1.0 Oe and 1.5 Oe, represented in Fig. 1(a) by green and blue points, respectively. For all measurements with \(h\leq 1.5\) Oe, the a-MoSi film is in the smooth penetration regime and all flux penetration occurs gradually and uniformly from the edges toward the center of the sample, as described by critical state models. For \(h=2.5\) Oe, however, a radically different behavior is observed: the pink points in Fig. 1(a) sharply differ from those observed in the smooth regime. For the lowest temperatures, an apparently noisy response is observed in both \(\chi_{\rm ac}(T)\) components while shielding becomes much less effective. The purple points reveal the same trend for measurements carried out with a probe field of 3.5 Oe.
As we will demonstrate in this paper using MOI, these characteristics are signs of the occurrence of magnetic flux avalanches in the film [73]. Such variations in \(|\chi_{\rm ac}^{\prime}|\) and \(\chi_{\rm ac}^{\prime\prime}\) are then explained by a reduction of the volume of the film free from flux penetration as avalanches advance throughout the sample. Eventually, as \(T\) is increased above 4 K, the noisy behavior in \(\chi_{\rm ac}(T)\) is no longer present for both the 2.5 Oe and 3.5 Oe curves. This occurs because the temperature is increased beyond that for which avalanches can be triggered (\(T_{\rm th}\)), keeping the sample in a thermomagnetically stable condition, as described by the thermomagnetic model [74]. As such, flux will now only penetrate the sample smoothly, although frozen imprints of previous avalanches may remain in the flux patterns observed in the film. Flux avalanches are of a stochastic nature. It is therefore not possible to accurately predict their shape or size, nor to precisely pinpoint the moment or the position at which an avalanche will be triggered [37; 44]. In spite of this fact, Fig. 1(b) reveals an interesting feature of \(\chi_{\rm ac}(T)\) measurements: the results are not only largely reproducible but also independent of the thermomagnetic history both in the smooth and in the unpredictable avalanche regime. To illustrate that, we conduct \(\chi_{\rm ac}(T)\) measurements for the a-MoSi film after ZFC to 2 K using a probe field with \(f=1\) Hz and \(h=0.1\) Oe. The red circles represent the results as \(T\) is gradually increased through \(T_{c}\) up to 9 K. Then, \(\chi_{\rm ac}(T)\) is recorded as the temperature is lowered from the normal state back to 2 K, as shown by the blue circles. A close inspection of both curves reveals essentially no difference in \(\chi_{\rm ac}(T)\) in the smooth regime, independently of the direction of the temperature variation. If now \(h\) is increased to 2.5 Oe and the experiment is repeated, the sample is in the avalanche regime for \(T<T_{\rm th}\sim 4.5\) K. In this temperature range, the red and blue squares in Fig. 1(b) are no longer indistinguishable, although they remain very close to each other. More precisely, the observed ups and downs in \(\chi_{\rm ac}(T)\) gauged as \(T\) is increased mirror those obtained as \(T\) is decreased. The red and blue triangles in Fig. 1(b) reveal the same behavior in the avalanche regime for a higher probe field amplitude \(h=3.8\) Oe.

Figure 1: Temperature-dependent ac susceptibility of a-MoSi film under \(H_{\rm rem}\) obtained using the MPMS. (a) Data acquired as the temperature is decreased from the normal state using a probe magnetic field of \(f=1\) Hz and amplitude varying from \(h=0.1\) Oe to \(h=3.5\) Oe. Purple arrow indicates the onset of flux avalanches, characterizing the paramagnetic reentrance region. (b) Data acquired both as the temperature is decreased from the normal state (\(T\downarrow\)) and increased from the Meissner state (\(T\uparrow\)) with \(f=1\) Hz and \(h=0.1\) Oe (smooth regime) and \(h=2.5\) Oe and 3.8 Oe (avalanche regime).

## IV Ac susceptibility: MOI measurements

We now turn to magneto-optical imaging to explain why there appears to be, to a large extent, a reversible response in the noisy ac susceptibility behavior caused by stochastic avalanche events. To do so, it is instructive to first recall the working principle of how \(\chi_{\rm ac}(T)\) is obtained in magnetometers such as the MPMS.
An applied zero-mean probe ac field with an amplitude \(h\) and frequency \(f\), such that \(h(t)=h\cos(2\pi ft)\), induces a time-dependent magnetic moment in the investigated sample. Hence, a detectable electric current is induced in the magnetometer's superconducting pickup coils, connected to the SQUID sensor, allowing the determination of the magnetic moment \(m_{\rm ac}\). After averaging measurements performed for successive probe field cycles, \(m_{\rm ac}\) is fitted to an equation of the form [75] \[m_{\rm ac}=C(t)+m^{\prime}\cos(2\pi ft)+m^{\prime\prime}\sin(2\pi ft), \tag{1}\] where \(C(t)\) represents any dc offset or drift in field or temperature, and \(m^{\prime}\) and \(m^{\prime\prime}\) are respectively related to \(\chi^{\prime}_{\rm ac}\) and \(\chi^{\prime\prime}_{\rm ac}\) as \[\chi^{\prime}_{\rm ac}=\frac{m^{\prime}}{h}\quad\text{ and }\quad\chi^{\prime \prime}_{\rm ac}=\frac{m^{\prime\prime}}{h}. \tag{2}\] Recalling that \(\chi_{\rm ac}=\chi^{\prime}_{\rm ac}+i\chi^{\prime\prime}_{\rm ac}=\partial M /\partial H\), if the total applied magnetic field is \(H=H_{\rm dc}+h\), then \(\chi_{\rm ac}=\partial M/\partial h\). Hence, the above measurement protocol can be used in combination with a dc applied magnetic field to gauge the sample susceptibility in different points of the \(M(H)\) curve. The process to emulate ac measurements using a magneto-optical imaging setup equipped with a dc magnetic field source was first introduced by Ref. [52]. Here, we refer to such measurements as acMOI. To summarize the process, the dc field is incremented in stair-like steps until it reaches a preset maximum amplitude \(h_{\rm dc}=h_{\rm dc}^{\rm max}\). After each field step, a MO image is recorded. Then, keeping the same step size, the applied field is reduced to \(-h_{\rm dc}^{\rm max}\) and, finally, increased to zero. This routine reproduces one ac field cycle and it is schematically presented in Fig. 2. Although the data acquisition rate of the acMOI technique is substantially slower than the MPMS ac magnetic field source, by successively repeating the above routine, we may take advantage of the frequency-independent nature of the first harmonic \(\chi_{\rm ac}\) to capture ac effects in the investigated sample. We are also capable of varying external parameters, such as the temperature or an additional dc field, allowing investigations of their effects on the sample. Following, we will explore this ability to qualitatively visualize how magnetic flux penetrates the superconducting a-MoSi film during typical temperature-dependent ac susceptibility measurements, both in the smooth and in the avalanche regimes. ### Smooth penetration regime Figure 3 exemplifies results obtained for the a-MoSi film following the acMOI procedure. In this case, \(h_{\rm dc}^{\rm max}\) = 1.0 Oe, corresponding to the situation in which the film remains in the smooth regime for all temperatures below \(T_{c}\), as revealed by Fig. 1(a). The first row of Fig. 3(a) shows MO images as directly obtained during the first field cycle after the sample was zero-field-cooled to the base temperature of 2.9 K. A schematic representation of the point in the field cycle at which each of the four images is captured is presented at the lower left corner of Fig. 3. The first image reveals a shallow bright region surrounding the darker inner region of the square film at 1.0 Oe. 
As discussed previously, such a bright region represents the small flux front able to penetrate the superconductor at lower temperatures and ac field amplitudes, due to its elevated shielding capacities. As the field cycle continues, the second image, taken at 0 Oe, reveals that some positive flux remains trapped in the sample, but the edges of the film no longer appear in bright contrast as the flux polarity is being reversed in that region. In the third image, taken at \(-1.0\) Oe, the flux inside the superconductor has completely reversed its sign and appears now in dark contrast, signaling its negative intensity. Finally, the fourth image, at \(0\) Oe, reveals some trapped negative flux in the interior of the sample, but the edges again indicate the reversal of the applied field.

Figure 2: Ac-emulated magneto-optical imaging (acMOI). A dc magnetic field is progressively applied in stair-like discrete steps. The dc field intensity is varied from zero to \(h_{\rm dc}^{\rm max}\), then from \(h_{\rm dc}^{\rm max}\) to \(-h_{\rm dc}^{\rm max}\), and finally from \(-h_{\rm dc}^{\rm max}\) to zero. Successively repeating this field cycle emulates an applied low-frequency ac magnetic field. After each field step, a MO image of the sample is recorded, as exemplified in the detail of the first field cycle. Additional parameters may be controlled, as schematically represented by an increase followed by a decrease in the temperature.

The ac-emulated field cycle is repeated a total of four times before the temperature is increased and set to \(3.5\) K, \(4.0\) K, \(4.5\) K, \(5.0\) K, \(5.5\) K, \(6.0\) K, \(6.5\) K, and \(7.0\) K, every time repeating the field cycle four times and collecting a MO image after each field step of \(0.1\) Oe. We will refer to this data as the \(T\uparrow\) experiment. The first row of images of Fig. 3(b) shows results obtained at \(6.5\) K as the temperature is increased after ZFC. They are analogous to those obtained at \(2.9\) K; however, the flux front penetrates deeper into the film due to its reduced shielding capability near \(T_{c}\). Then, the temperature is raised above \(T_{c}\) in the absence of an applied magnetic field, erasing the magnetic history of the sample. After that, \(T\) is progressively reduced back to the base temperature while subjecting the sample to four ac-emulating field cycles at the same set temperatures listed before. This is the \(T\downarrow\) experiment. The second row in Fig. 3(a) shows the MO images recorded at \(2.9\) K during this experiment, i.e., after \(T\) was reduced from above \(T_{c}\). Although the temperature is the same, the flux landscapes inside the superconductor in the first and second rows of Fig. 3(a) are completely different. For the images taken during the \(T\downarrow\) experiment, the complete magnetic history of the sample due to the successive field cycles is retained by the film. This happens because higher temperatures enable further flux penetration, therefore the flux trapped in the innermost regions of the sample is not superimposed by new field cycles at lower temperatures. Accordingly, the MO images obtained at \(6.5\) K as \(T\) is reduced, shown in the second row of Fig. 3(b), differ from those presented in the first row. In this case, the sample was previously subjected to four ac-emulating field cycles at \(7.0\) K, resulting in the observed trapped flux in the interior of the film.
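As a minimal illustration of the field protocol described above (and sketched in Fig. 2), the following Python snippet builds the stair-like sequence of dc set points for one emulated ac cycle. It is not part of the original work; the 1.0 Oe amplitude and 0.1 Oe step match the smooth-regime measurements quoted in the text, while the function and variable names are illustrative.

```python
import numpy as np

def acmoi_cycle(h_max, step):
    """One ac-emulating dc field cycle: 0 -> +h_max -> -h_max -> 0, in stair-like steps.
    A MO image would be recorded after each returned set point."""
    up = np.arange(step, h_max + step / 2, step)               # 0 -> +h_max
    down = np.arange(h_max - step, -h_max - step / 2, -step)   # +h_max -> -h_max
    back = np.arange(-h_max + step, step / 2, step)            # -h_max -> 0
    return np.concatenate(([0.0], up, down, back))

# Example: amplitude 1.0 Oe with 0.1 Oe steps, repeated four times per set temperature,
# as in the smooth-regime measurements described in the text.
cycle = acmoi_cycle(1.0, 0.1)
schedule = np.tile(cycle, 4)
print(len(cycle), cycle[:5], cycle[-3:])
```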
Figure 3: Comparison between direct MO images and differential MO images of a-MoSi film taken at (a) \(2.9\) K and (b) \(6.5\) K as the temperature is increased (\(T\uparrow\)) after ZFC to the base temperature and as it is decreased (\(T\downarrow\)) from above \(T_{c}\). Data is acquired using an ac-emulating applied field with an amplitude of \(1.0\) Oe, thus in the smooth regime. The contrast in each image was individually adjusted for optimal visualization of the flux penetration. The lower left corner detail represents the point of the field cycle at which each image was captured.

After these observations, it may be natural to ask why such different flux distributions lead to the indistinguishable \(\chi_{\mathrm{ac}}(T)\) observed in the smooth regime in Fig. 1(b) for increasing and decreasing temperatures. To understand this, it is necessary to remember that ac susceptibility is a measurement of the flux variation in the material as the applied field is changed, rather than its total magnetic moment. To gauge flux variation due to the variation of an applied field using MOI, we may turn to what is called differential MOI [76]. This approach
In the first row, the first image reveals that a positive flux avalanche was triggered in the film, advancing further into the interior of the sample than the shallow critical-state-like bright flux front. Then, as revealed by the third image, a new, negative flux avalanche, or anti-avalanche, was triggered, reusing the flux channel created by the first positive avalanche [52, 78]. The differential MOI analysis on the third row of Fig. 4(a) allows us to conclude that the avalanches appearing in the first row were not triggered on the depicted images, but as \(h_{\rm dc}\) was ramped from zero to 2.4 Oe and, then, from 2.4 Oe to \(-2.4\) Oe. This is the case because the differential flux distributions do not show any signs of abrupt flux intake by the sample. On the other hand, analysis of the second and fourth rows of Fig. 4(a) reveals that a positive flux avalanche was triggered in the sample at 2.4 Oe. The differential image allows us to clearly distinguish this specific penetration event from the complex flux landscape presented by the sample. In the case depicted in Fig. 4(b), \(T=5.5\) K \(>T_{\rm th}\). Even though previously triggered avalanches are visible in the first row of images, all flux penetration at this temperature occurs smoothly from the edges of the film. Hence, the very different flux landscapes in the first and second rows lead to similar differential flux patterns, shown in the third and fourth rows of Fig. 4(b). These results are compatible with those in Fig. 1, as the sample is in the smooth regime above \(T_{\rm th}\). However, if we now compare the differential images below \(T_{\rm th}\) at 2.9 K, we see that, contrary to the smooth penetration regime, there is not a match between each corresponding image due to the nucleation of flux avalanches. In the Supplemental Material, a video highlights this fact, showing distinct differential flux distributions each time an avalanche occurs in the film either as \(T\) is increased or decreased and at different temperatures. Such a difference in behavior in the avalanche regime, coupled with the unpredictable nature of these events, seems to indicate that there should not be a match between independent \(\chi_{\rm ac}(T)\) measurements. Nevertheless, we do observe in Fig. 1(b) very similar behaviors as \(T\) varies up or down. To understand why this happens, we will rely on the potential of MOI as a quantitative analysis tool, as its spatial resolution allows the study of individual avalanches in a manner that is not possible with standard magnetometers like the MPMS. The first image of the forth row of Fig. 4(a) indicates how we may extract information on individual avalanches. By differentiating an image in which an avalanche event occurs, we are able to highlight it from the rest of the sample's flux landscape. To harness this possibility and study all avalanches triggered in the a-MoSi film during our measurements, we implemented an algorithm in MATLAB, schematically represented in the top row of Fig. 5. As multiple avalanches may occur simultaneously in different parts of the sample, we first select the region of the film which will be analyzed. Then, we differentiate the MO images, resulting in a \(B^{\rm diff}(x,y)\) distribution around zero outside of the smooth flux front and the avalanches. However, avalanches result in much more intense flux variation than critical-state-like penetration. This allows the algorithm to identify every image in which an avalanche was triggered. 
Moreover, it is possible to clearly separate the avalanches from the rest of the image by applying an intensity threshold mask over the selected avalanche region. This mask can be used either on the directly obtained MO image or on the differential flux distribution, allowing the investigation of different quantities. Using this algorithm, we analysed all 3474 MO images obtained using the ac-emulated field cycles with \(h_{\rm dc}^{\rm max}=2.4\) Oe. In those, a total of 105 flux avalanches were triggered. Table 1 specifies the distribution of these avalanches between different temperatures. It also highlights if the avalanches were triggered while increasing or decreasing \(T\) as well as if they are comprised of positive or negative flux. These statistics reveal that many more avalanches occur when the temperature is being reduced from above \(T_{c}\). This difference is related to the established flux landscape within the sample, clearly visible on the second image row in Fig. 4. As the sample is fully penetrated by vortices, the probability of triggering thermomagnetic instabilities increases [79, 80]. Once all avalanches were identified, we may calculate the magnetic flux difference in the sample due to each avalanche, \(\Delta\Phi_{\rm aval}\), by numerically integrating \(B^{\rm diff}(x,y)\). Figure 5 shows \(\Delta\Phi_{\rm aval}\) as \(T\) is increased and decreased as a function of the avalanche area, \(A_{\rm aval}\). Noticeably, the data reveals a temperature-independent linear relationship between \(\Delta\Phi_{\rm aval}\) and \(A_{\rm aval}\), as highlighted by the sloped guides to the eye. This may be understood considering the microscopic nature of the mixed state in type-II superconductors, in which quantized flux vortices permeate the sample. The vortex core size is proportional to the coherence length of the material, \(\xi\), whereas the intervortex spacing is related to the penetration depth, \(\lambda\)[24]. In turn, these quantities evolve in temperature as \((1-T/T_{c})^{-1/2}\), which implies that they only vary significantly for temperatures close to \(T_{c}\). Therefore, the density of vortices is nearly constant in the temperature range for which the film is in the avalanche regime, leading to the behavior observed in Fig. 5. The slope of the linear relationship is roughly equal to 0.75 mT, indicating fields slightly above the ones used to trigger the avalanches. This difference is explained by the higher flux concentration along the edges of the thin film due to demagnetization effects [32]. Moreover, the solid horizontal lines in Fig. 5 represent the net \(\Delta\Phi_{\rm aval}\) calculated by summing \(\Delta\Phi_{\rm aval}\) for all avalanches that occur at a given set temperature when \(T\) is increased (red lines) or decreased (blue lines). Al Figure 4: Comparison between direct MO images and differential MO images of a-MoSi film taken at (a) 2.9 K and (b) 5.5 K as the temperature is increased (\(T\uparrow\)) after ZFC to the base temperature and as it is decreased (\(T\downarrow\)) from above \(T_{c}\). Data is acquired using an ac-emulating applied field with an amplitude of 2.4 Oe, thus in the avalanche regime. The contrast in each image was individually adjusted for optimal visualization of the flux penetration. The lower left corner detail represents the point of the field cycle at which each image was captured. Black arrows indicate regions of further flux penetration that will be discussed in Section VI. 
\begin{table} \begin{tabular}{l l l l l} & 2.9 K & 3.5 K & 3.8 K & 4.5 K \\ \(T\uparrow\) — Positive flux & 8 & 3 & 3 & 0 \\ \(T\uparrow\) — Negative flux & 8 & 6 & 1 & 0 \\ \(T\downarrow\) — Positive flux & 19 & 9 & 10 & 1 \\ \(T\downarrow\) — Negative flux & 21 & 10 & 6 & 0 \\ \end{tabular} \end{table} Table 1: Number of flux avalanches observed in the MO images obtained during the \(T\uparrow\) and \(T\downarrow\) experiments for field cycles with \(h_{\rm dc}^{\rm max}=2.4\) Oe at different temperatures. though many more avalanches happen during the \(T\downarrow\) experiment, the blue lines reveal that the net flux variation they cause in the sample is comparable to that caused by a single avalanche. The same is true during the temperature increase, as shown by the red lines. This fact is associated with the effects of an ac field cycle on the superconducting film. As can be observed in Fig. 4, there is a tendency for new avalanches to reuse the flux channel created by previously nucleated avalanches of opposite polarity. Note that bright and dark contrast avalanches are superimposed in the MO images. This same trend has been previously reported both experimentally [52] and numerically [78]. Such behavior is explained by the attractive nature of the interaction between vortices and antivortices, as well as by the fact that the existing avalanche creates an easy channel of locally reduced critical current density inside the film, facilitating the propagation of magnetic flux. A Supplemental Material video demonstrates that most new avalanches reuse previously existing flux channels. Therefore, these dynamics tend to balance out positive and negative flux variations arising from abrupt penetration events. As the ac susceptibility is measured by averaging the flux variation captured throughout several ac field cycles, the avalanche contributions become very similar in both directions of temperature variation, resulting in the remarkably similar \(\chi_{\mathrm{ac}}(T)\) measurements as \(T\) is increased and decreased, as shown in Fig. 1(b). ## V Quantitative ac susceptibility analysis from MOI In Section IV, we qualitatively discussed the link between differential MO images and ac susceptibility measurements conducted in the MPMS. In this Section, we demonstrate how MOI can be further utilized as a tool for quantitatively studying ac field-induced effects on superconducting films. The in-phase and out-of-phase components of \(\chi_{\mathrm{ac}}\) are obtained by acMOI as a function of \(T\), which can then be compared to MPMS measurements. To achieve that, let us first be reminded that \(\chi^{\prime}_{\mathrm{ac}}\) is associated with the superconductor inductive response to shield magnetic flux from its interior. Therefore, \(\chi^{\prime}_{\mathrm{ac}}\) captures the evolution of the sample magnetization with an applied magnetic field. On the other hand, \(\chi^{\prime\prime}_{\mathrm{ac}}\) is associated with a resistive response arising from energy losses, caused by the dissipative flux motion within the superconductor. As discussed in Ref. [81], this energy can be gauged by evaluating the area of the \(M(h)\) loop, \(A^{\mathrm{loop}}_{\mathrm{ac}}\), defined by the application of one ac field cycle. 
This way, we may obtain the \(\chi_{\mathrm{ac}}(T)\) components from ac-emulating MOI cycles as [52] \[\chi^{\prime}_{\mathrm{ac}}=\left\langle\frac{\partial\langle M_{\mathrm{MOI}}\rangle}{\partial h_{\mathrm{dc}}}\right\rangle\quad\text{and}\quad\chi^{\prime\prime}_{\mathrm{ac}}=\frac{A^{\mathrm{loop}}_{\mathrm{ac}}}{\pi(h^{\mathrm{max}}_{\mathrm{dc}})^{2}}, \tag{3}\] where the mean magnetization \(\langle M_{\mathrm{MOI}}\rangle\) is obtained from the out-of-plane flux density distribution within the sample on a MO image as [52] \[\langle M_{\mathrm{MOI}}\rangle=\frac{1}{N_{\mathrm{px}}}\sum_{n=1}^{N_{\mathrm{px}}}\left\{B_{n}(x,y)/\mu_{0}-h_{\mathrm{dc}}\right\}, \tag{4}\] where \(N_{\mathrm{px}}\) is the number of pixels which correspond to the sample within the MO image. These quantities are calculated for each ac-emulating field cycle at a given temperature in SI units, as exemplified in Fig. 6(a-b), which shows typical \(\langle M_{\mathrm{MOI}}\rangle(h_{\mathrm{dc}})\) loops. The results are then averaged over the four cycles to obtain the \(\chi_{\mathrm{ac}}(T)\) evolution for the sample.

Figure 5: Top row: demonstration of the process used to obtain quantitative data on single avalanches. See the main text for a detailed explanation. Main panel: the total magnetic flux of each individual avalanche triggered in the a-MoSi film as the temperature is increased (\(T\uparrow\)) after ZFC to the base temperature and decreased (\(T\downarrow\)) from above \(T_{c}\). Results are plotted against the area of the respective avalanches. Solid horizontal lines represent the net \(\Delta\Phi_{\mathrm{aval}}\) obtained by summing the flux of all avalanches triggered at the same temperature as \(T\) is increased (red) or as \(T\) is decreased (blue). Dashed lines are guides to the eye.

Figure 6(c-d) displays MPMS measurements of \(\chi_{\mathrm{ac}}(T)\) for the a-MoSi film using probe fields with \(f=0.05\) Hz and \(h=1.0\) Oe and \(2.4\) Oe, hence in the smooth and avalanche regimes, respectively. Although the \(\chi_{\mathrm{ac}}\) analysis is frequency-independent, this \(f\) value is chosen to match an "effective" frequency estimated considering a light exposure time of 200 ms during the acMOI measurements, which, in turn, is used to optimize the image contrast. The MPMS data in Fig. 6 is averaged from eight successive field cycles [see Eqs. (1) and (2)]. Although this differs from the four cycles used in the acMOI measurements, Appendix B demonstrates that MPMS results are equivalent for measurements conducted with these numbers of field cycles. Thus, in Fig. 6(c), the MPMS measurements are compared to \(\chi^{\prime}_{\rm ac}\) and \(\chi^{\prime\prime}_{\rm ac}\) quantitatively obtained from the acMOI measurements using Eq. (3), both as the temperature is increased after ZFC and as \(T\) is decreased from above \(T_{c}\). There are two main observations in Fig. 6(c). The first is that, despite limitations in the measurement resolution in comparison to SQUID magnetometers and the presence of defects on the MO indicator which could compromise the result, acMOI captures with high fidelity the behavior of both components of \(\chi_{\rm ac}(T)\), especially at the lower temperatures and ac field amplitudes. When \(T\) approaches \(T_{c}\), however, the lower contrast of the MO images induces larger errors; therefore, the acMOI data points at 7 K deviate from those obtained using the MPMS.
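For concreteness, the sketch below indicates how Eqs. (3) and (4) could be evaluated numerically for a single sampled \(\langle M_{\mathrm{MOI}}\rangle(h_{\mathrm{dc}})\) loop. It assumes the loop is available as matching arrays of field and mean magnetization in consistent units, approximates \(\chi^{\prime}_{\mathrm{ac}}\) by a least-squares slope of the loop, and evaluates the loop area entering \(\chi^{\prime\prime}_{\mathrm{ac}}\) with the shoelace formula; the synthetic loop used for the demonstration is purely illustrative.

```python
import numpy as np

def chi_ac_from_loop(h, M, h_max):
    """Estimate chi'_ac and chi''_ac from one sampled ac-emulating field cycle.

    h, M  : 1D arrays sampling one closed <M>(h) loop (consistent units, e.g. A/m)
    h_max : amplitude of the emulated ac field (same units as h)
    """
    # chi' ~ average slope d<M>/dh over the loop, here taken as a least-squares fit
    chi_p = np.polyfit(h, M, 1)[0]
    # chi'' ~ enclosed loop area / (pi * h_max^2); shoelace formula on the closed loop
    area = 0.5 * np.abs(np.dot(h, np.roll(M, -1)) - np.dot(M, np.roll(h, -1)))
    chi_pp = area / (np.pi * h_max ** 2)
    return chi_p, chi_pp

# Illustrative usage with a synthetic, slightly lossy loop (~2.4 Oe expressed in A/m)
phase = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
h = 191.0 * np.cos(phase)
M = -0.8 * 191.0 * np.cos(phase - 0.1)   # mostly shielding response with a small lag
print(chi_ac_from_loop(h, M, h_max=191.0))
```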
The second observation is that acMOI captures exceptionally well the independence of \(\chi_{\rm ac}(T)\) from the thermomagnetic history in the smooth regime, as \(\chi^{\prime}_{\rm ac}\) and \(\chi^{\prime\prime}_{\rm ac}\) are mostly superimposed in Fig. 6(c). When the film is in the avalanche regime, acMOI also captures the paramagnetic reentrance observed in the MPMS measurements. However, it appears that the technique is more susceptible to differences in the flux landscape in the sample, as measurements conducted as the temperature was decreased resulted in slightly lower values of \(|\chi^{\prime}_{\rm ac}|\). Nonetheless, within the natural limitations of the technique, it is possible to accurately investigate the ac susceptibility of a superconducting thin film using ac-emulating MOI.

Figure 6: Example of typical \(M_{\rm MOI}(h_{\rm dc})\) loops obtained for measurements as the temperature is increased (\(T\uparrow\)) at (a) 4.5 K and (b) 6.5 K for \(h_{\rm dc}^{\rm max}=2.4\) Oe \(\approx 191\) A/m. Comparison between the temperature-dependent ac susceptibility of the a-MoSi film under \(H_{\rm rem}\) obtained using the MPMS and (c) a quantitative acMOI analysis and (d) a semi-quantitative acMOI analysis. In both panels, the MPMS measurements are carried out at \(f=0.05\) Hz for better correspondence with the slower MO measurements. MO images are taken as the temperature is increased (\(T\uparrow\)) and decreased (\(T\downarrow\)), with ac-emulated field amplitudes of 1.0 Oe and 2.4 Oe, thus in the smooth and avalanche regimes, respectively.

A semi-quantitative approach can also be used to obtain \(\chi_{\rm ac}(T)\) from acMOI. As highlighted by Eq. (3), the sample magnetization is the crucial ingredient in the calculation of \(\chi^{\prime}_{\rm ac}\) and \(\chi^{\prime\prime}_{\rm ac}\). \(M\), however, is a global parameter, describing the average behavior of the sample. In Fig. 6(c), we obtained this quantity from the local flux density distribution in the film. If we remember that raw MOI data is an intensity count, we may define a mean intensity for each MO image, \(\langle I(x,y)\rangle\). Then, using measurements performed above \(T_{c}\), such that the sample magnetization does not interfere with the flux distribution, we may find a relationship between an applied magnetic field \(H\) and \(\langle I(x,y)\rangle\). Considering that, above \(T_{c}\), \(M=0\) and \(H=B/\mu_{0}\), the mean flux density distribution \(\langle B\rangle\) can be found by fitting an empirical polynomial relationship between \(\langle B\rangle\) and \(\langle I(x,y)\rangle\) [82]. The influence of defects on the MO indicator can be minimized by subtracting the zero-field background from all images. Once the images are calibrated, the mean sample magnetization in each MO image within an ac-emulating field cycle can be calculated as \[\langle M\rangle=\langle B\rangle/\mu_{0}-h_{\rm dc}. \tag{5}\] Using \(\langle M\rangle\) and Eq. (3), we obtained the \(\chi_{\rm ac}(T)\) results shown in Fig. 6(d). The results are completely analogous and very similar to those depicted in Fig. 6(c), demonstrating the robustness of MOI as a tool to gauge \(\chi_{\rm ac}(T)\).
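A possible implementation of this semi-quantitative calibration is sketched below: above-\(T_c\) images with known applied fields are used to fit an empirical polynomial between the mean intensity and the mean flux density, after which Eq. (5) yields the mean magnetization of any image in the cycle. The function names, the polynomial order, the background handling, and the synthetic data are assumptions introduced only for illustration.

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def calibrate_intensity(cal_images, cal_fields_Am, background, order=2):
    """Fit <B> as a polynomial of the mean image intensity using above-Tc data."""
    mean_I = [np.mean(img - background) for img in cal_images]
    mean_B = [MU_0 * H for H in cal_fields_Am]      # above Tc: M = 0, so B = mu0 * H
    return np.polyfit(mean_I, mean_B, order)        # coefficients of <B>(<I>)

def mean_magnetization(image, background, coeffs, h_dc_Am):
    """Eq. (5): <M> = <B>/mu0 - h_dc, using the calibrated mean intensity."""
    mean_B = np.polyval(coeffs, np.mean(image - background))
    return mean_B / MU_0 - h_dc_Am

# Illustrative usage with synthetic calibration data
rng = np.random.default_rng(0)
background = rng.normal(100.0, 1.0, (64, 64))
fields = [0.0, 50.0, 100.0, 150.0, 200.0]           # applied fields above Tc, in A/m
cal = [background + 0.05 * MU_0 * H * 1e6 + rng.normal(0, 0.1, (64, 64)) for H in fields]
coeffs = calibrate_intensity(cal, fields, background)
print(mean_magnetization(cal[2], background, coeffs, h_dc_Am=100.0))  # ~0 above Tc
```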
## VI Erasing Flux Avalanches

Let us now discuss a side benefit of using quantitative MO data to gain insight into the interaction between an incoming magnetic flux front and the region where an avalanche previously took place. In Fig. 4(b), arrows indicate regions in differential images taken at 2.4 Oe and \(-2.4\) Oe in which positive and negative flux, respectively, penetrate further into the sample than elsewhere. Figure 7 sheds light on these dynamics by highlighting results obtained for the a-MoSi sample at 5 K as \(T\) is increased from the base temperature after ZFC. Panels (a) and (b) show the same MO images side-by-side, only with different color scales. This is done to evidence different aspects of the flux penetration dynamics. The first image of Fig. 7, taken at 0 Oe, is captured before the ac-emulating magnetic field is applied to the film at 5 K. Therefore, the depicted flux landscape is a result of the 16 ac-emulating field cycles applied to the film at the four previous temperature steps. Noticeably, a number of flux avalanches took place, resulting in the characteristic dendritic flux-filled regions observed in the sample. On the second row, \(h_{\mathrm{dc}}\) is increased to 2.4 Oe for the first time at 5 K. As previously discussed, this will result in the penetration of a positive, smooth flux front from the edges toward the center of the film. Figure 7(a) illustrates an interesting characteristic of the dynamics of how this flux front interacts with avalanches previously triggered in the film. First, notice the presence of a large negative flux avalanche on the right edge of the film framed by the dashed white rectangle. Then, we may observe that the positive flux front penetrates deeper into the sample where it interacts with the negative avalanche than elsewhere--compare, for instance, the penetration from the right edge with that from the top edge of the sample. Additionally, a medium-sized positive flux avalanche had previously occurred on the bottom-left edge of the film. Near that avalanche, the positive flux front has a shallower penetration than on the right side of the bottom edge. The explanation for such a difference in the flux penetration lies in the nature of the attractive interaction between vortices and antivortices [83], leading to the deeper penetration of the flux front coming from the right edge on the second image of Fig. 7(a). However, if the incoming flux has the same polarity as the pinned flux, it will be repelled, causing the shallower penetration of the positive flux front over the positive avalanche on the bottom edge of the sample. Moreover, vortices and antivortices will be annihilated if they come in close contact, leaving behind a region of zero net magnetic flux on the superconductor. Figure 7(b) allows us to visualize such vortex-antivortex annihilation regions. Using again the second image of the depicted sequence as a reference, we may look at the right edge of the sample, where the positive (dark-blue) flux front penetrates over the negative (yellow) avalanche. Then, we notice the presence of a zero flux (light-blue) region between the flux front and the avalanche. As vortices penetrate the sample from the right edge, they encounter previously pinned antivortices, leading to mutual annihilation. The resulting flux-free region is then filled by new incoming vortices which, in turn, will be annihilated with further pinned antivortices, in a process that enables the positive flux penetration as the field is increased up to \(h_{\mathrm{dc}}^{\mathrm{max}}\). In the next step of the ac-emulating field cycle, the field is reduced to \(-h_{\mathrm{dc}}^{\mathrm{max}}\). Then, negative (yellow) flux will penetrate the sample from the edges.
As observed in the third row of Fig. 7, negative flux penetrates less from the right edge of the sample than the positive flux front did. Moreover, we observe that the negative flux penetrates further over the positive flux avalanche that previously occurred at the bottom edge of the sample. Thus, the negative flux penetration dynamics follow the same behavior observed when a positive flux front penetrates the film. Accordingly, the third image of Fig. 7(b) reveals a vortex-antivortex annihilation zone between the incoming negative flux front and the deeper positive front. Then, inside the dashed red rectangle, we observe, beginning from the edge of the sample: a negative flux region, a first annihilation zone, a positive flux region, a second annihilation zone, and, finally, the negative deeply pinned flux where the avalanche propagated through the film. In the fourth image of the depicted sequence, the applied field is once again increased to \(h_{\rm dc}^{\rm max}\), leading to positive flux penetration. Now, along the bottom edge, the incoming positive flux penetrates less than the established negative flux over the positive avalanche. This creates the region highlighted by the dashed rectangle on the fourth image of Fig. 7(b), where it is possible to observe a positive flux region, an annihilation zone, a negative flux region, another annihilation zone, and the positive flux pinned after the avalanche penetrated deep into the sample. The Supplemental Material presents a video highlighting the interaction of an incoming flux front with the pre-established avalanches in Fig. 7 at different moments of the ac-emulating field cycle.

Figure 7: MOI of a-MoSi film at 5 K as the temperature is increased after ZFC. The images were captured during ac-emulating field cycles with \(h_{\mathrm{dc}}^{\mathrm{max}}=2.4\) Oe and demonstrate how an incoming flux front interacts with previously established avalanches. Panels (a) and (b) show the same MO images with different color scales to highlight different features of the flux penetration dynamics. Dashed rectangles highlight regions in which it is possible to observe the vortex-antivortex annihilation zone.

The flux penetration dynamics in the smooth penetration regime revealed by Fig. 7 hint at a different aspect of ac susceptibility measurements. To wit, Fig. 7(a) shows two panels with \(h_{\rm dc}\) = 2.4 Oe where no clear differences are observed in the penetrated flux landscape. Figure 8 further explores this aspect of the results both as the temperature is increased after ZFC [Fig. 8(a), at 6 K] and as it is decreased from above \(T_{c}\) [Fig. 8(b), at 5 K]. The first row of both panels shows MO images obtained before the ac-emulating magnetic field is applied at the indicated temperature, followed by the flux landscape captured at the end of each field cycle, when the a-MoSi is under \(h_{\rm dc}\) = 0 Oe. The second row shows differential MO images obtained by subtracting the flux landscape after field cycle \(N-1\) from that after cycle \(N\). As established, flux penetration differs when the field is applied at different temperatures. Thus, a different flux pattern is revealed after the first cycle when compared to the previously pinned landscape, as evidenced by the first differential image in both panels. However, as the applied field reaches \(h_{\rm dc}^{\rm max}\) (or \(-h_{\rm dc}^{\rm max}\)), the penetrated positive (or negative) flux front reaches its maximum depth into the sample for those specific measurement conditions.
Therefore, there is no noticeable difference between the flux landscapes observed at equivalent \(h_{\rm dc}\) in the subsequent field cycles after the field reaches its maximum value. This is evidenced by the last three differential images in both panels. In the Supplemental Material, an accompanying video shows that these dynamics are observed in all images captured within the four field cycles for the measurements presented in Fig. 8(a). Therefore, in the smooth regime, the important dynamic aspects of flux penetration into superconducting samples are restricted to the first field cycle. This naturally explains the observed independence of \(\chi_{\rm ac}\) on the number of averaged field cycles in MPMS measurements, as reported in Appendix B.

Figure 8: MOI of a-MoSi film at (a) 6 K as \(T\) is increased after ZFC and (b) 5 K as \(T\) is decreased from above \(T_{c}\). The images were captured after ac-emulating field cycles with \(h_{\rm dc}^{\rm max}\) = 2.4 Oe were completed, i.e., under \(h_{\rm dc}\) = 0 Oe. The first row shows direct measurements whereas the second row shows differential results. The differential images were obtained by subtracting, from the image captured after a field cycle, the image obtained after the previous field cycle, i.e., the one in the previous column of the first row.

## VII Conclusions

We have investigated the ac magnetic susceptibility of a superconducting thin film with lateral dimensions in the millimeter range. Standard global ac magnetometry measurements of the frequency-independent first harmonic \(\chi_{\rm ac}(T)\) reveal that the sample exhibits a paramagnetic reentrance related to the abrupt magnetic flux intake experienced during a flux avalanche event. Despite the stochastic nature of these avalanches, their effect on \(\chi_{\rm ac}(T)\) is nearly insensitive to the sample thermomagnetic history. We employ quantitative ac-emulating magneto-optical imaging to uncover the reasons behind this fact. In the smooth penetration regime, the indistinguishability of \(\chi_{\rm ac}(T)\) measured as the temperature is increased from 2 K or decreased from above \(T_{c}\) is explained using differential MO images highlighting that the flux variation within the sample during an ac cycle is independent of the previously established flux landscape. The same is not true in the presence of flux avalanches. Nevertheless, we demonstrate that new avalanches preferentially nucleate along previously established and frozen avalanche regions of opposite polarity. By quantifying the flux variation due to each single avalanche, we find that this process leads to similar contributions as \(T\) is increased or decreased. We thus correlate these findings to the similar \(\chi_{\rm ac}(T)\) behavior in the avalanche regime, independently of the thermomagnetic history of the sample. Moreover, we use acMOI to quantitatively gauge \(\chi_{\rm ac}(T)\) in superconductors, obtaining excellent agreement with standard global measurements, particularly at low temperatures and probe field amplitudes. Although the results have been obtained for an a-MoSi film, they are fully general and, in principle, applicable to any kind of type-II superconductor, even those with high critical temperatures. We also take advantage of the technique to locally resolve regions of vortex-antivortex annihilation, explaining how an incoming flux front interacts with previously nucleated avalanches.
This interplay also allows us to visualize that, after the ac field reaches its maximum amplitude in both field polarities, no new features are observed for subsequent field cycles, explaining the observed independence of \(\chi_{\rm ac}\) on the number of cycles averaged to obtain the results. Therefore, by analyzing the history-independent \(\chi_{\rm ac}(T)\) of an a-MoSi sample, we demonstrate that acMOI is an effective technique to quantitatively study frequency-independent ac magnetic field effects in superconducting materials. This was recently employed to explain the impact of flux dynamics and, in particular, avalanches, on the resonance frequency of large-area superconducting coplanar waveguide resonators [84].

###### Acknowledgements.

This work was partially supported by Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001, the Sao Paulo Research Foundation (FAPESP, Grant No. 2021/08781-8), the National Council for Scientific and Technological Development (CNPq, Grants No. 431974/2018-7 and 316602/2021-3) and by the UK EPSRC through Grant EP/I036303/1. D.A.D.C. and J.C.C.F. contributed equally to this work.

## Appendix A Ac susceptibility dependence on the drive frequency

Figure 9 shows \(\chi_{\rm ac}(T)\) measurements performed in the MPMS varying the magnetic field drive frequency between 0.05 Hz and 1000 Hz. The results are normalized by \(\chi_{0}\) obtained from the \(f\) = 1000 Hz curve. In the investigated frequency range, \(\chi_{\rm ac}(T)\) shows no significant variations for different values of \(f\).

Figure 9: Temperature-dependent ac susceptibility under \(H_{\rm rem}\) of a-MoSi film obtained using the MPMS. The probe field amplitude is kept constant at \(h\) = 1 Oe while \(f\) is varied between 0.05 Hz and 1000 Hz. The inset shows a magnified view of the graph region highlighted by the dashed rectangle in the main panel.

## Appendix B Ac susceptibility dependence on the number of field cycles

Figure 10 shows three different measurements of the ac susceptibility of the a-MoSi sample as a function of \(h\). The data is obtained varying the number of field cycles used by the MPMS to average the magnetic moment [see Eqs. (1) and (2)]. The explored \(h\) range covers the full range accessible with the MPMS. If the sample is in the smooth penetration regime, i.e., \(h<2.0\) Oe for \(T\) = 2 K, Fig. 10 quantitatively shows that \(\chi_{\rm ac}\) is independent of the number of field cycles. In the avalanche regime, small variations are observed due to the stochastic nature of the abrupt flux penetration events.

Figure 10: Ac susceptibility of a-MoSi film under \(H_{\rm rem}\) as a function of \(h\) obtained using the MPMS. The different measurement runs reflect results obtained by averaging different numbers of field cycles. All measurements were carried out at \(f\) = 1000 Hz and \(T\) = 2 K.
2309.08704
Co-Optimization of Damage Assessment and Restoration: A Resilience-Driven Dynamic Crew Allocation for Power Distribution Systems
This study introduces a mixed-integer linear programming (MILP) model, effectively co-optimizing patrolling, damage assessment, fault isolation, repair, and load re-energization processes. The model is designed to solve a vital operational conundrum: deciding between further network exploration to obtain more comprehensive data or addressing the repair of already identified faults. As information on the fault location and repair timelines becomes available, the model allows for dynamic adaptation of crew dispatch decisions. In addition, this study proposes a conservative power flow constraint set that considers two network loading scenarios within the final network configuration. This approach results in the determination of an upper and a lower bound for node voltage levels and an upper bound for power line flows. To underscore the practicality and scalability of the proposed model, we have demonstrated its application using IEEE 123-node and 8500-node test systems, where it delivered promising results.
Ali Jalilian, Babak Taheri, Daniel K. Molzahn
2023-09-15T18:51:55Z
http://arxiv.org/abs/2309.08704v2
Co-Optimization of Damage Assessment and Restoration: A Resilience-Driven Dynamic Crew Allocation for Power Distribution Systems ###### Abstract This study introduces a mixed-integer linear programming (MILP) model, effectively co-optimizing patrolling, damage assessment, fault isolation, repair, and load re-energization processes. The model is designed to solve a vital operational conundrum: deciding between further network exploration to obtain more comprehensive data or addressing the repair of already identified faults. As information on the fault location and repair timelines becomes available, the model allows for dynamic adaptation of crew dispatch decisions. In addition, this study proposes a conservative power flow constraint set that considers two network loading scenarios within the final network configuration. This approach results in the determination of an upper and a lower bound for node voltage levels and an upper bound for power line flows. To underscore the practicality and scalability of the proposed model, we have demonstrated its application using IEEE 123-node and 8500-node test systems, where it delivered promising results. Damage assessment, fault management, field crew, resilience, and service restoration. ## Nomenclature **Sets and Indexes:** \(\mathcal{B},b\): Set and index of buses \(\mathcal{L},\ell\): Set and index of sections (lines) \(\mathcal{Z},z\): Set and index of electrical zones \(\mathcal{Q},q\): Set and index of unpatrolled zones \(\mathcal{R},r\): Set and index of RCSs \(\mathcal{M},m\): Set and index for manual switches (MS) \(\mathcal{F},f\): Set and index of faults \(\mathcal{C},c\): Set and index of available crews \(\mathcal{E},e\): Set and index of equipment in patrol zones \(\mathcal{P},p\): Set and index of all locations in crew routing \(\mathcal{T},t\): Set and index of time-steps **Subsets:** \(\mathcal{M}\backslash\mathcal{R}_{z,z^{\prime}}\): Set of MSs \(\backslash\) RCSs connecting \(z\) and \(z^{\prime}\) \(\mathcal{F}_{z}\backslash\mathcal{B}_{z}\): Set of faults \(\backslash\) buses in \(z\) \(\mathcal{P}_{\mathcal{C}}\): Set of crews' initial locations \(\mathcal{P}_{\mathcal{F}}\backslash\mathcal{P}_{\mathcal{M}}\): Set of faults \(\backslash\) MSs' locations \(\mathcal{P}_{\mathcal{M}^{\prime}}\): Duplicate set of MSs' location for 2nd switching \(\mathcal{F}_{\mathcal{Q}}\): Set of hypothetical faults in unpatrolled zones **Parameters:** \(T^{\text{repair}}\): Required repair time for faults \(T^{\text{patrol}}\): Estimated patrol time of patrol zones \(\rho_{e}\): Failure probability of equipment \(C_{z}^{\text{out}}\): Cost coefficient commensurate to ENS \(C^{\text{cra}}\): Cost coefficient commensurate to crews' travels \(\Delta_{p,p^{\prime}}\): Travel time between two points for crews \(BT\): A large out-of-scope amount of time \(P_{z}\): Zonal power consumption \(M\): Big-enough constant positive value \(\alpha_{b}^{\text{sub}}\): Binary value showing if a bus is a substation \(\beta_{m}^{\text{MSI}}\): Binary value showing if an MS is initially closed \(D_{b}\): Active and reactive demand **Binary Variables:** \(\beta_{p,p^{\prime}}=1\) if a path from \(p\) to \(p^{\prime}\) is traversed by a crew \(\beta_{p}^{\prime}\): Indicates if a crew visits \(p\) \(\beta_{m}^{MSP}\): Indicates if a crew opens an MS during a patrol \(\beta_{z^{\prime},z}^{z^{\prime}}\): Indicates if \(z\) is energized by \(z^{\prime}\) \(\beta_{m}^{MSP}\): Indicates if an MS is finally closed \(\beta_{r}^{RCS}\): Indicates if an RCS is finally 
closed \(\beta_{l}^{\text{line}}\): Indicates if a line is finally connected \(\alpha_{z}^{\text{root}}\): Indicates if there is a substation or a master DG \(\alpha_{b}^{DG}\): Indicates if there is a master DG in \(b\) \(\beta_{z,t}^{zt}\): Indicates if a zone is energized in a time-step \(\zeta_{z,z^{\prime}}\): Indicates if zone \(z\) is energized earlier than \(z^{\prime}\) **Continuous Variables:** \(T_{z}^{\text{out}}\): Outage time \(\tau_{p}^{c}\): Finish time of an action in \(p\) by crews \(T_{p}^{\text{op}}\): Operation time for a remedial action in \(p\) \(U_{b}\): Voltage magnitude of buses \(\varphi_{\ell}\backslash G_{b}\): Active and reactive line flow \(\backslash\) power generation ## I Introduction Critical infrastructures (CIs), such as electricity, are integral to the functioning of societies. These backbones of economy, security, and health are increasingly susceptible to high-impact, low-probability (HILP) events, including natural disasters and adverse weather conditions [1, 2]. A disruption in these infrastructures, especially in power distribution systems, not only affects other essential CIs, like transportation, communication, and water supply, but also has considerable societal consequences. With climate change intensifying the frequency and severity of such extreme events, the resilience of power systems, i.e., their ability to prepare for, withstand, and recover swiftly from disruptive events, is gaining increased attention. Traditional power systems designed to endure low-impact high-probability (LIHP) events are being challenged to evolve and handle these significant HILP incidents. The need for resilience is particularly critical at the distribution level, where \(80-90\%\) of power outages occur [3], thus justifying the recent surge in related research. This paper addresses this critical issue, focusing on strategies to expedite power restoration following disruptions at the distribution level. It offers a comprehensive model that takes into account fault isolation, damage assessment, network reconfiguration, and microgrid formation. Our model aims to bridge gaps in existing literature, particularly in dealing with these complex, interrelated processes. Therefore, our literature review touches upon five pivotal facets in the realm of power system restoration: micro-grid formation, network reconfiguration, fault isolation, damage assessment, and addressing technical constraints. _Microgrid Formation_: As access to the upstream network is often impaired during fault conditions, deploying a multitude of distributed energy sources at the distribution network level in a microgrid can improve resilience. Studies [4, 5] have emphasized the importance of such resources in the form of distributed generators (DGs) or mobile energy units [6]. While a substantial amount of research has focused on the energy sufficiency, economic viability, and technical limitations of microgrids, others have shed light on microgrid formation through network reconfiguration tactics [7, 8]. _Network Reconfiguration_: A multi-stage load restoration process inherently calls for iterative network reconfigurations, utilizing sectionalizing switches at each stage. These switches could be remote-controlled or manual. The act of manual switching necessitates field crew presence, which could extend the switching time due to variables such as geographical attributes, traffic conditions, and crew availability. 
Various studies have dissected the implications of the remote-controlled switches' (RCS) switching actions in distribution networks [9, 10]. Manual switches (MSs), i.e. manual sectionalizers, cut-out fuses, or even circuit breakers without remote control capability, also provide pragmatic and efficient load restoration capabilities. Also, due to the possibility of damage to the cyber network, especially in the event of severe fault conditions [11], remotely unreachable RCSs could still be engaged manually to help achieve a faster restoration. However, few references have incorporated the optimal performance of MSs in their proposed restoration processes. In [12, 13, 14], operation crews for closing MSs were considered. These papers assume that all of the MSs have been opened in the fault isolation phase. This assumption overlooks the importance of optimal fault isolation. _Fault Isolation_: Establishing optimal primary fault isolation paves the way for accelerated load pick-up during the restoration process. However, this crucial step has been overlooked in several studies.
* Incorporating two network loading conditions into conventional power flow constraints at the network's final configuration, our model establishes boundaries for nodes' voltage levels and limits for line power flows. This results in safe operations across all stages, and importantly, the proposed constraints do not necessitate the segregation of zones with power sources, such as substations or DGs.
* Demonstrating the effectiveness and scalability of our proposed algorithm through numerical experiments, we present results from medium- and large-scale test cases.

This paper is structured as follows: Section II explains our proposed methodology. Section III shows our numerical results. Section IV offers conclusions and future directions.

## II Proposed Methodology

Our methodology devises an intelligent decision-making framework tailored for the complex process of restoring a distribution network after severe weather-induced equipment failures. By balancing system repair tasks, switching operations, and damage assessments, this methodology navigates the challenges efficiently.

### _Decision Framework_

#### II-A1 Event Description and Network Blackout

Severe weather is notorious for instigating a chain of equipment failures within distribution networks. Protective devices, sensing these faults, trigger automatic shutdown protocols in the preliminary phase of such an event. The situation often worsens as the event unfolds, causing more damage and inducing more faults. Field crews can only be deployed once safe operational conditions are restored. As a result, during this stage, comprehensive information regarding the damage--such as the number and location of faults, extent of the damage, and anticipated repair duration--is typically scarce.

#### II-A2 Damage Assessment and Patrol Tasks

In light of the transportation network's characteristics, the distribution feeder is divided into several patrolling areas for damage evaluation and data gathering. Here, we assume that the number and extent of patrol zones are predetermined. Taking into account the event's severity and the equipment's fragility curves, we determine the likelihood of equipment failure [24]. To each area, we assign a hypothetical fault with a repair time equivalent to the sum of the patrolling duration for that area and the expected repair time. This repair time is deduced from equation (1): \[T_{q}^{\text{repair}}=T_{q}^{\text{patrol}}+\sum_{e\in\mathcal{E}_{q}}T_{e}^{\text{repair}}\rho_{e}. \tag{1}\] Here, (1) computes the repair time \(T_{q}^{\text{repair}}\) for the hypothetical fault in patrolling area \(q\). This time is the sum of the patrol time \(T_{q}^{\text{patrol}}\) and the product of each equipment's repair time \(T_{e}^{\text{repair}}\) and failure probability \(\rho_{e}\), summed over the equipment set \(\mathcal{E}_{q}\) of the area.
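As a small numerical illustration of Eq. (1), the snippet below computes the repair time assigned to the hypothetical fault of one patrol zone from its patrol time and the repair times and failure probabilities of the equipment it contains; the data values are invented for the example.

```python
def hypothetical_fault_repair_time(patrol_time, equipment):
    """Eq. (1): T_q^repair = T_q^patrol + sum_e T_e^repair * rho_e.

    patrol_time : estimated patrol time of zone q (e.g., in minutes)
    equipment   : list of (repair_time, failure_probability) pairs for zone q
    """
    return patrol_time + sum(t_rep * rho for t_rep, rho in equipment)

# Illustrative data: 20-minute patrol and three pieces of equipment in the zone
equipment_q = [(90.0, 0.30), (45.0, 0.10), (120.0, 0.05)]   # (T_e^repair, rho_e)
print(hypothetical_fault_repair_time(20.0, equipment_q))     # 20 + 27 + 4.5 + 6 = 57.5
```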
#### II-A3 Task Assignment

One of the significant challenges during the restoration process is to determine the optimal allocation of various tasks--such as switching operations and repair of actual and hypothetical (patrolling) faults--to the repair crews. The distribution of tasks among repair crews is depicted in Fig. 1. We consider three types of manual switching actions:

1. During-patrol MS opening (optimal primary fault isolation).
2. Deploying a crew for the first switching action of an MS (open/close).
3. Deploying a crew for the second switching action of an MS (close).

The first switching type is described within a patrol action, forming a single patrol-and-switch task, while the second and third switching types are single-task duties. Consequently, under our proposed methodology, normally closed MSs can be opened either during patrol or by directly deploying a crew. If an MS is opened, it can be closed via the second switching action. Conversely, normally open switches can only be closed through a first-time direct switching operation. To manage the modeling complexity and computational challenges, in this paper, we do not operate each MS more than two times. This modeling choice prevents reconfiguration of the energized parts of the network in each set of decisions.

Figure 1: Task distribution for repair crews

#### II-A4 Chronological Description

As highlighted in Section II-B, we dispatch our crews based on specific routing decisions. Keeping these decisions updated is of utmost importance. To address this, we incorporate proactive re-optimization, scheduled either at set times or at regular intervals. Additionally, we use reactive re-optimization, which is initiated after an area has been patrolled or when a new fault comes to light and has been thoroughly evaluated. For the purposes of our research, we focus on the timings associated with crew actions and the energization of zones. These have been integrated as decision variables within our optimization framework. Consequently, our proposed methodology operates by responding to variable time events. We define a set of events, represented by \(\mathcal{T}\), which captures the order of zone energizations and assigns a unique set of power flow constraints to each instance of \(t\in\mathcal{T}\). Instead of continually checking, we also consider an alternative approach that verifies power flow only when the network reaches its final configuration, i.e., when no further zones are left to be restored. This approach eliminates the need to explicitly model power flow across the expansive set of events, \(\mathcal{T}\), thus greatly improving computational efficiency. Fig. 2 provides a chronological overview of our model.

### _Mathematical Formulation_

The primary goal of a restoration plan is to minimize the overall cost incurred from an event. A significant portion of this cost accrues from electric service disruptions. There are also costs associated with the restoration process, such as crew mobilization expenses, which are comparatively minimal but essential to consider to prevent the dispatch of remote crews for certain tasks. The proposed model, grounded in this concept, aims to minimize the total cost: \[\text{Cost}=\sum_{z\in\mathcal{Z}}T_{z}^{\text{out}}P_{z}C_{z}^{\text{out}}+\sum_{p,p^{\prime}\in\mathcal{P}}\beta_{p,p^{\prime}}\Delta_{p,p^{\prime}}C^{\text{tra}}, \tag{2}\] where \(\mathcal{Z}\) represents the set of all electrical zones, with \(z\) as an index. The outage duration is represented by \(T_{z}^{\text{out}}\), \(P_{z}\) is the power consumption, and \(C_{z}^{\text{out}}\) is a cost coefficient corresponding to the energy not supplied. The first term represents the customers' damage costs, which is a function of these variables. In the second term, \(\mathcal{P}\) denotes the set of all locations within the crew routing, with a pair of indexes \((p,p^{\prime})\). The binary variable \(\beta_{p,p^{\prime}}\) indicates whether a crew traverses a path from location \(p\) to location \(p^{\prime}\), \(\Delta_{p,p^{\prime}}\) represents the travel time between these locations, and \(C^{\text{tra}}\) is a cost coefficient corresponding to the crews' travel. The second term encapsulates the cost associated with crew teams and their vehicles, accounting for the distance covered, the duration of travel, and the related cost coefficient. The summation is performed over all location pairs.

The optimization problem we address is bound by multiple technical and operational constraints. Fig. 3 illustrates the primary characteristics of five distinct constraint classes and the interrelationships among them. Notably, action sequences, which are pivotal decision variables in crew routing constraints, have a significant influence over various action timings. This is because an action's completion time is contingent on its placement within a crew's list of duties. Furthermore, these sequences are crucial for network reconfiguration, as they dictate decisions regarding the switching of MSs. Each class of constraints will be detailed in the ensuing sections.

#### II-B1 Crew Routing

The process of optimally allocating repair crews for manual switching and fault repairs is a routing problem. As previously discussed, to maintain an accurate description of the restoration process without assuming the MSs are open at the start of the switching process, it is necessary to consider the possibility of two switching operations for each MS. With this in mind, the crew routing constraints are: \[\sum_{p^{\prime}\in\mathcal{P}}\beta_{p,p^{\prime}}\leq\beta_{p}^{V};\quad\forall p\in\mathcal{P} \tag{3a}\] \[\sum_{p^{\prime}\in\mathcal{P}}\beta_{p^{\prime},p}=\beta_{p}^{V};\quad\forall p\in\mathcal{P}/\mathcal{P}_{\mathcal{C}} \tag{3b}\] \[\beta_{p^{\prime}}^{V}\leq\beta_{m}^{MSP}+\beta_{p}^{V}\leq 1; \tag{3c}\]
The outage duration is represented by \(T_{z}^{\text{out}}\), \(P_{z}\) is the power consumption, and \(C_{z}^{\text{out}}\) is a cost coefficient corresponding to the energy not supplied. The first term represents the customers' damage costs, which is a function of these variables. In the second term, \(\mathcal{P}\) denotes the set of all locations within the crew routing, with a pair of indexes \((p,p^{\prime})\). The binary variable \(\beta_{p,p^{\prime}}\) indicates whether a crew traverses a path from location \(p\) to location \(p^{\prime}\), \(\Delta_{p,p^{\prime}}\) represents the travel time between these locations, and \(C^{\text{tra}}\) is a cost coefficient corresponding to the crews' travel. The second term encapsulates the cost associated with crew teams and their vehicles, accounting for the distance covered, the duration of travel, and the related cost coefficient. The summation is performed over all location pairs. The optimization problem we address is bound by multiple technical and operational constraints. Fig. 3 illustrates the primary characteristics of five distinct constraint classes and the interrelationships among them. Notably, action sequences, which are pivotal decision variables in crew routing constraints, have a significant influence over various action timings. This is because an action's completion time is contingent on its placement within a crew's list of duties. Furthermore, these sequences are crucial for network reconfiguration, as they dictate decisions regarding the switching of MSs. Each class of constraints will be detailed in the ensuing sections. #### Iii-B1 Crev Routing The process of optimally allocating repair crews for manual switching and fault repairs is a routing problem. As previously discussed, to maintain an accurate description of the restoration process without assuming the MSs are open at the start of the switching process, it is necessary to consider the possibility of two switching operations for each MS. With this in mind, the crew routing constraints are: \[\sum_{p^{\prime}\in\mathcal{P}}\beta_{p,p^{\prime}}\leq\beta_{p}^{ V};\quad\forall p\in\mathcal{P}\] (3a) \[\sum_{p^{\prime}\in\mathcal{P}}\beta_{p^{\prime},p}=\beta_{p}^{V}; \quad\forall p\in\mathcal{P}/\mathcal{P}_{\mathcal{C}}\] (3b) \[\beta_{p^{\prime}}^{V}\leq\beta_{m}^{MSP}+\beta_{p}^{V}\leq 1;\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ the switching time in (4c) and (4d). As per (4e), the second manual switching of an MS must occur after the first one. 
#### II-B3 Network Reconfiguration Constraints

The next set of constraints relates to the energization paths for each load or zone and governs whether parts of the network operate as isolated islands or remain connected to the upstream network: \[\alpha_{z}^{\text{root}}=\sum_{b\in\mathcal{B}_{z}}\left\{\alpha_{b}^{\text{sub}}+\alpha_{b}^{DG}\right\};\quad\forall z\in\mathcal{Z} \tag{5a}\] \[\alpha_{z}^{\text{root}}+\sum_{z^{\prime}\in\mathcal{Z}}\beta_{z^{\prime},z}^{zz}=1;\quad\forall z\in\mathcal{Z} \tag{5b}\] \[\beta_{z^{\prime},z}^{zz}+\beta_{z,z^{\prime}}^{zz}\leq 1;\quad\forall z,z^{\prime}\in\mathcal{Z} \tag{5c}\] \[\beta_{z^{\prime},z}^{zz}+\beta_{z,z^{\prime}}^{zz}=\sum_{r\in\mathcal{R}_{z,z^{\prime}}}\beta_{r}^{RCS}+\sum_{m\in\mathcal{M}_{z,z^{\prime}}}\beta_{m}^{MSF};\quad\forall z,z^{\prime}\in\mathcal{Z} \tag{5d}\] \[\beta_{m}^{MSF}=\beta_{m}^{MSI}\left(1-\beta_{p}^{V}-\beta_{m}^{MSP}+\beta_{p^{\prime}}^{V}\right)+\left(1-\beta_{m}^{MSI}\right)\left(\beta_{p}^{V}-\beta_{p^{\prime}}^{V}\right);\quad\forall m\equiv p\equiv p^{\prime},m\in\mathcal{M},p\in\mathcal{P}_{\mathcal{M}},p^{\prime}\in\mathcal{P}_{\mathcal{M}^{\prime}} \tag{5e}\] \[\beta_{\ell}^{\text{line}}=\begin{cases}\beta_{r}^{RCS};&\ell=\mathcal{L}_{r}^{\text{RCS}}\\ \beta_{m}^{MSF};&\ell=\mathcal{L}_{m}^{\text{MS}}\\ 1;&\text{otherwise}\end{cases}\quad\forall\ell\in\mathcal{L},m\in\mathcal{M},r\in\mathcal{R}, \tag{5f}\] where \(\alpha_{z}^{\text{root}}\) is a binary variable indicating the power supply reference zone; \(\alpha_{b}^{\text{sub}}\) is a binary value showing if bus \(b\) is a substation; \(\alpha_{b}^{DG}\) is a binary variable indicating if bus \(b\) is hosting a master DG, i.e., a DG that remains separated from substations or other master DGs; and \(\mathcal{B}_{z}\) is the set of buses in zone \(z\). The term \(\beta_{z^{\prime},z}^{zz}\) is a binary variable indicating whether zone \(z\) is energized by zone \(z^{\prime}\); \(\beta_{r}^{RCS}\) is a binary variable indicating the connection status of RCS \(r\) in (5d); \(\mathcal{R}_{z,z^{\prime}}\) and \(\mathcal{M}_{z,z^{\prime}}\) are the sets of all RCSs and MSs between zones \(z\) and \(z^{\prime}\), respectively; and \(\beta_{m}^{MSF}\) is a binary variable representing the final status of MS \(m\). In (5e), \(\beta_{m}^{MSI}\) represents the initial status of MS \(m\). In (5f), \(\beta_{\ell}^{\text{line}}\) is the final line connection status, and \(\mathcal{L}_{m}^{\text{MS}}\) and \(\mathcal{L}_{r}^{\text{RCS}}\) are the lines switchable by MS \(m\) and RCS \(r\), respectively. The reference zone includes a substation bus or a master DG (5a), so it is not energized through another zone. This statement is reflected in (5b), which indicates that each zone is either a reference zone or is energized by another zone. This condition also implies maintaining the radial structure of the network. As described in (5c), for a pair of zones in the network, only one zone can energize the other (parent/child relation). In a complex network structure, it is possible for two zones to be connected via multiple switches. If one zone energizes another zone, only one switch (RCS or MS) between the two zones must be in the connected state (5d). The MS final status is calculated based on its initial status and switching actions in (5e). The final line connection status is calculated based on the final MS or RCS status (5f).
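The parent/child logic imposed by (5a)-(5c) can be illustrated with a small standalone check: given a candidate set of root zones and a parent assignment, the function below verifies that every zone is either a root or is energized by exactly one other zone and that no two zones energize each other. The zone names and candidate assignments are hypothetical.

```python
def is_radial_assignment(zones, roots, parent):
    """Check the logic of (5b)-(5c) for a candidate energization structure.

    zones  : iterable of zone identifiers
    roots  : set of zones containing a substation or a master DG, cf. (5a)
    parent : dict mapping a non-root zone to the single zone energizing it
    """
    for z in zones:
        is_root = z in roots
        has_parent = z in parent
        # (5b): each zone is either a reference (root) zone or has exactly one parent
        if is_root == has_parent:
            return False
        # (5c): two zones may not energize each other
        if has_parent and parent.get(parent[z]) == z:
            return False
    return True

# Illustrative candidates: Z1 holds the substation and feeds Z2 and Z3; Z3 feeds Z4
zones = ["Z1", "Z2", "Z3", "Z4"]
print(is_radial_assignment(zones, roots={"Z1"},
                           parent={"Z2": "Z1", "Z3": "Z1", "Z4": "Z3"}))  # True
print(is_radial_assignment(zones, roots={"Z1"},
                           parent={"Z2": "Z3", "Z3": "Z2", "Z4": "Z1"}))  # False
```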
#### II-B4 Zone Restoration Times

So far, the constraints related to the energization path of the network zones, switching, and repair times have been introduced. Knowing these values, the outage duration (energization time) of different zones is calculated. The parent must be energized before the child for each pair of connected zones: \[T_{z}^{\text{out}}\geq T_{z^{\prime}}^{\text{out}}-M\left(1-\beta_{z^{\prime},z}^{zz}\right);\quad\forall z,z^{\prime}\in\mathcal{Z}, \tag{6}\] where \(T_{z}^{\text{out}}\) is the outage duration of zone \(z\). If an MS isolates two zones, the zones on each side of the switch cannot have a restoration time smaller than the switching time. Before the MS is opened, these two zones are connected and they thus cannot be restored due to the lack of fault isolation or violations of technical constraints: \[T_{z}^{\text{out}}\geq\tau_{p}^{c}-M\left(1-\beta_{p}^{V}-\beta_{m}^{MSP}\right);\quad\forall z,z^{\prime}\in\mathcal{Z},m\in\mathcal{M}_{z,z^{\prime}},p\equiv m,p\in\mathcal{P}_{\mathcal{M}},\beta_{m}^{MSI}=1, \tag{7}\] where \(\mathcal{M}_{z,z^{\prime}}\) is the set of all MSs connecting zones \(z\) and \(z^{\prime}\) and \(\beta_{m}^{MSI}=1\) indicates that MS \(m\) is initially closed. If an MS is finally closed after a second switching, one of its connected zones will be the parent and the other will be the child. In this case, according to the description of the load restoration process, first, the switch is opened in order to separate the two zones and energize the parent zone, and then it is closed again in order to restore the child zone. Therefore, only the child zone will have a restoration time greater than the second switching time: \[T_{z}^{\text{out}}\geq\tau_{p}^{c}-M\left(2-\beta_{z^{\prime},z}^{zz}-\beta_{p}^{V}\right);\quad\forall z,z^{\prime}\in\mathcal{Z},m\in\mathcal{M}^{\prime}_{z,z^{\prime}},p\equiv m,p\in\mathcal{P}_{\mathcal{M}^{\prime}},\beta_{m}^{MSI}=1, \tag{8}\] where \(\mathcal{M}_{z,z^{\prime}}^{\prime}\) represents the set of MSs connecting \(z\) and \(z^{\prime}\) for second switching actions. For a child zone restored by closing a normally open MS, the zone restoration time will be greater than the manual switching time: \[T_{z}^{\text{out}}\geq\tau_{p}^{c}-M\left(2-\beta_{z^{\prime},z}^{zz}-\beta_{p}^{V}\right);\quad\forall z,z^{\prime}\in\mathcal{Z},m\in\mathcal{M}_{z,z^{\prime}},p\equiv m,p\in\mathcal{P}_{\mathcal{M}},\beta_{m}^{MSI}=0. \tag{9}\] If a normally closed MS remains closed, it will surely energize one of the two zones on its two sides. In this situation, the parent zone cannot be energized before the child zone because these zones are connected during the entire procedure: \[T_{z}^{\text{out}}\geq T_{z^{\prime}}^{\text{out}}-M\left(1-\beta_{z,z^{\prime}}^{zz}+\beta_{p}^{V}+\beta_{m}^{MSP}\right);\quad\forall z,z^{\prime}\in\mathcal{Z},m\in\mathcal{M}_{z,z^{\prime}},p\equiv m,p\in\mathcal{P}_{\mathcal{M}},\beta_{m}^{MSI}=1. \tag{10}\] A zone cannot be energized until all related faults have been repaired. Therefore, the time to restore a zone must be longer than the time to repair all the faults in that zone: \[T_{z}^{\text{out}}\geq\tau_{p}^{c};\quad\forall z\in\mathcal{Z},f\in\mathcal{F}_{z},p\equiv f,p\in\mathcal{P}_{\mathcal{F}}, \tag{11}\] where \(\mathcal{F}_{z}\) is the set of all faults in zone \(z\) and \(\mathcal{P}_{\mathcal{F}}\) is the set of all locations \(p\) with a fault.
If an MS were closed before all faults are repaired in the child zone, the parent zone would be subject to the repair time of the offspring zone. Therefore, it is preferred that the MS does not have a closing time earlier than the restoration time of the child zone: \[\tau_{p}^{c}\left(1-\beta_{m}^{MSI}\right)+\tau_{p^{\prime}}^{c}\beta_{m}^{MSI}\geq T_{z}^{\text{out}}-M\left(1-\beta_{z^{\prime},z}^{zz}\right);\quad\forall z,z^{\prime}\in\mathcal{Z},m\in\mathcal{M}_{z,z^{\prime}},m\equiv p\equiv p^{\prime},p\in\mathcal{P}_{\mathcal{M}},p^{\prime}\in\mathcal{P}_{\mathcal{M}^{\prime}}. \tag{12}\] In essence, constraints (6)-(12) define the interactions between zones, switches, and repair activities during restoration.

#### II-B5 Power Flow Expression

Here, we discuss power flow expressions, which consist of a multi-time-step conventional power flow (PF) and a time-step-free conservative PF in accordance with the proposed routing framework. The conventional PF model is discussed first. Consider two generic zones able to be connected by a switch that possibly have their own active\(\backslash\)reactive power injections. When a zone \(z\) energizes another zone \(z^{\prime}\) after its own energization, the power flow and voltage conditions may change in zone \(z\). Therefore, for a network with \(n\) zones, \(n\) sets of power flow equations are required to guarantee safe voltage and line flow values. During the restoration process, each step involves the energization of one zone, and one set of power flow constraints is added at each step. These constraints are: \[\zeta_{z,z^{\prime}}\geq\left(T_{z^{\prime}}^{\text{out}}-T_{z}^{\text{out}}\right)/T^{\text{max}};\quad\forall z,z^{\prime}\in\mathcal{Z} \tag{13a}\] \[\sum_{z\in\mathcal{Z}}\beta_{z,t}^{zt}=t;\quad\forall t\in\mathcal{T} \tag{13b}\] \[\beta_{z,t}^{zt}\geq\beta_{z,t-1}^{zt};\quad\forall t\in\mathcal{T},z\in\mathcal{Z} \tag{13c}\] \[\sum_{t\in\mathcal{T}}\left(\beta_{z,t}^{zt}-\beta_{z^{\prime},t}^{zt}\right)\geq 1-\left(1-\zeta_{z,z^{\prime}}\right)M;\quad\forall z,z^{\prime}\in\mathcal{Z} \tag{13d}\] \[\sum_{t\in\mathcal{T}}\left(\beta_{z,t}^{zt}-\beta_{z^{\prime},t}^{zt}\right)\geq 1-\left(1-\beta_{z,z^{\prime}}^{zz}\right)M;\quad\forall z,z^{\prime}\in\mathcal{Z}, \tag{13e}\] where \(\zeta_{z,z^{\prime}}\) is a binary variable indicating an earlier energization time for zone \(z\) than for \(z^{\prime}\) and \(T^{\text{max}}\) is the maximum possible outage time. The binary variable \(\beta_{z,t}^{zt}\) in (13b) and (13c) tracks the energization status of each zone at each time step, where \(t\) represents a time step and \(\mathcal{T}\) is the set of all time steps. In each time step, one zone is energized (13b) and remains at that state for the rest of the process (13c). For any pair of zones \(z\) and \(z^{\prime}\), if \(z\) is energized earlier (\(\zeta_{z,z^{\prime}}=1\) in (13d)) or is the parent zone (\(\beta_{z,z^{\prime}}^{zz}=1\) in (13e)), then it has been in the network for more time steps. Otherwise, the constraints are relaxed by a large margin of \(M\). These constraints ensure that power is transferred in the correct order following the energization paths.
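Several of the constraints above, e.g., (6), (7), and (13d)-(13e), rely on the same big-M pattern: a bound that is active only when an indicator variable equals one and is relaxed otherwise. The short standalone snippet below illustrates that pattern for constraint (6) with invented numbers.

```python
def big_m_bound(t_child, t_parent, indicator, big_m=1e4):
    """Constraint (6) as a predicate: T_z >= T_z' - M * (1 - beta_{z',z})."""
    return t_child >= t_parent - big_m * (1 - indicator)

# If z' energizes z (indicator = 1), the child cannot be restored before the parent
print(big_m_bound(t_child=3.0, t_parent=5.0, indicator=1))  # False: violates (6)
print(big_m_bound(t_child=6.0, t_parent=5.0, indicator=1))  # True
# If z' does not energize z (indicator = 0), the bound is relaxed by the big-M term
print(big_m_bound(t_child=3.0, t_parent=5.0, indicator=0))  # True
```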
A set of power flow equations consists of the voltage drop equation (14a) (see [25] for details on the model we use in this paper), the power balance (14b), power source limitations (14c), and voltage and line flow limits (e.g., see [26]): \[\pm f\left(U_{b,t},\varphi_{\ell,t}\right)\leq M\left(1-\beta_{\ell}^{\text{line}}\right) \tag{14a}\] \[\sum_{\ell\sim b}\varphi_{\ell,t}+\beta_{z,t}^{zt}D_{b}-G_{b,t}=0;\quad\forall b\in\mathcal{B}_{z},z\in\mathcal{Z},t\in\mathcal{T} \tag{14b}\] \[\beta_{z,t}^{zt}G_{b}^{\text{min}}\leq G_{b,t}\leq\beta_{z,t}^{zt}G_{b}^{\text{max}};\quad\forall b\in\mathcal{B}_{z},z\in\mathcal{Z},t\in\mathcal{T}, \tag{14c}\] where \(f\left(U_{b,t},\varphi_{\ell,t}\right)\) in (14a) represents the voltage drop as a function of the voltage magnitude at bus \(b\) at time \(t\), \(U_{b,t}\), and the flow of line \(\ell\) at time \(t\), \(\varphi_{\ell,t}\). In (14b), the summation term represents the total power flow export from bus \(b\) to all lines connected to it, denoted as \(\ell\sim b\), \(D_{b}\) is the demand at bus \(b\), and \(G_{b,t}\) is the generation at bus \(b\) at time \(t\). Variables \(\varphi\), \(D\), and \(G\) concisely represent both active and reactive powers. In (14c), \(G_{b}^{\text{min}}\) and \(G_{b}^{\text{max}}\) are the minimum and maximum generation at bus \(b\), respectively.

The concept of conservative PF is introduced to reduce the computational burden of solving multiple sets of power flow equations. Two loading conditions, passive loading and active loading, in the final configuration of the network, determine the upper and lower bounds for nodes' voltage levels and an upper bound for lines' power flows in all steps of the restoration process. To acquire these bounds, in this paper, we assume that, as the result of a good switch placement strategy in a previous planning stage, our network loading in electrical zones is nearly three-phase balanced [12], the condition at which voltage levels and line loading conditions monotonically change along the feeder [27].

_Passive loading_: The purpose of this loading condition is to determine a lower bound for voltage levels, \(LB\{U\}\), in all restoration steps. This condition is referred to as "passive loading." In this condition, DGs and capacitors can supply power up to their designated zone's and downstream zones' aggregate demand while respecting their own generation upper limits; however, active and reactive injections from downstream zones to upstream zones are prohibited. Furthermore, the DGs' lower power generation limits are also relaxed to accommodate cases where the lower generation limits are above the total power consumption within their designated zone and its downstream zones. Consequently, each zone treats all of its downstream zones as a collective passive load, as shown in Fig. 4. Passive loading lowers voltage levels as new zones are restored.
For the passive loading condition, the power flow constraints are: \[\pm f\left(\underline{U}_{b},\underline{\varphi}_{\ell}\right)\leq M\left(1-\beta_{\ell}^{\text{line}}\right) \tag{15a}\] \[\sum_{\ell\sim b}\underline{\varphi}_{\ell}+D_{b}-\underline{G}_{b}=0;\quad\forall b\in\mathcal{B} \tag{15b}\] \[\underline{G}_{b}\leq G_{b}^{\text{max}};\quad\forall b\in\mathcal{B} \tag{15c}\] \[\underline{\varphi}_{\ell}\geq\left(\zeta_{z,z^{\prime}}-1\right)M;\quad\forall z,z^{\prime}\in\mathcal{Z}, \tag{15d}\] where \(\underline{U}_{b}\) is the voltage magnitude at bus \(b\), \(\underline{\varphi}_{\ell}\) is the flow on line \(\ell\), and \(\underline{G}_{b}\) is the generation at bus \(b\), respectively, all in the passive loading condition. If there is a time difference between the energization of \(z\) and \(z^{\prime}\), then \(z^{\prime}\) is added as a passive load \(\left(\underline{\varphi}_{\ell}\geq 0\right)\) to \(z\), as shown in (15d). This constraint reduces nodes' voltage levels monotonically by adding a new zone. In (15c), lower bounds on the power generation are relaxed since, otherwise, \(\sum_{b\in z^{\prime}}D_{b}\leq\sum_{b\in z^{\prime}}G_{b}^{\text{min}}\) would force \(T_{z}^{\text{out}}=T_{z^{\prime}}^{\text{out}}\) such that \(z^{\prime}\) can send the extra generated power to \(z\).

Fig. 4: Passive loading condition

_Active loading_: In the active loading condition, new zones are added as active loads \(\left(\bar{\varphi}_{\ell}\leq 0\right)\), leading to higher voltage levels. This paradigm is termed "active loading" because a specific zone accommodates DGs within its domain along with power injections from downstream zones to meet the entire demand within the zone, potentially allowing for power export to the upstream zone. However, the outbound power transmission from a zone to its downstream counterparts remains prohibited. To be able to generate that much power, the upper limit of the DGs' power generation is relaxed, permitting them to produce power beyond their rated capacities. This approach additionally considers the presence of a DG at every node. As a result, each zone treats its entire downstream network as an active load, as shown in Fig. 5. As new zones are restored, active loading increases voltage levels. Thus, if the voltage levels for the final configuration are within the acceptable range, the voltage levels of all preceding configurations also satisfy the voltage limits. For the active loading condition, the power flow constraints are: \[\pm f\left(\bar{U}_{b},\bar{\varphi}_{\ell}\right)\leq M\left(1-\beta_{\ell}^{\text{line}}\right) \tag{16a}\] \[\sum_{\ell\sim b}\bar{\varphi}_{\ell}+D_{b}-\bar{G}_{b}=0;\quad\forall b\in\mathcal{B} \tag{16b}\] \[G_{b}^{\text{min}}\leq\bar{G}_{b};\quad\forall b\in\mathcal{B} \tag{16c}\] \[\underline{G}_{b}\leq\bar{G}_{b};\quad\forall b\in\mathcal{B} \tag{16d}\] \[\bar{\varphi}_{\ell}\leq\left(1-\zeta_{z,z^{\prime}}\right)M;\quad\forall z,z^{\prime}\in\mathcal{Z}, \tag{16e}\] where \(\bar{U}_{b}\) is the voltage level at bus \(b\), \(\bar{\varphi}_{\ell}\) is the flow on line \(\ell\), and \(\bar{G}_{b}\) is the generation at bus \(b\), respectively, for the active loading condition. As shown in Section III-A, our numerical results validate the accuracy of the power flow linearization from [25] for our formulation, with voltage magnitudes within 0.0058 per unit of the nonlinear AC power flow model.
The appendix provides derivations showing how the passive and active loading conditions result in upper and lower bounds on the voltages and upper bounds on the line flows with respect to the power flow approximation's outputs. ## III Numerical Results This section empirically evaluates the proposed model using modified IEEE 123-node and IEEE 8500-node [28] networks. The simulations have been designed to validate the model's efficiency and scalability. The 123-node network shown in Fig. 6 includes \(6\) MSs and \(7\) RCSs, dividing the network into \(13\) distinct zones. For the purposes of these studies, we assume that the operation time for MSs is \(5\) minutes, while the operation time for RCSs is negligible. The parameters associated with the \(2\) DGs are presented in Table I. In our simulated scenarios, system outages are triggered by \(12\) faults, the locations and estimated repair times of which are detailed in Table II. We assume that these parameters are unknown immediately post-event and are revealed progressively during the feeder patrolling process. We have also assumed the availability of \(6\) crew teams for field operations, with patrol zones identical to the electrical zones for the sake of clarity. The cost of damage to customers is selected randomly from \(\$15\) to \(845\) per kWh, and the travel cost for crews is set at \(\$0.60\) per hour of driving time. Travel times are calculated based on the straight distance between any pair of locations in the routing problem. ### _Base Case Evaluation_ In the aftermath of an extreme event, the breakers at the substations activate, and all load points experience an interruption. After executing our proposed optimization model, Fig. 6 illustrates the sequence of actions needed to restore service to the affected load points. The total restoration process spans \(21\) optimization steps and a duration of \(6\) hours and \(36\) minutes, during which all load points are re-energized. The timing of decision updates is contingent upon the completion of zone patrols or the detection and assessment of a fault; otherwise, the timing defaults to a set value (in this case, \(30\) minutes). The update time never falls below a minimum time step length (in this case, \(10\) minutes). Fig. 6 presents the final moments of six selected steps from a total of \(21\) steps. Solid arrows connect each crew's previous location (the initial location in the time step) to its current location (the final location in the time step), illustrating their path of travel. Dashed arrows show the crews' planned routes based on the latest set of decisions, which could be altered by subsequent decisions. For example, at the start (\(t=0\)), crew \(1\) is scheduled to patrol zones Z1 and Z4, and crew \(3\) is designated for zone Z2 as shown in Fig. 6a. However, at \(t=29\), a fault is discovered in zone Z1 by crew \(1\). Consequently, the routes are updated as shown in Fig. 6b, with crew \(3\) being reassigned to repair the fault before patrolling Z2. As shown in Fig. 6c, zone Z7 is isolated through during-patrol MS operation and energized since no damage is detected in that area. It is also worth noting that some crews are already engaging in repair and restoration operations while some zones are still pending patrol. Numerical simulations were conducted using Gurobi 8.1.1 on a system equipped with an AMD Ryzen7 4800H processor and 16 GB of memory. 
The model was found to be computationally efficient, with the optimization problem for all steps resolved in less than nine seconds, as depicted in Fig. 7. The complexity of the routing problem, as shown in Fig. 7, is indicated by the number of crews, unpatrolled zones, faults, and the number of MS operations. According to the description of MS operation in section II-A3, the number of potential MS operations is twice the number of closed MSs, as these could be opened and then reclosed, plus the number of opened MSs. For the computation of precise minimum and maximum voltage levels, a multi-time step approach was employed, incorporating both linear and non-linear AC power flow constraints. However, the decision variables pertaining to the routing problem and the ultimate network configuration remained consistent with the time-step-free conservative scenario. In Fig. 8, the voltage magnitude ranges are depicted across three scenarios: the conservative time-step-free model with linear power flow, the multi-time step model with linear power flow, and the multi-time step model with non-linear exact power flow. As expected, the minimum and maximum values lie within the range of conservative bounds. Note that in the time-step-free model, the constraints merely enforce the upper and the lower bounds to be in the statutory range, allowing these variables to freely extend to the extreme ends. Therefore, our purpose here is not to assess the tightness of upper and lower bounds; an assessment of their tightness is deferred to section III-C.

\begin{table} \begin{tabular}{c|c c c} \hline \multicolumn{3}{c}{} & \multicolumn{3}{c}{DG Parameters} \\ \hline **Name** & **Location** & \(\overline{\mathbf{P}^{DG}}/\underline{\mathbf{P}^{DG}}\) & \(\overline{\mathbf{Q}^{DG}}/\underline{\mathbf{Q}^{DG}}\) \\ \hline DG1 & Bus 47 & \(200/20\) kW & \(\pm 140\) kW \\ \hline DG2 & Bus 77 & \(300/30\) kW & \(\pm 210\) kW \\ \hline \end{tabular} \end{table} TABLE I: DG Parameters

Fig. 5: Active loading condition

Fig. 6: Fault restoration process in IEEE 123-node test feeder; see [29] for an animation of the restoration procedure

Fig. 7: Program run-time across different steps

Fig. 8: Ranges of voltage magnitudes across time steps

### _Decision Update Frequency_ In practical scenarios, as data regarding fault locations and repair times are progressively revealed through ongoing patrol operations, decision updates must be frequently performed to accommodate this newly acquired information. However, the immediacy of response to this new data is curtailed by factors such as data collection and processing time, as well as the runtime of various programs required for operations like load/generation estimation, travel time prediction, and fault management. Fig. 9 illustrates the sensitivity of the total network outage cost and energy not supplied (ENS) within the study horizon to variations in the minimum decision update time. A comparison of the results for update times of \(5\) and \(10\) minutes reveals that a more rapid response does not necessarily translate to cost reduction. This finding underscores the challenge of the exploration-exploitation dilemma in the context of dynamic decision-making in this environment. ### _Non-conservative Power Flow Approach_ We next reassessed the base case scenario with the proposed methodology, replacing our conservative time-step-free power flow (PF) constraints with conventional multi-time-step linear PF constraints.
This yields a marginal improvement (0.7%) in the total network outage cost, from $435.9k in the conservative power flow scenario to $432.9k in the conventional scenario. The proximity of the outage costs showcases the tightness of our proposed bounds in this case. The run-times for each stage, for both the conventional and conservative approaches, are shown in Fig. 10. While the two approaches suggest differing decisions and the problem parameters diverge after the initial stage, a clear uptick in overall computational complexity is observed when implementing conventional PF constraints. ### _Simultaneous Restoration and Damage Assessment_ To benchmark the effectiveness of the proposed concurrent damage assessment and load restoration strategy, we considered two alternative benchmarks: 1. _First Patrol all, then Repair all (FPTR)_: Here, all crews are initially dispatched for feeder patrol and damage assessment. The objective at this stage is to minimize total patrol time [30]. Subsequently, fault repair is carried out to restore all loads. 2. _Separate Patrol and Repair Crews (SPRC)_: In this scenario, crews \(1\) and \(5\) are assigned to patrol, while the others perform repairs. Fault repair is based on progressively updated information about the location and repair time of faults [18]. As Fig. 11 demonstrates, our proposed method outperforms the others. The SPRC approach keeps repair crews idle until some faults are assessed. On the other hand, the FPTR approach, while not leaving any crews idle, fails to maximize the restored load as it prioritizes patrol action over repair activities. Fig. 12 illustrates the cumulative outage cost from the beginning of the process. ### _Scalability of the Solution Approach_ To assess the applicability of the proposed model for large-scale, real-world networks, we used the IEEE 8500-node system [28]. This network was partitioned into \(20\) patrol zones, as depicted in Fig. 13. We assumed that the network experienced a significant event, resulting in \(25\) equipment damages. Within this network, \(20\) crews, initially stationed at four locations, were tasked with damage assessment and service restoration. \(32\) randomly placed DGs with random capacity from \(100\) to \(600\) kW are shown with green-filled circles. The computation time for all optimization steps was less than \(170\) seconds, as shown in Fig. 14. This figure also reveals the routing problem's dimension, which includes the number of crews, unpatrolled zones, faults, and MS operations. The results indicate that the proposed method offers an efficient and scalable solution for power system restoration, applicable even to large-scale networks. Fig. 11: Restored load over time in different restoration approaches Fig. 12: Cumulative outage cost from the beginning of the process Fig. 10: Computation times for conventional and conservative approaches Fig. 9: Sensitivity of outage cost and ENS to decision update frequency ## IV Conclusion This paper proposes a dynamic fault management plan designed for co-optimizing damage assessment and service restoration. The primary objective is to minimize the total cost accrued from both outages and the restoration process. This objective is achieved by devising a routing plan for field crews, which includes feeder patrol, damage assessment, manual switching, and repair actions. To ensure the safe operation of the network in abnormal configurations, a conservative set of power flow equations is employed.
This approach contributes to the efficiency and scalability of the proposed framework. The results demonstrate the efficacy of simultaneous optimization and operation of feeder patrolling, damage assessment, repair, and restoration. By integrating these activities, significant benefits are observed in terms of outage reduction for the distribution network. This approach outperforms sequential phases or the deployment of separate crews for different actions. The analysis reveals that incorporating conservative power flow constraints can substantially alleviate the computational burden associated with the problem. Despite the reduced complexity, the total cost remains remarkably close to optimal levels. Consequently, the proposed fault management scheme holds promise for practical applicability in large-scale real-world distribution networks.
2309.06729
From one to infinity: symmetries of integrable systems
Integrable systems constitute an essential part of modern physics. Traditionally, to prove that a model is integrable one has to find its infinitely many symmetries or conserved quantities. In this letter, taking the well-known Korteweg-de Vries and Boussinesq equations as examples, we show that it is enough to find only one nonlocal key-symmetry to guarantee integrability. Starting from the nonlocal key-symmetry, recursion operator(s) and then infinitely many symmetries and Lax pairs can be successfully found.
S. Y. Lou, M. Jia
2023-09-13T05:10:35Z
http://arxiv.org/abs/2309.06729v2
# From one to infinity: symmetries of integrable systems ###### Abstract Integrable systems constitute an essential part of modern physics. Traditionally, to prove that a model is integrable one has to find its infinitely many symmetries or conserved quantities. In this letter, taking the well-known Korteweg-de Vries and Boussinesq equations as examples, we show that it is enough to find only one nonlocal key-symmetry to guarantee integrability. Starting from the nonlocal key-symmetry, recursion operator(s) and then infinitely many symmetries and Lax pairs can be successfully found. Nonlocal key-symmetry; Integrability; Lax pair; Integrable hierarchy pacs: 02.30.Ik, 05.45.Yv Usually, when a nonlinear system is called integrable, one has to point out in what special sense the model is integrable. A Lax integrable model possesses a Lax pair such that the model can be considered as a consistency condition of the Lax pair [1]. A Painleve integrable model requires that all movable singularities of all of its solutions with respect to an arbitrary singular manifold are poles [2; 3; 4; 5]. An IST integrable model is solvable by means of the inverse scattering transformation [6]. A CRE (or CTE) integrable model can be solved by means of the consistent Riccati expansion (or the consistent Tanh expansion) method [7; 8; 9]. A symmetry integrable system is defined as one that possesses infinitely many symmetries [10; 11]. To find infinitely many symmetries is clearly not easy work. For (1+1)-dimensional nonlinear systems, a fundamental method is to find a recursion operator such that infinitely many symmetries can be obtained by repeatedly applying the recursion operator to some trivial seed symmetries like the travelling symmetries, Galilean invariance and scaling invariance [12; 13]. For (2+1)-dimensional systems, one can use the so-called mastersymmetry approach [14] and the formal series symmetry approach [15] to find infinitely many symmetries. Here, we propose a significant question: can we find one key symmetry to guarantee the integrability of a nonlinear system? In other words, can we find infinitely many symmetries from only one symmetry? Symmetry study is fundamental for finding or establishing universal models like the standard model [16] in particle physics. There are several types of symmetry methods for solving complicated nonlinear physical problems. The symmetry approach is especially attractive in the study of integrable systems because of the existence of infinitely many local and nonlocal symmetries [11]. Local symmetries are widely used to obtain symmetry invariant solutions, to reduce the dimensions of partial differential equations and to find new integrable systems. Recently, it has been found that nonlocal symmetries are also very useful for finding novel types of exact solutions [17; 18; 19; 20; 8], integrable models, and relations among different types of integrable hierarchies [21; 22]. It is worth mentioning that, by combining some local and nonlocal symmetries, one can find interaction solutions among different types of nonlinear waves, including solitary waves, cnoidal periodic waves, Bessel waves, Airy waves, rational waves, Painleve waves [18; 19; 20; 21; 8; 22], KdV waves and Boussinesq waves [23]. Based on the results of the local-nonlocal symmetry reduction method, one may propose some more general approaches like the consistent Riccati (or tanh function) expansion method to find more general interaction solutions [7; 8; 9].
The function \(\sigma\) is called a symmetry of the evolution equation \[u_{t}=K(u), \tag{1}\] if it always satisfies \[\frac{\mathrm{d}\sigma}{\mathrm{d}t}=K^{\prime}\sigma, \tag{2}\] where \(K^{\prime}\) is the linearized operator of \(K(u)\). The symmetry means the evolution equation is invariant under the transformation \(u\to u+\epsilon\sigma\) with infinitesimal parameter \(\epsilon\). A recursion operator (strong symmetry) \(\Theta\) is defined by the condition \[\frac{\mathrm{d}\Theta}{\mathrm{d}t}=[K^{\prime},\Theta]=K^{\prime}\Theta-\Theta K^{\prime}. \tag{3}\] In other words, if \(\sigma\) is a symmetry of an evolution equation, then \(\Theta\sigma\), with \(\Theta\) satisfying (3), is also a symmetry of the same equation. In this letter, we point out that for a nonlinear physical model, it may be enough to find a key nonlocal symmetry to guarantee its integrability. That means we can find recursion operator(s) and then infinitely many symmetries from one nonlocal symmetry. To realize this idea, we first take the well known Korteweg-de Vries (KdV) equation, \[u_{t}=6uu_{x}+u_{xxx}, \tag{4}\] as a simple example. The KdV equation has been found valid in a very large variety of physical fields such as nonlinear optics, Bose-Einstein condensates, hydrodynamics, acoustics, plasma physics, solid state physics, gravity, biology, and many other areas [24; 25; 26; 27]. A symmetry, \(\sigma\), of the KdV equation (4) is defined as a solution of the linearized equation of (4), \[\sigma_{t}=6\sigma u_{x}+6u\sigma_{x}+\sigma_{xxx},\qquad K^{\prime}=6u_{x}+6u\partial_{x}+\partial_{x}^{3}, \tag{5}\] which means the KdV equation (4) is invariant under the transformation \(u\to u+\epsilon\sigma\) with infinitesimal parameter \(\epsilon\). It is straightforward to check that \[u=-\frac{2f_{x}^{2}}{f^{2}}+2\frac{u_{1}}{f}+u_{2}, \tag{6}\] with \[u_{1}=f_{xx}, \tag{7}\] \[u_{2}=\lambda-\frac{1}{2}\frac{f_{xxx}}{f_{x}}+\frac{1}{4}\frac{f_{xx}^{2}}{f_{x}^{2}}, \tag{8}\] and \[f_{t}=6\lambda f_{x}+f_{xxx}-\frac{3}{2}\frac{f_{xx}^{2}}{f_{x}} \tag{9}\] solves the KdV equation (4). It is also interesting that \(u_{2}\) expressed by (8) with (9) also solves the KdV equation (4). That means Eq. (6) is an auto-Backlund transformation which transforms one solution \(u_{2}\) to another solution \(u\) of the same KdV equation. And Eq. (8) is a nonauto-Backlund transformation which transforms the solution \(f\) of the Schwarzian KdV equation (9) to the solution \(u\) of another equation, the KdV equation (4). Another important fact is that \(u_{1}\) defined by (7) is a nonlocal symmetry, the residual symmetry [8], of the solution \(u_{2}\) given by (8). One can directly check that \(\sigma=f_{xx}\) satisfies the symmetry equation (5) with the solution \(u\) given by (8). Now, an important question is whether we can derive other integrable properties such as the recursion operator(s), infinitely many symmetries and Lax pair(s) from the residual symmetry. To derive the recursion operator(s) of the KdV equation (4) from the single residual symmetry \[\sigma=f_{xx}, \tag{10}\] we rewrite (8) as \[2uf_{x}+f_{xxx}=2\lambda f_{x}+\frac{1}{2}\frac{f_{xx}^{2}}{f_{x}}, \tag{11}\] where \(u_{2}\) has been redenoted as \(u\) for simplicity. Differentiating Eq. (11) with respect to \(x\), we have \[2u_{x}f_{x}+2uf_{xx}+f_{xxxx}=2\lambda f_{xx}+\frac{f_{xx}f_{xxx}}{f_{x}}-\frac{1}{2}\frac{f_{xx}^{3}}{f_{x}^{2}}. \tag{12}\] By eliminating \(f_{xxx}\) on the right hand side of (12) with the help of (11), one can immediately find \[\Phi\sigma=4\lambda\sigma,\qquad\Phi\equiv\partial_{x}^{2}+4u+2u_{x}\partial_{x}^{-1}, \tag{13}\] where the function \(f\) has been eliminated in favor of the residual symmetry (10). Obviously, \(\Phi\) is just the recursion operator of the KdV equation (4) and Eq. (13) is nothing but the eigenvalue problem of the recursion operator \(\Phi\). The recursion operator \(\Phi\) given by (13) has appeared in many works in the literature. Once a recursion operator is obtained, the infinitely many symmetries follow immediately by repeatedly applying the recursion operator to any other seed symmetries such as \(u_{x}\) and \(3tu_{x}+\frac{1}{2}\), related to the space translation and the Galilean transformation, respectively.
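As a quick consistency check of the statements above, the following minimal SymPy sketch verifies two facts symbolically: that the translation seed \(\sigma=u_{x}\) satisfies the linearized KdV equation (5) once \(u_{t}\) is replaced using (4), and that applying the recursion operator \(\Phi\) of (13) to this seed reproduces the KdV flow \(6uu_{x}+u_{xxx}\). The antiderivative \(\partial_{x}^{-1}u_{x}\) is taken to be \(u\), dropping the integration constant.

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

kdv_rhs = 6*u*u.diff(x) + u.diff(x, 3)        # u_t for the KdV equation (4)
sigma = u.diff(x)                             # candidate symmetry: space translation

# Check 1: sigma_t = K' sigma, Eq. (5), after replacing u_t by the KdV flow.
sigma_t = kdv_rhs.diff(x)                     # d/dt(u_x) = d/dx(u_t)
K_prime_sigma = 6*sigma*u.diff(x) + 6*u*sigma.diff(x) + sigma.diff(x, 3)
print(sp.simplify(sigma_t - K_prime_sigma))   # 0: u_x is a symmetry of KdV

# Check 2: Phi u_x reproduces the KdV flow, with Phi = d_x^2 + 4u + 2 u_x d_x^{-1} from (13),
# where d_x^{-1} u_x is taken as u (integration constant dropped).
phi_sigma = sigma.diff(x, 2) + 4*u*sigma + 2*u.diff(x)*u
print(sp.simplify(phi_sigma - kdv_rhs))       # 0: the first member of the hierarchy (15)
```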
Using the same approach for the Schwarzian KdV equation (9) and the residual symmetry (10), we obtain another eigenvalue problem \[\Psi\sigma=-4\lambda\sigma,\qquad\Psi\equiv(\partial_{t}-2u\partial_{x}+2u_{x})\partial_{x}^{-1}. \tag{14}\] The eigenvalue problem (14) can also be obtained from the symmetry equation (5) by cancelling \(\sigma_{xx}\) via (13). It is not difficult to see that the two-dimensional operator \(\Psi\) defined in (14) is also a recursion operator of the KdV equation (4); this operator has not appeared before in the literature. The KdV hierarchy can be written as \[u_{t_{2n+1}}=\Phi^{n}u_{x},\ n=0,\ 1,\ 2,\ \ldots \tag{15}\] and/or \[u_{t_{2n+1}}=\Psi^{n}u_{x},\ n=0,\ 1,\ 2,\ \ldots. \tag{16}\] The equivalence of (15) and (16) can be proven by cancelling \(u_{t}\) in (16) via the KdV equation (4). It is straightforward to verify that the compatibility condition of the eigenvalue problem (13) (or (14)) and the symmetry equation (5) is just the KdV equation (4). In other words, the linear equation system (13) and (5) is a Lax pair of the KdV equation (4). It is known that the square eigenfunction symmetry \[\sigma=(\psi^{2})_{x} \tag{17}\] transforms the symmetry equations, i.e., the Lax pair (13) and (5), to the traditionally known Lax pair \[\begin{array}{l}\psi_{xx}+u\psi=\lambda\psi,\\ \\ \psi_{t}=4\psi_{xxx}+6u\psi_{x}+3u_{x}\psi.\end{array} \tag{18}\] Thus, we see that the existence of the nonlocal residual symmetry for the KdV equation is equivalent to the integrability of the KdV equation, because the recursion operator(s) and then the infinitely many symmetries, conservation laws and Lax pairs can be derived from the single residual symmetry. In fact, the N-fold Darboux transformations of the KdV equation can also be derived from the residual symmetry [28] and/or the square eigenfunction symmetry [29]. The recursion operator of the KdV equation can also be derived from the square eigenfunction symmetry (17) and from the infinitesimal nonlocal symmetry coming from the Backlund transformation [18]. To provide further support for our idea, we consider another well known integrable system, the Boussinesq equation [30; 31; 32; 33] \[u_{tt}=\frac{1}{3}(u_{xx}+4u^{2})_{xx}, \tag{19}\] which can be equivalently rewritten as \[\begin{array}{l}u_{t}=v_{x},\\ \\ v_{t}=\frac{1}{3}(u_{xx}+4u^{2})_{x}.\end{array} \tag{20}\] The Boussinesq equation (19) was introduced in 1871 for the propagation of long surface waves on water of constant depth [30; 31].
Similar to the KdV case, we have an auto-Backlund transformation (both \(\{u^{\prime},\ v^{\prime}\}\) and \(\{u,\ v\}\) are solutions of (20)) \[\begin{array}{l} u^{\prime}=-\frac{3}{2}\frac{f_{x}^{2}}{f^{2}}+\frac{3}{2}\frac{f_{xx}}{f}+u,\\ \\ v^{\prime}=-\frac{3}{2}\frac{f_{x}f_{t}}{f^{2}}+\frac{3}{2}\frac{f_{xt}}{f}+v\end{array} \tag{21}\] with a nonauto-Backlund transformation (\(\{u,\ v\}\) is a solution of (20) while \(f\) is a solution of the Schwarzian Boussinesq equation) \[\begin{array}{l} u=\frac{3}{8}\frac{f_{t}^{2}+f_{xx}^{2}}{f_{x}^{2}}-\frac{1}{2}\frac{f_{xxx}}{f_{x}},\\ \\ v=\lambda-\frac{1}{2}\frac{f_{xt}}{f_{x}}+\frac{1}{4}\frac{(f_{t}f_{xx})_{x}}{f_{x}^{2}}-\frac{1}{4}\frac{(f_{t}^{2}+f_{xx}^{2})f_{t}}{f_{x}^{3}}, \end{array} \tag{22}\] where the function \(f\) is a solution of the Schwarzian Boussinesq equation \[f_{tt}+f_{xxxx}=\frac{f_{xx}}{f_{x}^{2}}(f_{t}^{2}+4f_{x}f_{xxx}-3f_{xx}^{2}). \tag{23}\] Naturally, the coefficients of \(f^{-1}\) in (21), \[\sigma=\left(\begin{array}{c}\sigma^{u}\\ \sigma^{v}\end{array}\right)=\left(\begin{array}{c}f_{xx}\\ f_{xt}\end{array}\right), \tag{24}\] where a trivial constant factor \(3/2\) has been dropped, constitute a nonlocal symmetry, the residual symmetry, of the Boussinesq equation (20). In other words, (24) solves \[\left(\begin{array}{c}\sigma^{u}\\ \sigma^{v}\end{array}\right)_{t}=\left(\begin{array}{cc}0&\partial_{x}\\ \frac{1}{3}(\partial_{x}^{3}+8\partial_{x}u)&0\end{array}\right)\left(\begin{array}{c}\sigma^{u}\\ \sigma^{v}\end{array}\right). \tag{25}\] After some simple calculations starting from the nonauto-Backlund transformation (22), one can find that the following two relations \[\begin{array}{l}f_{xxxx}+3vf_{xx}+2v_{x}f_{x}+2uf_{xt}+u_{x}f_{t}-3f_{xx}\lambda=0,\\ \\ \frac{1}{3}f_{xxxxxx}+\frac{10}{3}uf_{xxxx}+5u_{x}f_{xxx}+3u_{xx}f_{xx}+\frac{16}{3}u^{2}f_{xx}+\frac{2}{3}f_{x}u_{xxx}+\frac{16}{3}f_{x}uu_{x}+3vf_{xt}+v_{x}f_{t}-3\lambda f_{xt}=0,\end{array} \tag{26}\] are identically satisfied by virtue of the Schwarzian Boussinesq equation (23). Using the residual symmetry condition (24), the equation system (26) is just the eigenvalue problem \[\Phi\left(\begin{array}{c}\sigma^{u}\\ \sigma^{v}\end{array}\right)=3\lambda\left(\begin{array}{c}\sigma^{u}\\ \sigma^{v}\end{array}\right) \tag{27}\] of the recursion operator \[\Phi=\left(\begin{array}{cc}3v+2v_{x}\partial_{x}^{-1}&\partial_{x}^{2}+2u+u_{x}\partial_{x}^{-1}\\ \Phi_{21}&3v+v_{x}\partial_{x}^{-1}\end{array}\right) \tag{28}\] with \(\Phi_{21}\equiv\frac{1}{3}\partial_{x}^{4}+\frac{10}{3}u\partial_{x}^{2}+5u_{x}\partial_{x}+3u_{xx}+\frac{16}{3}u^{2}+\frac{2}{3}(u_{xx}+4u^{2})_{x}\partial_{x}^{-1}\). Obviously, the eigenvalue problem (27) and the symmetry equation (25) constitute a special Lax pair of the Boussinesq equation (20). Thus, we have derived the recursion operator and then the infinitely many symmetries and Lax pair of the Boussinesq equation (20) from the single nonlocal residual symmetry (24).
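As a small illustrative check, the following SymPy sketch verifies that the local translation seed \((\sigma^{u},\sigma^{v})=(u_{x},v_{x})\) satisfies the linearized Boussinesq system (25) once \(u_{t}\) and \(v_{t}\) are replaced using (20); this is the same linearized system solved by the nonlocal residual symmetry (24), and the script is merely a sanity check of the operator in (25).

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)

# Evolution equations (20): u_t = v_x,  v_t = (u_xx + 4u^2)_x / 3
u_t = v.diff(x)
v_t = (u.diff(x, 2) + 4*u**2).diff(x) / 3

# Seed symmetry (space translation): sigma_u = u_x, sigma_v = v_x
sigma_u, sigma_v = u.diff(x), v.diff(x)

# Linearized system (25): (sigma_u)_t = (sigma_v)_x,
#                         (sigma_v)_t = ( (sigma_u)_xxx + 8 (u sigma_u)_x ) / 3
lhs1 = u_t.diff(x)                    # d/dt(u_x) = d/dx(u_t)
rhs1 = sigma_v.diff(x)
lhs2 = v_t.diff(x)                    # d/dt(v_x) = d/dx(v_t)
rhs2 = (sigma_u.diff(x, 3) + 8*(u*sigma_u).diff(x)) / 3

print(sp.simplify(lhs1 - rhs1), sp.simplify(lhs2 - rhs2))   # 0 0
```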
If one directly studies the residual symmetry of the Boussinesq equation (19) instead of (20), then the nonauto-Backlund transformation is described by the first equation of (22) with (23). Starting from the first equation of (22), one can find that the following related linear system with respect to \(f\), \[f_{xxxt}+3f_{xx}(\partial_{x}^{-1}u_{t}-\lambda)+2u_{t}f_{xt}+u_{x}f_{t}=0, \tag{29}\] is satisfied. Because \(f_{xx}\) is a nonlocal residual symmetry of the Boussinesq equation (19), Eq. (29) becomes an eigenvalue problem of the recursion operator \(\Psi\), \[\Psi\sigma=3\lambda\sigma, \tag{30}\] \[\Psi\equiv\partial_{xt}+3\left(\partial_{x}^{-1}u_{t}\right)+2u_{t}\partial_{x}^{-1}+2u\partial_{x}^{-1}\partial_{t}+u_{x}\partial_{x}^{-2}\partial_{t}. \tag{31}\] The eigenvalue problem (30) and the linearized equation of the Boussinesq equation (19), \[\sigma_{tt}=\frac{1}{3}(\sigma_{xx}+8u\sigma)_{xx}, \tag{32}\] constitute a special Lax pair of (19). The recursion operator \(\Psi\) has not been reported before. Though the recursion operators \(\Phi\) and \(\Psi\) defined in (28) and (31) look different at first glance, the Boussinesq hierarchies \[\left(\begin{array}{c}u\\ v\end{array}\right)_{t_{2n+1}}=\Phi^{n}\left(\begin{array}{c}u\\ v\end{array}\right)_{x},\ \left(\begin{array}{c}u\\ v\end{array}\right)_{t_{2n+2}}=\Phi^{n}\left(\begin{array}{c}v\\ \frac{u_{xx}}{3}+\frac{4}{3}u^{2}\end{array}\right)_{x}, \tag{33}\] and \[u_{t_{2n+1}}=\Psi^{n}u_{x},\ u_{t_{2n+2}}=\Psi^{n}u_{t},\ n=0,\ 1,\ 2,\ \ldots, \tag{34}\] are equivalent. The equivalence can be directly proved by using the relations (20). In summary, for a (1+1)-dimensional nonlinear system, if one can find one key symmetry from which the recursion operator(s), infinitely many symmetries and Lax pair(s) can be successfully obtained, then we say that the model is integrable in the sense that it possesses a key-symmetry. For simplicity, we say the system is key-symmetry-integrable (KSI). In this letter, we have proven that both the KdV equation and the Boussinesq system are KSI models. The key-symmetries used here are the nonlocal residual symmetries, which can be obtained simply by using the truncated Painleve analysis [3; 4]. In fact, one can check that some other types of nonlocal symmetries can be used as key-symmetries, such as the square eigenfunction symmetries [34; 35], infinitesimal Backlund transformations [18], and nonlocal symmetries related to the CRE/CTE approach [7]. It is also interesting that the nonlocal key-symmetries possess various other elegant properties in addition to yielding recursion operator(s), infinitely many symmetries and Lax pairs. These types of nonlocal key-symmetries can be localized to find many types of interaction solutions among different types of nonlinear excitations [8; 18; 19; 20; 23]. Algebro-geometric solutions can be obtained from the nonlinearization approach [36] via nonlocal key-symmetry constraints [37; 38]. Applying nonlocal key-symmetry constraints on the Lax pairs, various kinds of new integrable systems can be obtained [39]. The nonlocal key-symmetries can be used as sources to describe the interactions between long waves and short waves [40; 41; 42]. When localizing the nonlocal key-symmetries of the Camassa-Holm type systems [43], the reciprocal transformations are naturally included [21]. In high dimensions, for some types of dispersionless integrable systems like the Hirota equations and heavenly equations [44; 45; 46; 47], the same idea can be used to find the recursion operators, infinitely many symmetries and Lax pairs starting from a suitable nonlocal key-symmetry. For (2+1)-dimensional dispersive nonlinear systems like the Kadomtsev-Petviashvili (KP) equation, B-type KP equation, Davey-Stewartson equation and Nizhnik-Novikov-Veselov equation, nonlocal key-symmetries (such as the square eigenfunction symmetries and residual symmetries) do exist. However, how to use these nonlocal key-symmetries to find infinitely many symmetries is still open.
## Acknowledgement The authors are indebted to Professors X. B. Hu and Q. P. Liu for helpful discussions. The authors acknowledge the support of the National Natural Science Foundation of China (Nos. 12275144, 12235007 and 11975131) and the K. C. Wong Magna Fund in Ningbo University.
2309.05220
Revisiting 3D Flat Holography: Causality Structure and Modular flow
Flat space holography is an open and hard problem; several different approaches to it exist in the literature, which may finally turn out to be consistent with each other. Focusing on how bulk emergent spacetime is encoded in the quantum information of null boundaries, we choose to explore a specific toy model called the flat$_3$/BMSFT model, which conjectures a duality between boundary BMS$_3$ invariant field theory and bulk quantum gravity in 3D asymptotically flat spacetimes (AFS). Aiming to find an entanglement-wedge-like quantity for a single interval and a connected entanglement wedge for multiple intervals in the flat$_3$/BMSFT model, we explore the bulk causality structures related to the holographic swing surface proposal through both boundary and bulk local modular flow, make a corresponding decomposition of the global Minkowski spacetime, and look at the entanglement phase transition. As a byproduct, we settle the question of the existence of the partial entanglement entropy (PEE) correspondence in this model, which is somewhat nontrivial due to the unusual behavior of the boundary modular flow in BMS$_3$ field theory. In the literature considering quantum information aspects of the flat$_3$/BMSFT model, there are several substantial, unusual but overlooked phenomena which need to be emphasized and revisited to gain the attention they deserve. Thus another motivation of this paper is to find where these unusual phenomena come from, and to show physically, in a manifest way, what they may imply. After reading, we hope readers will sincerely feel that what we present regarding the above-mentioned second aim is more valuable than the mathematical results in the present paper.
Yuefeng Liu
2023-09-11T03:49:27Z
http://arxiv.org/abs/2309.05220v2
# Revisiting 3D Flat Holography: Causality Structure and Modular Flow ###### Abstract Flat space holography is an open and hard problem existing several different approaches, which may finally turn out to be consistent with each other, in the literature to tackle it. Focusing on how bulk emergent spacetime is encoded in quantum information of null boundaries, we choose a specific toy model called the flat\({}_{3}\)/BMSFT model, which conjectures the duality between boundary BMS\({}_{3}\) invariant field theory and bulk quantum gravity in 3D asymptotic flat spacetimes (AFS), to explore. Aiming to find an entanglement wedge like quantity for single interval and a connected entanglement wedge for multi-intervals in flat\({}_{3}\)/BMSFT model, we explore the bulk causality structures related to the holographic swing surface proposal through both boundary and bulk local modular flow, make a corresponding decomposition of the global Minkowski spacetime and look at the entanglement phase transition. As a byproduct, we solve the problem about the existence of partial entanglement entropy (PEE) correspondence in this model which is a bit nontrivial due to the unusual behavior of boundary modular flow in BMS\({}_{3}\) field theory. Among the literature considering quantum information aspects of flat\({}_{3}\)/BMSFT model, there are several substantial, unusual but overlooked phenomena which need to be emphasized and revisited to gain more deserved attention. Thus another motivation of this paper is to find where these unusual phenomena come from, and physically show in a manifest way what they may imply. After reading we hope readers can feel sincerely what we present about the above mentioned second aim is more valuable than the mathematical results in the present paper. ## 1 Introduction * 2 flat\({}_{3}\)/BMSFT model and PEE correspondence * 2.1 BMS\({}_{3}\) invariant field theory * 2.2 Swing Surface Proposal * 2.3 PEE correspondence * 3 Quotient manifolds and observations * 3.1 Boundaries and Horizons * 3.2 Order of taking the Infinity Limit * 3.3 Negative pure and mixed state entanglement measures * 3.4 Finite bench or Infinity bifurcating surface? * 4 Bulk Causality related to single interval * 4.1 Bifurcating horizons * 4.2 Decomposition of bulk spacetime * 4.3 PEE: intersection of swing surface * 4.4 PEE: boundary and bulk modular flow * 4.5 Entanglement wedge \(\mathcal{W}^{f}_{\mathcal{E}}[\mathcal{A}]\)? * 5 Two interval entanglement phase transition and EWN * 5.1 Entanglement phase transition * 5.2 Entanglement wedge nesting * 6 Conclusions and Open Questions * A Reflected Entropy * A.1 The BMS Semi-classical Block * A.2 OPE coefficient and Twist operator dimension * A.3 Reflected entropy of vacuum and thermal state on the plane * B \(M>0\) Zero Mode Background * B.1 Bifurcating horizon * B.2 Entanglement phase transition Introduction The principle of holography [1; 2; 3] has been successfully used to understand theories of quantum gravity in AdS spacetime and strongly coupled field theory for more than twenty years. Starting from the proposal of Ryu-Takayanagi formula [4; 5; 6; 7], there has been profound progress in exploring how spacetime and gravitational dynamics can emerge from boundary quantum information theory [8; 9; 10]. 
There are also mixed state entanglement measures generalization of the holographic entanglement entropy, for example the correspondence of entanglement wedge cross section (EWCS) with the reflected entropy [11], entanglement negativity [12] and balanced partial entanglement entropy (BPE) [13]. At the heart of these developments is the entanglement wedge reconstruction, i.e., subregion-subregion duality [14; 15; 16], which states that bulk operators in the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) can be decoded from the operator algebra in the causal domain \(D[\mathcal{A}]\) of the dual CFT. Another important approach highlighting the emergence of bulk locality and causality is the concept of modular flow [17; 18; 19]. Operator reconstruction with modular flow allow one to reach everywhere inside of entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\), which can reach far beyond the causal horizons. For the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\), there are two equivalent definitions in AdS/CFT holography [20]. One is the bulk domain of dependence of homology surface \(\mathcal{R}_{\mathcal{A}}\) (defined in (4.20)) which interpolates between boundary interval \(\mathcal{A}\) and corresponding HRT surface. Another definition is the bulk region bounded by bifurcating horizons of HRT surface on one side and boundary causal domain \(D[\mathcal{A}]\) on the other side. Whether these properties are universal to general holographic theories beyond AdS/CFT? One motivation of the paper is exploring the story of entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) like quantity in a toy model of 3D flat holography, i.e., the flat\({}_{3}\)/BMSFT model, using mainly modular flow and other refined tools due to the complications and subtleties here. Although people face both practical and philosophical difficulties in formulating flat version of AdS holography, there has been interesting work on understanding holography in asymptotically flat spacetimes at the early days of AdS/CFT [21; 22; 23; 24; 25]. In recent years, there is a delightful re-booming about this problem. One key role in this re-booming is the bottom-up approach called celestial holography [26; 27], which proposes a correspondence between 4D gravity theories in asymptotically flat spacetimes (AFS) and 2D celestial conformal field theories (CCFT) living on the celestial sphere at null infinity due to advances in understanding the soft theorems and asymptotic structure of AFS[28; 29; 30; 31; 32]. Bulk S-matrix elements, when written in boost eigenstate basis, can be reinterpreted as correlation functions in 2D conformal field theory. Thus very powerful CFT techniques, such as operator product expansion [33; 34; 35; 36; 37], conformal block decomposition [38; 39], crossing symmetry [40], can all be used to explore properties of celestial CFT. Also using this kind of language lead people find new \(w_{1+\infty}\) symmetries [41]. However it is rather vague at this stage that how much and in which aspects the 2D CCFT would differ from the usual 2D Virasoro CFT. Moreover, the emergence of bulk flat spacetime from boundary degree of freedom of celestial CFT seems to be at least complicated. However viewing recent fascinating developments of understanding how bulk spacetime emerge from boundary and the nature of holographic map as a quantum error correction code [42], it is very attractive to see whether similar nature of holographic duality in AdS hold true in flat case. 
The most important object underlying this kind of story in AdS/CFT is the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) dual to a specific boundary subregion, and we would like to find similar objects in flat holography. With this curiosity, we turn to another bottom-up approach called Carrollian holography, which is more similar to the usual AdS/CFT setup. Due to matter or gravitational radiation, the gravitational charge defined at null infinity would be non-conserved [43; 44; 45]. Thus we focus on a 3D flat bulk with pure Einstein action, more specifically, the flat\({}_{3}\)/BMSFT model. The analysis in this paper is special to 3D AFS, and we make a first but essential step in this direction. Our explorations are complementary to the main trend in the literature on flat holography focusing on S-matrix elements, Ward identities as well as asymptotic symmetry, and focus more on the dynamical gravity aspects of holographic duality. Note that another, more information theoretic approach to flat holography [46] explores how quantum information is stored at null infinity by using the boundary operator algebra. Also, there are interesting and illuminating works trying to link the Carrollian holography approach with the celestial holography approach in 4D flat spacetime [47; 48]. Carrollian holography has been proved to be successful in the 3D case, and we would like to link its story to the limiting problem of AdS/CFT. Whether flat holography can be understood as a limiting case of AdS/CFT is still an open problem. Although there is plenty of research working on extracting perturbative S-matrix elements from AdS correlators [49; 50; 51], it is limited to very special states in the Hilbert space of quantum gravity in AFS (if it exists!). At the level of the asymptotic symmetry algebra (ASA) in the machinery of holography, [52; 53] made an interesting observation that the ASA of 3D AFS, i.e., the BMS\({}_{3}\) algebra [54; 55], can be obtained as an ultra-relativistic limit of the 2D conformal algebra.

Figure 1: The two figures show the modular flow of 2D CFT and BMS\({}_{3}\) field theory separately. Blue lines are boundary intervals \(\mathcal{A}\) and brown lines denote the boundary \(\partial D[\mathcal{A}]\) of the causal domain \(D[\mathcal{A}]\). We can see that the direction of modular time of BMS\({}_{3}\) field theory is rather different from the one in the CFT case.

Starting from these works, the flat\({}_{3}\)/BMSFT model has gone through several non-trivial checks, such as reproducing the thermal entropy in the bulk from a Cardy-like formula at the boundary [56; 57], reproducing characters of the BMS\({}_{3}\) group from the one-loop partition function of 3D flat gravity [58], reproducing BMS\({}_{3}\) blocks from bulk geodesic Feynman diagrams [59], and reproducing boundary entanglement entropy from bulk swing surfaces [60]. Considering the holographic entanglement entropy, [61; 62] updated the generalized Rindler method used in [60] for limited regions and states to more general cases using an approximate modular flow method and a general swing surface proposal. Note that in the flat\({}_{3}\)/BMSFT model, the vacuum state in the Hilbert space of the quantum gravitational theory of AFS is assumed to be unique. This may contradict the lessons learned from celestial holography and soft theorems that the vacuum state of 4D AFS is infinitely degenerate due to supertranslations and soft gravitons [63].
In purpose of exploring how boundary information are related to bulk subregion, similar to the aim of [46] but working in a more concrete model, we use various tools developed in AdS/CFT to try to find the analogue of the entanglement wedge \(\mathcal{W_{E}}[\mathcal{A}]\) in flat holography. In the literature there are some checks about matching of holographic reflected entropy and balanced partial entanglement entropy (BPE) [64; 65; 66] in flat\({}_{3}\)/BMSFT model, however the calculations and physical conclusions need to be reconsidered. The necessity of this revisiting originates from the following facts: 1). The flat\({}_{3}\)/BMSFT model has a Lorentzian bulk spacetime and no Euclidean path integral apply here. So we should consult to a coordinate invariant codimension zero bulk region to define the entanglement wedge not a coordinate non-invariant codimension one bulk surface, which should be different from the usual AdS/CFT case; 2). Also in the literature, they only studied very limited symmetric boundary two intervals, which gave them an unrealistic illusion about their results. Although these works are interesting, actually no well defined connected entanglement wedge has ever been established in works [64; 65; 66] and the related ones. It turns out that for generic boundary non-symmetric two intervals the entanglement wedge cross section (EWCS) can totally locate outside the naive expected connected entanglement wedge, see Figure 5, although the numerical values can mysterious match with each other. Then the natural question is what's the entanglement wedge related to single boundary interval? What's the connected entanglement wedge related to multi-boundary intervals? If we can not specify the parameter range of intervals related to the connected entanglement wedge, what is the meaning of bulk EWCS we compute? Actually according to the results obtained in this paper, finding accurate answers to the above questions are a rather non-trivial task. Even the existence of normal entanglement wedge should be reasonably questioned because of the existence of negative holographic entanglement entropy noticed already in [61]. Actually not only can the holographic entanglement entropy be negative, all entanglement measures calculated in flat\({}_{3}\)/BMSFT model including the holographic reflected entropy, holographic entanglement negativity and balanced partial entanglement entropy (BPE) can have negative values. This is in fact a general and unique property of flat\({}_{3}\)/BMSFT model, which has not been given sufficient attention in the literature. Viewing from field theory, the negative value may come from non-unitary property. From bulk side, we make the key observation that the negativeness is just a reflection about the unique structure of local modular flow of BMS\({}_{3}\) field theory. More intuitively we can see from Figure 1, the modular evolution along local modular flow of BMS field theory are quite different from the modular evolution of CFT which is consistent with the global time defined on the whole 2D plane. Part of the results in this paper can be viewed as the bulk manifestation of this unusual boundary modular flow behavior. Another tool we use to find the analogue of entanglement wedge in flat holography is the PEE (partial entanglement entropy) correspondence [67], which proposes to give a fine version of RT formula. 
We don't comment on the physical foundation of this proposal, but rather view it as a useful tool to manifest various aspects of holographic duality when we have a local modular flow. From PEE correspondence, people can derive the balanced partial entanglement entropy (BPE) and EWCS correspondence [13]. However in literature [66; 68] people just observed the match of BPE and EWCS without giving a more basic proof about PEE correspondence in flat\({}_{3}\)/BMSFT model. The reason can again be traced back to the curious behavior of modular flow of BMS\({}_{3}\) field theory, which makes the finding of corresponding bulk point from modular flow method rather unclear with less physical intuition. As a byproduct, we solve the existence of PEE correspondence in flat\({}_{3}\)/BMSFT model by using the intersection of swing surfaces (first method) and rewriting the original modular flow correspondence (second method). We find exact match between these two methods and these are solid mathematical results in this paper. The way we solve the above existing problem on modular flow method in flat\({}_{3}\)/BMSFT model is to explicitly manifest the degree of freedom using our rewriting, and this is also a good place to see the subtlety in flat\({}_{3}\)/BMSFT model. Although we find more structures of the correspondence between boundary and bulk modular flow, as well as make a bulk decomposition of global flat\({}_{3}\) related to single boundary interval \(\mathcal{A}\), we fail to specify which bulk subregion is the most natural entanglement wedge in this model. Especially in two intervals case considering the connected entanglement wedge, the confirmed results are only made on the field side. Also we can't integrate the implications of general lesson learned from negative entanglement entropy and swing surface penetrating phenomena into the exploration of entanglement wedge. We hope that bringing these fundamental issues to researchers in a clearer and more obvious way is more valuable than the mathematical results presented in this paper. This is our second motivation for writing this paper. The structure of the paper is organized as follows. In section 2, we review the flat\({}_{3}\)/BMSFT model, the general prescription of swing surface proposal for single interval holographic entanglement entropy and the PEE correspondence in AdS/CFT with useful comments at the end of each subsection. In section 3, we explicitly draw the Penrose diagram of the quotient manifolds, i.e., the zero mode solutions, in AFS and show a subtle issue of the order of taking infinite limit. Then we show where the bulk negative sign of holographic entanglement entropy come from using Noether charge formalism. Finally we present observations about EWCS for general boundary intervals which manifest the loopholes of arguments in the literature. In section 4, we mathematically and pictorially analyze the behavior of bifurcating horizons related to both finite bench \(\gamma\) and the infinite bifurcating surface \(\gamma_{\xi}\). Then we decompose the global flat\({}_{3}\) spacetime into four disconnected parts using both the past and future bifurcating horizons. Intersection of swing surface method and bulk boundary modular flow correspondence method for deriving the PEE correspondence in flat\({}_{3}\)/BMSFT model are presented. In the last by comparing with the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) in AdS/CFT case, we show the subtleties of the flat\({}_{3}\)/BMSFT model. 
In section 5, we analyze the entanglement phase transition of two intervals on the boundary side and entanglement wedge nesting (EWN) property in the bulk side. In section 6, we discuss two important open questions unique to flat\({}_{3}\)/BMSFT model which are observed in section 2. We collect several additional results in Appendices. In Appendix A, we give a complete derivation of two disjoint interval reflected entropy in BMS\({}_{3}\) field theory with explicit calculations about the three point coefficient. This is a necessary but missing part of the calculations about reflected entropy in [65], which sincerely pointed out by [68]. Appendix B repeat the analytic analysis of the Poincare vacuum for the \(M>0\) zero mode backgrounds including the bifurcating horizon in Penrose diagram and the entanglement phase transition. ## 2 flat\({}_{3}\)/BMSFT model and PEE correspondence In 3D asymptotically flat spacetimes (AFS) Einstein gravity admits consistent boundary conditions at future null infinity \(\mathscr{I}^{+}\), where the finite dimensional Poincare isometry group is enhanced to infinite dimensional asymptotic symmetry group, i.e., the BMS\({}_{3}\) group [54; 55]. These facts lead people to conjecture that there is a toy model of flat holography, dubbed flat\({}_{3}\)/BMSFT model, which maps between Einstein gravity in 3D AFS and BMS invariant field theories at 2D conformal boundary. Intuitively the topology of the null boundary of 3D AFS is \(S^{1}\times\mathbb{R}\) with \(\mathbb{R}\) the null direction. And BMS\({}_{3}\) group include super-translation which is coordinate dependent translation along the null direction and super-rotation which is the diffeomorphism of \(S^{1}\). This section includes a self-contained review of 2D BMS\({}_{3}\) invariant field theory with more emphasize on entanglement entropy, the development of the general swing surface proposal and the PEE correspondence in AdS/CFT holography. At the end of each subsection, useful comments on the subtleties are presented. ### BMS\({}_{3}\) invariant field theory BMS invariant field theory is a class of 2D ultra relativistic quantum field theories invariant under following spacetime reparametrizations [52; 56], \[\tilde{x}=f(x),\qquad\tilde{y}=yf^{\prime}(x)+g(x) \tag{1}\] where \(f(x)\) and \(g(x)\) are arbitrary functions, and \((x,y)\) are coordinates of the plane the field theory lives. The infinitesimal BMS transformations are generated by following Fourier modes, \[l_{n}=-x^{n+1}\partial_{x}-(n+1)yx^{n}\partial_{y}\quad m_{n}=-x^{n+1}\partial _{y} \tag{2}\] Under Lie bracket they form the BMS\({}_{3}\) algebra without the centrally extension term. While the generators \(L_{m}\) and \(M_{n}\) implementing local coordinate transformations (2) on quantum fields form the centrally extended BMS\({}_{3}\) algebra, \[[L_{n},L_{m}]=(n-m)L_{m+n}+\frac{c_{L}}{12}n(n^{2}-1)\delta_{m+n,0}\] \[[L_{n},M_{m}]=(n-m)M_{m+n}+\frac{c_{M}}{12}n(n^{2}-1)\delta_{m+n,0}\] \[[M_{n},M_{m}]=0 \tag{3}\] where \(c_{L}\) and \(c_{M}\) are the central charges. The Einstein-Hilbert gravity in flat holography are expected to be dual to a BMS field theory with central charges \(c_{L}=0,c_{M}=\frac{3}{G}\), while the field theory with more general value of central charges could be constructed by adding a Chern-Simons term to the Einstein-Hilbert action. 
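As a small illustrative check of the algebraic statements above, the following SymPy sketch realizes the vector fields (2) as differential operators acting on a generic test function and verifies, for sample mode numbers, the non-centrally-extended part of the commutation relations (3); the mode numbers and the test function are arbitrary choices made only for the check.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Function('F')(x, y)

# Vector fields of Eq. (2) acting on a test function
def l(n, G):
    return -x**(n + 1)*sp.diff(G, x) - (n + 1)*y*x**n*sp.diff(G, y)

def m(n, G):
    return -x**(n + 1)*sp.diff(G, y)

def comm(A, B, G):
    return sp.expand(A(B(G)) - B(A(G)))

n1, n2 = 1, 2   # sample mode numbers
checks = [
    comm(lambda G: l(n1, G), lambda G: l(n2, G), F) - (n1 - n2)*l(n1 + n2, F),
    comm(lambda G: l(n1, G), lambda G: m(n2, G), F) - (n1 - n2)*m(n1 + n2, F),
    comm(lambda G: m(n1, G), lambda G: m(n2, G), F),
]
print([sp.simplify(c) for c in checks])   # [0, 0, 0]
```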
The generators \(L_{m}\) and \(M_{n}\) are also called BMS charges on the plane, which are the Fourier modes of the conserved currents \(T(x)\) and \(M(x)\), \[L_{n} =\frac{1}{2\pi i}\oint\left(x^{n+1}T(x)+(n+1)x^{n}yM(x)\right) \tag{4}\] \[M_{n} =\frac{1}{2\pi i}\oint x^{n+1}M(x) \tag{5}\] where \(\oint\) can be seen as the contour integral of the complexified \(x\) coordinates. The conserved currents \(T(x)\) and \(M(x)\) generating the coordinate transformations (1) transform under the transformations as \[\tilde{M}(x) =f^{\prime 2}M(\tilde{x})+\frac{c_{M}}{12}\{f,x\} \tag{6}\] \[\tilde{T}(x,y) =f^{\prime 2}T(\tilde{x},\tilde{y})+2f^{\prime}(g^{\prime}+yf^{ \prime\prime})M(\tilde{x})+\frac{c_{L}}{12}\{f,x\}+\frac{c_{M}}{12}\left(y \frac{d}{dx}\{f,x\}+f^{\prime 2}\frac{\partial^{3}g}{\partial f^{3}}\right)\] where \(\{,\}\) denotes the ordinary Schwarzian derivative and the last term denotes the BMS Schwarzian derivative \[\{f,x\}=\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3}{2} \left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2} \tag{7}\] \[f^{\prime 2}\frac{\partial^{3}g}{\partial f^{3}}=f^{\prime-1} \left(g^{\prime\prime\prime}-g^{\prime}\frac{f^{\prime\prime\prime}}{f^{\prime }}-3f^{\prime\prime}\left(\frac{g^{\prime}}{f^{\prime}}\right)^{\prime}\right) \tag{8}\] The infinite dimensional BMS\({}_{3}\) algebra not only have the singlet version of the highest weight representation (HWR), but also the multiplet version of the HWR [69]. In the singlet version of HWR, a local primary operator \(\mathcal{O}(0,0)\) at the origin is labelled by the eigenvalues of generators \(L_{0}\) and \(M_{0}\) which are the center of the BMS\({}_{3}\) symmetry algebra (3), \[[L_{0},\mathcal{O}]=\Delta\mathcal{O},\quad[M_{0},\mathcal{O}]=\xi\mathcal{O} \tag{9}\] where \(\Delta\) denotes the conformal weight and \(\xi\) denotes the boost charge. 
The HWR respect the following conditions, \[[L_{n},\mathcal{O}]=0,\quad[M_{n},\mathcal{O}]=0,\quad n>0 \tag{10}\] The singlet primary operators transform under finite transformation (1) as follows, \[\tilde{O}(\tilde{x},\tilde{y})=|f^{\prime}|^{-\Delta}e^{-\xi}\frac{g^{\prime}+ yf^{\prime\prime}}{f^{\prime}}O(x,y) \tag{11}\] By requiring the vacuum to be invariant under the global symmetry of BMS\({}_{3}\) field theory, the correlation functions on the plane have the following form, \[\langle\phi(x_{1},y_{1})\phi(x_{2},y_{2})\rangle=\delta_{\Delta_{1 },\Delta_{2}}\delta_{\xi_{1},\xi_{2}}|x_{21}|^{-2\Delta_{1}}e^{-2\xi_{1}\frac{ y_{21}}{x_{21}}} \tag{12}\] \[\langle\phi_{1}\phi_{2}\phi_{3}\rangle=\frac{c_{123}}{|x_{12}|^{ \Delta_{123}}|x_{23}|^{\Delta_{231}}|x_{31}|^{\Delta_{312}}}e^{-\xi_{123}\frac {y_{12}}{x_{12}}-\xi_{312}\frac{y_{13}}{x_{13}}-\xi_{231}\frac{y_{23}}{x_{23}}}\] (13) \[\langle\phi_{1}\phi_{2}\phi_{3}\phi_{4}\rangle=e^{\xi_{12}\left( \frac{t_{24}}{x_{24}}-\frac{t_{14}}{x_{14}}\right)+\xi_{34}\left(\frac{t_{14} }{x_{14}}-\frac{t_{13}}{x_{13}}\right)-\left(\xi_{1}+\xi_{2}\right)\frac{t_{12 }}{x_{12}}-\left(\xi_{3}+\xi_{4}\right)\frac{t_{34}}{x_{34}}}\] \[\times\left|\frac{x_{24}}{x_{14}}\right|^{\Delta_{12}}\left| \frac{x_{14}}{x_{13}}\right|^{\Delta_{34}}\frac{\mathcal{F}(x,t)}{|x_{12}|^{ \Delta_{1}+\Delta_{2}}\left|x_{34}\right|^{\Delta_{3}+\Delta_{4}}} \tag{14}\] where two point function of primary operators are properly normalized, \(c_{123}\) is the coefficient of three-point function encoding dynamical information of the BMS\({}_{3}\) field theory and \[x_{ij}=x_{i}-x_{j},\quad y_{ij}=y_{i}-y_{j},\quad\Delta_{ijk}= \Delta_{i}+\Delta_{j}-\Delta_{k},\quad\xi_{ijk}=\xi_{i}+\xi_{j}-\xi_{k}. \tag{15}\] The \(x\) and \(t\) appearing in function \(\mathcal{F}(x,t)\) are BMS invariant cross ratios \[x=\frac{x_{12}x_{34}}{x_{13}x_{24}},\quad\frac{t}{x}=\frac{t_{1 2}}{x_{12}}+\frac{t_{34}}{x_{34}}-\frac{t_{13}}{x_{13}}-\frac{t_{24}}{x_{24}} \tag{16}\] Entanglement entropy of BMS\({}_{3}\) field theory was first considered in [70] using algebraic twist operator method [71]. By generalizing the Rindler method to BMS\({}_{3}\) invariant field theory, [60] not only gets the consistent entanglement entropy through an explicitly local modular flow expression, but also extends the calculation into the bulk getting the swing surface picture. We list some results related to entanglement entropy here for later convenience. BMS\({}_{3}\) field theory is not Lorentz invariant, thus a general spatial interval \(\{(x_{1},y_{1})\)\((x_{2},y_{2})\}\) instead of an equal time interval is need to show the dependence of entanglement entropy on the choice of frame. In the plane vacuum state, the conformal weight \(\Delta\) and boost charge \(\xi\) in cyclic orbifold \(\mathbf{Z}_{n}\) are, \[\Delta_{n}=\frac{c_{L}}{24}(n-\frac{1}{n}),\quad\xi_{n}=\frac{c_{M}}{24}(n- \frac{1}{n}). 
\tag{17}\] Then the partition function of the replica manifold \(\Sigma_{n}\) and the entanglement entropy of single interval \(\mathcal{A}\) are, \[\mathrm{Tr}\rho_{A}^{n}=k_{n}\langle\sigma_{n}(x_{1},y_{1})\tilde {\sigma}_{n}(x_{2},y_{2})\rangle_{\mathrm{BMS}^{\otimes n}}^{plane}=k_{n}|x_{2 1}|^{-\frac{c_{L}}{12}(n-\frac{1}{n})}e^{-\frac{c_{M}}{12}(n-\frac{1}{n})\frac{ y_{21}}{x_{21}}} \tag{18}\] \[S_{EE;vac}^{BMS}=-\lim_{n\to 1}\partial_{n}\mathrm{Tr}\rho_{A}^{n}= \frac{c_{L}}{6}\log\frac{|x_{21}|}{\delta_{x}}+\frac{c_{M}}{6}\left(\frac{y_{2 1}}{x_{21}}\right) \tag{19}\] where \(\delta_{x}>0\) is the \(x\) direction UV regulator introduced by \(k_{n}\) relating to the regularization of the divergent partition function \(\mathrm{Tr}\rho_{A}^{n}\). We can see from (19) that for the bulk correspondence of Einstein gravity with \(c_{L}=0\), the entanglement entropy \(S^{BMS}_{EE;vac}\) can be negative due to possible different sign of \(y_{21}\) and \(x_{21}\). When considering the finite temperature state on the plane, we use the following general thermal periodicity, \[(\phi,u)\sim(\phi+i\beta_{\phi},u-i\beta_{u}) \tag{20}\] where \(\{\phi,u\}\) denote the coordinates on the thermal cylinder. We can use the BMS conformal transformation to map from plane to cylinder [60], \[x=e^{\frac{2\pi\phi}{\beta_{\phi}}},\quad y=\frac{2\pi}{\beta_{\phi}}e^{\frac{ 2\pi\phi}{\beta_{\phi}}}\left(\phi\frac{\beta_{u}}{\beta_{\phi}}+u\right) \tag{21}\] The two point function of twist operators evaluated on this cylinder then is given by \[\langle\sigma_{n}(\phi_{1},u_{1})\tilde{\sigma}_{n}(\phi_{2},u_{2})\rangle^{ cylinder}_{\rm BMS^{\otimes n}}=k_{n}\big{(}\frac{\beta_{\phi}}{\pi\delta_{\phi}} \sinh\frac{\pi\left|\phi_{21}\right|}{\beta_{\phi}}\big{)}^{-2\Delta_{n}}e^{- 2\xi_{n}\left(\frac{\pi(u_{21}+\frac{\beta_{u}}{\beta_{\phi}}\phi_{21})}{ \beta_{\phi}}\coth\frac{\pi\phi_{21}}{\beta_{\phi}}-\frac{\beta_{u}}{\beta_{ \phi}}\right)}\] Thus the entanglement entropy of single interval \(\mathcal{A}\) in the thermal state is, \[S^{BMS}_{EE;thermal}=\frac{c_{L}}{6}\log\big{(}\frac{\beta_{\phi}}{\pi\delta_{ \phi}}\sinh\frac{\pi\left|\phi_{21}\right|}{\beta_{\phi}}\big{)}+\frac{c_{M}} {6}\bigg{[}\frac{\pi}{\beta_{\phi}}\big{(}u_{21}+\frac{\beta_{u}}{\beta_{\phi }}\phi_{21}\big{)}\coth\big{(}\frac{\pi\phi_{21}}{\beta_{\phi}}\big{)}-\frac{ \beta_{u}}{\beta_{\phi}}\bigg{]} \tag{22}\] **Comments:** A key assumption in the above calculations is that the twist operators \(\sigma_{n}\) and \(\tilde{\sigma}_{n}\) belong to the singlet version of HWR of the BMS\({}_{3}\) algebra. It was noticed and proved in [69] that primary fields can also be organized in a Jordan chain and form a multiplet which is a reducible but indecomposable module together with their descendants. Cyclic \(\mathbf{Z}_{n}\) Orbifold theory of BMS field on replicated Carrollian geometry is a much unexplored area and could go beyond the usual expectations. For example, see [72] for the subtleties about the Orbifold theory of 2D WCFT living in Newton-Cartan geometry. It is possible that the twist operators in BMS orbifold theory belong to the multiplet version of HWR, thus affect the final answer of entanglement entropy (19) and (22). ### Swing Surface Proposal Instead of directly extend the HRT formula into the flat\({}_{3}\)/BMSFT model, [60] derive the swing surface configuration by the exact correspondence between boundary local modular flow generators and bulk killing vector fields. 
The advantage of this method is that the holographic dictionary for the entanglement entropy is automatically consistent; the disadvantage is that the local modular flow only exists for special entangled regions and special states. For these reasons, [61; 62] update the method and propose a more general prescription to obtain the swing surface \(\gamma_{\mathcal{A}}\) for holographic entanglement entropy by using the approximate modular flow in both the boundary and the bulk. Let us summarize the main steps of these developments in the flat\({}_{3}\)/BMSFT model following [61; 62] closely. For an interval \({\cal A}\) on the vacuum state of BMS\({}_{3}\) field theory, which is dual to a bulk spacetime invariant under the same set of symmetries, we can find a consistent boundary flow generator \(\zeta\) and the corresponding bulk Killing field \(\xi\), \[\zeta=\sum_{i}a_{i}h_{i}\equiv\partial_{\tau_{B}},\quad\xi=\sum_{i}a_{i}H_{i} \equiv\partial_{\tau_{b}} \tag{23}\] where \(a_{i}\) are parameters depending on the entangling region \({\cal A}\), \(\tau_{B},\tau_{b}\) are the boundary and bulk Rindler times respectively, satisfying the periodicity conditions \[\tau_{B,b}\sim\tau_{B,b}+2\pi i, \tag{24}\] \(h_{i}\) are the vacuum symmetry generators defined on the boundary, and \(H_{i}\) are the corresponding bulk Killing vectors under the dictionary of flat\({}_{3}\)/BMSFT holography, satisfying \(H_{i}|_{\partial{\cal M}}=h_{i}\). The boundary modular flow generator \(\zeta\) needs to satisfy the following conditions: 1). The transformation \(x\rightarrow\tilde{x}=f(x)\) is a symmetry of the field theory, where the domain of \(f(x)\) is the causal domain \(D[{\cal A}]\); 2). The transformation \(x\rightarrow\tilde{x}\) is invariant under a pure imaginary (thermal) identification \(\left(\tilde{x}^{1},\tilde{x}^{2}\right)\sim\left(\tilde{x}^{1}+i\tilde{ \beta}^{1},\tilde{x}^{2}+i\tilde{\beta}^{2}\right)\); 3). The one-parameter flow \(\tilde{x}^{i}[s]\) generated by \(\zeta\) through the exponential map \(e^{s\zeta}\) leaves the causal domain \(D[{\cal A}]\) and its boundary \(\partial D[{\cal A}]\) invariant when \(s\) is real. The periodicity (24) is interpreted as a thermal identification, which implies that the bulk modular flow generator \(\xi\) features bifurcating Killing horizons with surface gravity \(2\pi\). We denote the bifurcating surface by \(\gamma_{\xi}\) and the two Killing horizons by \(N_{l,r}\), which satisfy \[\xi|_{\gamma_{\xi}}\!=0 \tag{25}\] \[\nabla^{\mu}\xi^{\nu}|_{\gamma_{\xi}}\!=2\pi n^{\mu\nu}\] (26) \[\xi^{\nu}\nabla_{\nu}\xi^{\mu}|_{N_{l,r}}\!=\pm 2\pi\xi^{\mu}\] (27) \[\xi_{[\mu}\nabla_{\nu}\xi_{\lambda]}|_{N_{l,r}}\!=0 \tag{28}\] where \(n^{\mu\nu}=n^{\mu}_{1}n^{\nu}_{2}-n^{\mu}_{2}n^{\nu}_{1}\) is the unit binormal to \(\gamma_{\xi}\). (25) follows from the fact that \(\gamma_{\xi}\) is an extremal surface; (26) shows that \(\xi\) is the boost generator in the local Rindler frame near \(\gamma_{\xi}\); (27) means the surface gravity is indeed the constant value \(2\pi\); (28) is Frobenius' theorem, guaranteeing that the vector field is hypersurface orthogonal. Finally, in this special case the ropes \(\gamma_{(p)}\) of the swing surface \(\gamma_{{\cal A}}\) are null geodesics generated by the bulk modular flow, while the bench \(\gamma\) of the swing surface is the set of fixed points of the bulk modular flow generator \(\xi\) that extremizes the distance between the ropes.
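For later reference, the field-theory answer (19) that any such bulk prescription is designed to reproduce follows from the replica two-point function (18) in a single step. Below is a minimal symbolic sketch (Python/sympy), omitting the normalization \(k_{n}\) and the UV cutoff \(\delta_{x}\); it is only an illustration of the replica limit, not part of the construction above.

```python
import sympy as sp

n, x21 = sp.symbols('n x21', positive=True)
cL, cM, y21 = sp.symbols('c_L c_M y21', real=True)

# Tr rho_A^n from the twist-operator two-point function (18),
# with the normalization k_n and the UV cutoff dropped
tr_rho_n = x21**(-cL/12*(n - 1/n)) * sp.exp(-cM/12*(n - 1/n)*y21/x21)

# replica limit: S = -d/dn Tr rho_A^n at n = 1 (Tr rho_A = 1 in this normalization)
S = sp.simplify(-sp.diff(tr_rho_n, n).subs(n, 1))
print(S)  # c_L*log(x21)/6 + c_M*y21/(6*x21), i.e. eq. (19) up to the cutoff term
```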
For more general states and boundary configurations, we need to resort to the approximate modular flow \(\zeta^{(p)}\), which on the 2D boundary can be obtained from the expressions for single intervals on the vacuum by sending the other endpoint to infinity. For each endpoint of the interval, it is possible to find the null geodesic whose tangent vector is an asymptotic Killing vector reducing to \(\zeta^{(p)}\) at the conformal boundary. The general swing surface is then the minimal extremal surface bounded by these null geodesics. One major difference compared to the standard HRT surface in AdS/CFT is that in the flat\({}_{3}\)/BMSFT model the fixed points of the boundary modular flow \(\zeta\) are not the fixed points of the bulk modular flow \(\xi\), meaning that the bifurcating surface \(\gamma_{\xi}\) is not attached to the interval \(\mathcal{A}\) at the boundary. **Comments:** * In [61] the authors propose that the holographic dictionary for entanglement entropy in the flat\({}_{3}\)/BMSFT model is the area of the swing surface \(\gamma_{\mathcal{A}}\) (or bench \(\gamma\)) \[S_{\mathcal{A}}=\frac{\text{Area}(\gamma_{\mathcal{A}})}{4G}=\min_{X_{\mathcal{A}}\sim\mathcal{A}}\frac{\text{Area}(X_{\mathcal{A}})}{4G},\quad X_{\mathcal{A}}=X\cup\gamma_{b\partial}\] (29) However, as also noticed in the same paper, the problem is how the area term can take the negative values allowed by (19). In the next section, we find that the holographic dictionary for \(S_{\mathcal{A}}\) needs more ingredients than the pure-gravity area term alone. * The descriptions of the bifurcating horizons \(N_{l,r}\)1 in section (2.3) of [61] are not precise. According to the results in section 3, the bifurcating horizons \(N_{l,r}\) connected to the boundary interval \(\mathcal{A}\) are both future directed, and the Killing horizons emitted from the finite bench \(\gamma\) only touch the future null infinity \(\mathscr{I}^{+}\) at two single points. Note that these unusual features seem to be unique to the flat\({}_{3}\)/BMSFT model and are not due to the swing surface construction; see [67; 72] for comparison. Footnote 1: Note our notations are different from those in [61]. ### PEE correspondence Since in this paper we only take the partial entanglement entropy (PEE) correspondence as a useful tool, we present just its most basic elements; see [13; 67] for more physical interpretations. In [67], the author made two proposals about the holographic dictionary (PEE correspondence) for the entanglement contour of a single interval in the context of AdS\({}_{3}\)/CFT\({}_{2}\). The first proposal states that the partial entanglement entropy \(S_{A}(A_{2})\), see Figure 2(a), is given by a linear combination of entanglement entropies of relevant subsets inside the interval \(\mathcal{A}\) for general 2D theories, \[S_{A}(A_{2})=\frac{1}{2}\left(S_{A_{1}\cup A_{2}}+S_{A_{2}\cup A_{3}}-S_{A_{1} }-S_{A_{3}}\right) \tag{30}\] The second proposal is a fine-structure analysis of the entanglement wedge through boundary and bulk modular flow, which is used in this paper as a way to explore the "entanglement wedge" of the flat\({}_{3}\)/BMSFT model. This bulk and boundary one-to-one correspondence can also be obtained by intersection of RT surfaces, see Figure 2(a).
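As a quick sanity check of (30), the contributions assigned to the three components add up to the full entropy \(S_{A}\) when every entropy is evaluated with the vacuum single-interval formula (19). The combinations used below for the edge components \(A_{1}\) and \(A_{3}\) are the natural counterparts of (30) and are written here only for illustration; a minimal sympy sketch:

```python
import sympy as sp

cL, cM, d = sp.symbols('c_L c_M delta', positive=True)
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', positive=True)
y1, y2, y3, y4 = sp.symbols('y1 y2 y3 y4', real=True)

def S(xa, ya, xb, yb):
    # vacuum single-interval entropy (19) for the interval [(xa, ya), (xb, yb)]
    return cL/6*sp.log((xb - xa)/d) + cM/6*(yb - ya)/(xb - xa)

S_A  = S(x1, y1, x4, y4)    # whole interval A = A1 u A2 u A3
S_12 = S(x1, y1, x3, y3)    # A1 u A2
S_23 = S(x2, y2, x4, y4)    # A2 u A3
S_1  = S(x1, y1, x2, y2)    # A1
S_3  = S(x3, y3, x4, y4)    # A3

pee2 = (S_12 + S_23 - S_1 - S_3)/2   # eq. (30)
pee1 = (S_1 + S_A - S_23)/2          # edge counterpart, for illustration
pee3 = (S_3 + S_A - S_12)/2          # edge counterpart, for illustration

print(sp.simplify(pee1 + pee2 + pee3 - S_A))  # 0: the three pieces sum to S_A
```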
Finally the holographic dictionary about PEE says that \[S_{A}(A_{i})=\frac{\text{Length}\left(\varepsilon_{i}\right)}{4G} \tag{31}\] **Comments:** As rigorously said by [13] the bulk modular flows exactly settle at the boundary when they approach the boundary, so there are no orbits in the bulk. Thus to really find a boundary and bulk correspondence through local modular flow method, we should choose a cut-off surface, see Figure 2. Then there is a degree of freedom in choosing which modular flow line in the chosen cut-off surface correspond to a specific modular flow line at the asymptotic boundary. This freedom not only can affect the bulk point of PEE correspondence, i.e., \(\epsilon_{i}\) in Figure 2, but also can affect the shape of the line between boundary and bulk corresponding points. [67] make a good proposal on how to fix this freedom in AdS\({}_{3}\)/CFT\({}_{2}\), but how to fix this freedom in flat\({}_{3}\)/BMSFT model is not clear. As a byproduct in this paper, we find there is a consistent way to fix the d.o.f. in flat case although the underlying physical reasons need further study. In any case, this is not the focus of this paper and the intersection of RT like surfaces way turn out to be more general and less uncertain. ## 3 Quotient manifolds and observations After a summary of the phase space of Einstein gravity solutions under the consistent asymptotic boundary conditions (11) in flat\({}_{3}\)/BMSFT model, we give the exact Penrose diagrams (not cartoon pictures) of the zero mode solutions, which are quotient manifolds of global Minkowski spacetime (the global flat\({}_{3}\)). To gain more intuition, a subtle issue about drawing boundary causal domain \(D[\mathcal{A}]\) on compact Penrose diagram of the covering global flat\({}_{3}\) is shown. Then two key observations about holographic entanglement entropy (swing surface) and holographic reflected entropy related (EWCS) are presented. One is about how to derive the "negative" sign of holographic entanglement entropy and reflected entropy in the bulk, the other one is about whether the finite bench or the infinite bifurcating surface is more fundamental, or at least more useful, in finding the "entanglement wedge" of this model. The first two subsections are preliminary to understand the explorations in this paper, the last two subsections are a revisit of the results in [61; 64]. We try to extract some general lessons from these new observations about flat holography. More precisely, the above mentioned asymptotic boundary conditions near future null infinity [73] in the retarded Bondi coordinates \((u,r,\phi)\) is \[g_{rr}=0,\;\;g_{ru}=-1+\mathcal{O}\left(\frac{1}{r}\right),\;\;g_{r\phi}=0,\; \;g_{u\phi}=\mathcal{O}(1),\;\;g_{uu}=\mathcal{O}(1),\;\;g_{\phi\phi}=r^{2} \tag{3.1}\] where \(\phi\sim\phi+2\pi\). The phase space of solutions to pure Einstein's equations in Bondi gauge is parametrized by two periodic functions \(\Theta(\phi)\) and \(\Xi(\phi)\) such that \[ds^{2}=\Theta(\phi)du^{2}-2dudr+2\left[\Xi(\phi)+\frac{1}{2}u\partial_{\phi} \Theta(\phi)\right]dud\phi+r^{2}d\phi^{2}, \tag{3.2}\] where the null infinity is located at \(r\rightarrow\infty\). The zero mode solutions with constant \(\Theta(\phi)=M\) and \(\Xi(\phi)=J/2\) describe some classical backgrounds of spacetime and are our main interest. With the convention \(8G=1\), the parameters \(M\) and \(J\) correspond to the canonical energy and the angular momentum of the spacetime. 
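As a cross-check that (3.2) solves the vacuum Einstein equations for arbitrary periodic functions \(\Theta(\phi)\) and \(\Xi(\phi)\), one can verify Ricci flatness directly (in three dimensions the vacuum equations without cosmological constant reduce to \(R_{\mu\nu}=0\)). A minimal sketch with sympy, not tied to any particular zero mode solution:

```python
import sympy as sp

u, r, phi = sp.symbols('u r phi')
Theta = sp.Function('Theta')(phi)
Xi = sp.Function('Xi')(phi)
x = [u, r, phi]

W = Xi + u*sp.diff(Theta, phi)/2
# Bondi-gauge metric (3.2) in coordinates (u, r, phi)
g = sp.Matrix([[Theta, -1, W],
               [-1,     0, 0],
               [W,      0, r**2]])
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbols Gamma^a_{bc}
    return sp.Rational(1, 2)*sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
                                 + sp.diff(g[d, c], x[b])
                                 - sp.diff(g[b, c], x[d])) for d in range(3))

def Ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
    #          + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ab}
    return sp.simplify(sum(sp.diff(Gamma(a, b, c), x[a]) - sp.diff(Gamma(a, a, b), x[c])
                           + sum(Gamma(a, a, d)*Gamma(d, b, c) - Gamma(a, c, d)*Gamma(d, a, b)
                                 for d in range(3))
                           for a in range(3)))

print([Ricci(b, c) for b in range(3) for c in range(b, 3)])  # expect all zeros
```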
In particular, the \(M=-1\), \(J=0\) solution corresponds to the global Minkowski vacuum, the \(-1<M<0\) solutions correspond to the conical defect geometries, and the \(M=J=0\) solution, called the null orbifold, is supposed to be the analogue of zero temperature BTZ. Solutions with \(M>0\) is usually referred to as flat cosmological solutions (FSC) and have Cauchy horizons. This fact can be seen clearly in the ADM form [74] of the zero mode metric \[ds^{2}=-\left(-M+\frac{J^{2}}{4r^{2}}\right)^{2}dt^{2}+\left(-M+\frac{J^{2}}{4r ^{2}}\right)^{-2}dr^{2}+r^{2}\left(d\varphi+\frac{J}{2r^{2}}dt\right)^{2} \tag{3.3}\] which implies that the Cauchy horizon is located at \[r_{H}\equiv|r_{c}|=\frac{|J|}{2\sqrt{M}} \tag{3.4}\] We are also interested in the vacuum state in flat Poincare coordinates with metric \[ds^{2}=-2dudr+r^{2}dz^{2},\qquad r\geq 0,\;u\in(-\infty,\infty),\;z\in(- \infty,\infty) \tag{3.5}\] which can be obtained by decompactify the angular direction \(\phi\) of \(M=J=0\) null orbifold solution. This is the flat limit of the Poincare patch of AdS\({}_{3}\). ### Boundaries and Horizons In 3D pure Einstein gravity, there is no propagating degree of freedom. The only way to construct different solutions is by taking quotient. Like the BTZ black holes are the discrete quotient manifolds of global AdS\({}_{3}\), the above mentioned zero mode backgrounds, i.e., the \(M=0,J=0\) (the Poincare vacuum), \(M>0\) (FSC) and \(M<0\) (including global Minkowski) zero mode backgrounds, are also the discrete quotient manifolds of global flat\({}_{3}\). For each case we first give the coordinate transformations [61] that map that zero mode background to the global flat3 with coordinates \((t,x,y)\), then point out the corresponding boundaries or horizons of these quotient manifolds. Figure 3: The figures show the Penrose diagrams of the quotient manifolds, i.e., the Poincaré vacuum, \(M<0\) zero mode backgrounds and \(M>0\) zero mode backgrounds, in 3D global Minkowski spacetime respectively. All yellow light cones are asymptotic boundaries of the global flat\({}_{3}\); null red surface in figure 3(a) denotes the boundary \(t+y\geq 0\) of the Poincaré vacuum; red surface (not null) in figure 3(b) denotes the boundary \(x^{2}+y^{2}=2r_{c}^{2}/(-M)\) of \(M<0\) zero mode background; null green surface in figure 3(c) denotes Cauchy horizon \(t-x>0\) and null purple surface denotes Cauchy horizon \(t+x>0\) of \(M>0\) zero mode background. * **The Poincare vacuum** The coordinate transformations 2 are, Footnote 2: Note the transformations here are different with [60, 61], which depend on the boundary interval \(\mathcal{A}\). \[t=\frac{(\alpha^{2}+4z^{2})r}{4\alpha}+\frac{2u}{\alpha},\quad x=zr+\frac{\beta }{\alpha},\quad y=\frac{(\alpha^{2}-4z^{2})r}{4\alpha}-\frac{2u}{\alpha}\] (3.6) for any value of \(\alpha\) and \(\beta\). Without loss of generality we choose \(\alpha=1,\beta=0\) in this paper. In order to see the boundary of spacetime clearly, we need an inverse coordinate transformations \[u=\frac{t^{2}-x^{2}-y^{2}}{4(t+y)},\quad r=2(t+y),\quad z=\frac{x}{2(t+y)}.\] (3.7) So the Poincare vacuum cover only the \(t+y\geq 0\) part of the global Minkowski spacetime, see Figure 3. 
* \(M<0\) **zero mode backgrounds** The coordinate transformations are: \[t =\frac{1}{\sqrt{-M}}\left(r-Mu-\sqrt{-M}r_{c}\phi\right)\] \[x =\frac{1}{\sqrt{-M}}\left[r\cos\sqrt{-M}\phi-r_{c}\sin\sqrt{-M} \phi\right]\] (3.8) \[y =\frac{1}{\sqrt{-M}}\left[r\sin\sqrt{-M}\phi-r_{c}\cos\sqrt{-M} \phi\right]\] So we have the relation \(x^{2}+y^{2}=(r^{2}+r_{c}^{2})/(-M)\), which leads to the boundary location \(x^{2}+y^{2}=2r_{c}^{2}/(-M)\), See Figure 3. Note that if we have \(J=r_{c}=0\), i.e., the whole Minkowski spacetime, then the codimension one boundary in Figure 3 would shrink to one dimensional line with \(x=0,y=0\) excluding nothing from the global flat3 and consistent with the expectation. * \(M>0\) **zero mode backgrounds** The coordinate transformations are: \[u= \frac{1}{M}\left(r-\sqrt{M}y-\sqrt{M}r_{c}\phi\right),\quad r= \pm\sqrt{M(t^{2}-x^{2})+r_{c}^{2}}\] \[\phi=-\frac{1}{M}\log\left[\frac{\sqrt{M}(t-x)}{r+r_{c}}\right]= \frac{1}{M}\log\left[\frac{\sqrt{M}(t+x)}{r-r_{c}}\right]\] (3.9) The spacetime region with \(r>r_{c}\) exterior to Cauchy horizon locating at \(r=r_{c}\) cover the parameter range \[t-x>0,\qquad t+x>0,\] (3.10) while the interior of the Cauchy horizon \(0<r<r_{c}\) cover the parameter range \[t-x>0,\qquad t+x<0.\] (3.11) In Figure 3, the exterior of Cauchy horizon is above both the green and blue surfaces, and the interior of Cauchy horizon is the right part of the region enclosed by both the green and blue surfaces. Note that if we draw the swing surface in the above compact Penrose diagrams, the finite bench would always penetrate the boundaries or horizons of the original spacetime. This curious phenomena is discussed in the last section. ### Order of taking the Infinity Limit The order of taking the infinite limit is a subtle issue in mathematics. Here is a good example due to the infinite range of both \(r\) and \(u\) coordinates in Bondi gauge. For the above coordinate transformations, if we take the limit \(r>|u|\to\infty\) of Bondi coordinate, which is equivalent to keep coordinate \(u\) a finite but arbitrary value and taking \(r\) to infinity, the BMS field theory would always live on the future null infinity \(\mathscr{I}^{+}\) for all \(M=0\), \(M>0\) and \(M<0\) cases. However, we know that the global Minkowski spacetime with \(M=-1,J=0\) contain not only the future null infinity \(\mathscr{I}^{+}\) but also the past null infinity \(\mathscr{I}^{-}\), which actually comes from another limit 3\(u<-r\to-\infty\). With the above observation, we explore the following limits Footnote 3: For simplicity in this subsection, we omit the mathematical proof of the statements, which can be obtained by following the same route as (4.11) and (4.18). \[1).\;\;u<-r\to-\infty,\qquad 2).\;\;u>r\to\infty \tag{3.12}\] in \(M=0\), \(M>0\) and \(M<0\) solutions separately and summarize the new phenomena. * For the usual \(r>|u|\to\infty\) limit, we have the expected Penrose diagram as 4 for all the zero mode solutions. * **The Poincare vacuum** New phenomena only happen in the first limit of (3.12). As shown in 4, the \(u<-r\) part of the boundary \(\partial D[\mathcal{A}]\) of causal domain \(D[\mathcal{A}]\) go around the spacelike infinity \(i^{0}\) making now the boundary \(\partial D[\mathcal{A}]\) a closed curve. * \(M>0\) **zero mode backgrounds** New phenomena happen in both limits of (3.12). 
When \(u<-r\), \(\partial D[\mathcal{A}]\) goes around the spacelike infinity \(i^{0}\) as in the case of the Poincare vacuum; when \(u>r\), \(\partial D[\mathcal{A}]\) goes through the timelike infinity \(i^{+}\), see Figure 4, to a similar configuration symmetric about the \(\Phi=\frac{\pi}{2}\) axis. Thus the causal domain \(D[\mathcal{A}]\) contains two disconnected parts, which is quite unusual. * \(M<0\) **zero mode backgrounds** New phenomena happen only in the first limit of (3.12). As shown in Figure 4, the \(u<-r\) part of \(\partial D[\mathcal{A}]\) traces out a configuration on past null infinity \(\mathscr{I}^{-}\) similar to the one on future null infinity \(\mathscr{I}^{+}\). This is the only case in which the field theory can touch \(\mathscr{I}^{-}\), which is consistent with the boundary of the zero mode backgrounds found in the last subsection. If we consider the configurations of the boundary interval \(\mathcal{A}\) or the corresponding swing surface \(\gamma_{\mathcal{A}}\) in the unusual limits (3.12), they are the limiting ones of the usual cases and do not affect our main conclusions. Figure 4: These figures show the usual and unusual limits (3.12) of the boundary \(\partial D[\mathcal{A}]\) of the causal domain \(D[\mathcal{A}]\) in the \(M=0\), \(M>0\) and \(M<0\) zero mode backgrounds. Brown/purple lines are the boundaries \(\partial D[\mathcal{A}]\), and green lines are the image of the ordinate \(z=0\) or \(\phi=0\) of the original coordinates. The first panel shows the expected configuration of the usual limit \(r>|u|\to\infty\), and the remaining panels show the unusual limits of the Poincaré vacuum, the \(M>0\) zero mode backgrounds and the \(M<0\) zero mode backgrounds respectively, with explicitly marked parameter ranges. See the detailed descriptions in the main text. ### Negative pure and mixed state entanglement measures We already observed that the entanglement entropy can be negative (2.19) in the flat\({}_{3}\)/BMSFT model. From the BMS field theory point of view, the reason for and meaning of negative entanglement entropy need further exploration. However, from the Einstein gravity point of view, the negative holographic entanglement entropy is already troubling enough and may ruin the correspondence of the swing surface proposal. In this subsection, we give the mathematical derivation of the negative sign of the holographic entanglement entropy by identifying the entanglement entropy as a Noether surface charge 4 and explicitly using the swing surface construction. We also give physical intuition for why the situation in the flat\({}_{3}\)/BMSFT model is different from the one in AdS/CFT. Footnote 4: Thank Wei Song for pointing out this viewpoint to us and Boyang Yu for early cooperation on this subsection. The holographic entanglement entropy can be viewed as a Noether surface charge evaluated along the HRT surface in AdS/CFT [75], \[\mathcal{S}_{\mathcal{A}}=\mathcal{Q}_{\xi}^{\gamma_{\mathcal{A}}}=-\frac{1} {16\pi G}\int_{\text{HRT}}\nabla^{\mu}\xi^{\nu}\epsilon_{\mu\nu\rho}dx^{\rho}= -\frac{1}{8G}\int_{\text{HRT}}n^{\mu\nu}\epsilon_{\mu\nu\rho}dx^{\rho} \tag{3.13}\] where \(dx^{\rho}\) denotes the unit vector along the HRT surface, and \(\xi^{\nu}\) denotes the bulk modular flow vector. We used the fact (2.26) in the last equality. The surface charge (3.13) is actually a line integral in the 3D bulk, and to do the computation we should embed it into a specific coordinate system. Take the Poincare coordinates of AdS/CFT as an example.
If we fix the sign of \(\epsilon_{txy}=1\) in Poincare coordinates \((t,x,y)\) and integrate from the left endpoint of interval \(\mathcal{A}\) to the right one, we would always have the following formulas \[n^{\mu\nu}\epsilon_{\mu\nu\rho}|_{\gamma_{\mathcal{A}}}=-2\hat{e}_{\rho}, \quad\mathcal{Q}_{\xi}^{\gamma_{\mathcal{A}}}=\frac{1}{4G}\int_{left}^{right }\hat{e}_{\rho}dx^{\rho}=\frac{\text{Area}(\gamma_{\mathcal{A}})}{4G}. \tag{3.14}\] In CFT the modular flow of a general boundary interval always have positive component along the positive direction of ordinate. While in BMS\({}_{3}\) field theory when fixing the abscissa, we can change the \(u\) coordinate of the boundary interval to change the relative sign of the modular flow to the positive direction of ordinate. This global degree of freedom of boundary interval in flat\({}_{3}\)/BMSFT model is the key to understand the negative sign of holographic entanglement entropy. Mathematically, using (4.49) we can get the parametrization equations of the bifurcating surface in Bondi coordinates of the Poincare vacuum 5 Footnote 5: Note there are typos in (3.38) and (3.39) of [61]. \[u(z)=\frac{-u_{r}(z-z_{l})^{2}+u_{l}(z-z_{r})^{2}}{(z_{l}-z_{r})(2z-z_{l}-z_{r} )},\quad r(z)=-\frac{2(u_{l}-u_{r})}{(z_{l}-z_{r})(2z-z_{l}-z_{r})} \tag{3.15}\] then the normalized directional vector \(dx^{\rho}\) along the bench can be obtained as \[dx^{\rho}=\text{sign}(u_{l}-u_{r})\left(\frac{(z-z_{l})(z-z_{r})}{z_{l}-z_{r} },\frac{2}{z_{l}-z_{r}},\frac{(z_{l}+z_{r}-2z)^{2}}{u_{l}-u_{r}}\right) \tag{3.16}\] where we can see explicitly the sign of \(dx^{\rho}\) depend on the relative value of \(u_{l},u_{r}\) when we fix the values of \(z_{l}\),\(z_{r}\). The vector \(\nabla^{\mu}\xi^{\nu}\epsilon_{\mu\nu\rho}\) can also be computed using (4.49) \[\hat{e}_{\rho}=\nabla^{\mu}\xi^{\nu}\epsilon_{\mu\nu\rho}=\left(\frac{8\pi}{z _{l}-z_{r}},\frac{4\pi(z-z_{l})(z-z_{r})}{z_{l}-z_{r}},-\frac{8\pi(u_{l}-u_{r })}{(z_{l}-z_{r})^{2}}\right) \tag{3.17}\] thus we have \[\mathcal{Q}_{\xi}^{\gamma A}=\frac{1}{4G}\int_{z_{l}}^{z_{r}}\hat{e}_{\rho}dx^{ \rho}=\text{sign}(u_{l}-u_{r})\frac{\text{Area}(\gamma_{\mathcal{A}})}{4G} \tag{3.18}\] which is indeed the expected form of holographic entanglement entropy in the Poincare vacuum of the flat\({}_{3}\)/BMSFT model [62], \[\mathcal{S}_{\mathcal{A}}=\frac{u_{lr}}{2Gz_{lr}}=\text{sign}(u_{l}-u_{r}) \frac{|2l_{u}|}{4Gl_{z}}=\mathcal{Q}_{\xi}^{\gamma A} \tag{3.19}\] where \(u_{lr}=u_{l}-u_{r}\) and \(z_{lr}=z_{l}-z_{r}\). We emphasize that not only the entanglement entropy [61], but also the reflected entropy [65], entanglement negativity [64] and PEE in flat\({}_{3}\)/BMSFT model all can be negative. These key observations imply us that this is actually a general character of flat\({}_{3}\)/BMSFT model. Although for the mixed state entanglement measures we do not have a local modular flow to mathematically prove the above statement. In flat\({}_{3}\)/BMSFT model the bulk theory is pure Einstein gravity, however we need other property of swing surface to match the expectation of being holographic entanglement entropy and have to consult to the Noether charge formalism. This is unusual to what we learned in AdS/CFT. We leave the discussion in the last section. ### Finite bench or Infinity bifurcating surface? 
Due to the phenomenon that the swing surface always penetrates the boundary or horizon of the original spacetime, in the following we explore the causality structure in global flat\({}_{3}\) and set aside the quotient construction momentarily. In order to specify which bulk region in global flat\({}_{3}\) has properties similar to the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) of AdS/CFT holography, we face two unavoidable questions. The first is whether this region is a closed co-dimension zero bulk region. The other is which part of the bifurcating surface is more fundamental, or at least more useful: the finite bench \(\gamma\) or the infinite bifurcating surface \(\gamma_{\xi}\). Due to the non-local property in the \(u\) direction of the boundary BMS field theory, which can be seen from the two point correlation function (2.12), it is rather unclear whether a closed bulk region related to the swing surface \(\gamma_{\mathcal{A}}\) exists. One example is the AdS\({}_{3}\)/WCFT holographic model, where the "pre-entanglement wedge" is not closed in the \(v\) direction due to the non-local feature in the \(z\) direction of the boundary WCFT [72]. Related to the second question, there are two special bulk surfaces from which we could grow null normal congruences to construct the boundaries of a bulk region. One is the whole bifurcating surface \(\gamma_{\xi}\), which is unbounded and invariant under the bulk modular flow \(\xi\). The other is the finite bench \(\gamma\), which is a bounded portion of \(\gamma_{\xi}\). Due to the homology condition between the swing surface and the boundary interval, as well as the Noether charge formalism (3.13), the finite bench \(\gamma\) may seem more basic. The above questions turn out to be closely related to each other in the flat\({}_{3}\)/BMSFT model. We approach these problems in more practical ways rather than through the more philosophical homology condition. To be consistent with the presentation style of this section, we just state the observations. The existing literature [64; 65] only considers two symmetric intervals on the boundary, for which the EWCS ends on the finite bench \(\gamma\). However, when considering more general configurations of two boundary intervals, we observe that the endpoints of the EWCS can exceed \(\gamma\). We plot several new situations in Figure 5, where the usually expected entanglement wedge [61; 65] and the true EWCS are shown. We can see from the pictures that a connected entanglement wedge that is not carefully defined would lead to serious problems. Based on these observations, we see that the whole modular-invariant bifurcating horizon \(\gamma_{\xi}\) may be more basic. We provide more evidence for this perspective through the PEE, the BPE and the bulk modular flow in the next section. ## 4 Bulk Causality related to single interval In this section we give a detailed analysis of the causality structures related to the finite bench \(\gamma\) and the infinite bifurcating surface \(\gamma_{\xi}\) of a single boundary interval \(\mathcal{A}\). We use the PEE as a tool to explore the fine correspondence between boundary and bulk modular flow. Once familiar with the subtleties of the flat\({}_{3}\)/BMSFT model encountered in the process, we turn to the question of finding the "entanglement wedge" \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) in the flat\({}_{3}\)/BMSFT model. As a byproduct, we solve the problem of the PEE in the flat\({}_{3}\)/BMSFT model stated in section 1. For simplicity and without loss of generality we present all the detailed analysis in the Poincare vacuum.
Let us slightly generalize the parametrization in [61] of swing surface in the Poincare vacuum (3.5). Considering a general boundary field interval \(\mathcal{A}\) with endpoints \[\partial\mathcal{A}=\big{\{}\big{(}u_{l},z_{l}\big{)},\big{(}u_{r},z_{r}\big{)} \big{\}} \tag{4.1}\] Figure 5: Configurations of EWCS (brown lines between points \(Q_{1}\) and \(Q_{2}\)) for general boundary two intervals \(A\) and \(B\) (blue interval), as well as the usual expected connected entanglement wedge (blue region) are shown. Blue dotted lines are null ropes \(\gamma_{(p)}\), blue solid lines are bench \(\gamma\) and the chain lines are part of the whole bifurcating surface \(\gamma_{\xi}\). The boundary conditions of the null ropes \(\gamma_{l,r}\) emanating from endpoints \(\partial{\cal A}\) are simply \[\gamma_{l,r}:\ u=u_{l,r},\ z=z_{l,r} \tag{4.2}\] The length of the spacelike geodesic connected between two null ropes \(\gamma_{l,r}\) is given by \[L(r_{l},r_{r})=\sqrt{2r_{r}(u_{l}-u_{r})+r_{l}(-2u_{l}+2u_{r}+r_{r}(z_{l}-z_{r} )^{2})} \tag{4.3}\] where \(r_{l,r}\) are radial coordinates of the points on \(\gamma_{l,r}\). The extreme of (4.3) is found at \[r_{l}=-r_{r}=-\frac{2(u_{l}-u_{r})}{(z_{l}-z_{r})^{2}}. \tag{4.4}\] From here we can see a necessity to analytically continue the original Poincare vacuum spacetime with only \(r\geq 0\) to the one that also includes negative values of \(r\) in order to include just the single interval swing surface \(\gamma_{\cal A}\). The bench \(\gamma\) is just a straight line going through the points parametrized by \[t(s)=t_{l}+(t_{r}-t_{l})s,\quad x(s)=x_{l}+(x_{r}-x_{l})s,\quad y(s)=y_{l}+(y _{r}-y_{l})s \tag{4.5}\] where the left and right endpoints of \(\gamma\) have following expressions, \[(t_{l,r},x_{l,r},y_{l,r})=\left(2u_{l,r}-\frac{(u_{l,r}-u_{r,l})(1+4z_{l,r}^{2 })}{2(z_{l,r}-z_{r,l})^{2}},-\frac{2(u_{l,r}-u_{r,l})z_{l,r}}{(z_{l,r}-z_{r,l}) ^{2}},-2u_{l,r}+\frac{(u_{l,r}-u_{r,l})(-1+4z_{l,r}^{2})}{2(z_{l,r}-z_{r,l})^{ 2}}\right) \tag{4.6}\] ### Bifurcating horizons There are several coordinate systems that we would go back and forth when trying to clearly show the causal relations between boundary field theory and bulk gravity theory. * **Bondi coordinates** of the original Poincare vacuum: \[\text{bulk}:(u,r,z),\quad\text{boundary}:(u,r\to\infty,z)\] (4.7) * **Cartesian coordinates and Penrose coordinates** of the covering global Minkowski spacetime: \[\text{Cartesian}:(t,x,y),\quad\text{Penrose}:(U,V,\Phi),\ (T,X,Y)\] (4.8) which are related by the standard textbook transformations, \[U=\arctan{(t-\sqrt{x^{2}+y^{2}})},\quad V=\arctan{(t+\sqrt{x^{2}+y^{2}})}, \quad\Phi=\phi=\arctan{\frac{y}{x}}\] \[T=V+U,\quad X=(V-U)\cos{\Phi},\quad Y=(V-U)\sin{\Phi}\] (4.9) **From boundary to boundary** we first deal with the image of field interval \({\cal A}\) on the future null infinity \({\mathscr{I}}^{+}\)6 of Penrose diagram with coordinates \((U,V,\Phi)\). There are several facts about this map: Footnote 6: This is a choice of us, which means that we can also choose to map boundary field theory to the past null infinity \({\mathscr{I}}^{-}\) by putting minus signs in coordinate transformations (3.6). * constant \(z\) line of field theory would be mapped to constant \(\Phi\) line on \(\mathscr{I}^{+}\) of the Penrose diagram. So a strip like region which is the causal domain \(D[\mathcal{A}]\) of field interval would map to a corner region on the boundary null cone. In particular, the \(z=0\) axis would be mapped to \(\Phi=\frac{\pi}{2}\) line. 
This can be seen from the following transformation, \[\Phi=\arctan\frac{y}{x}|_{r\to\pm\infty}=\arctan\frac{1-4z^{2}}{4z}.\] (4.10) When \(z\) goes from \(0\) to \(\infty\), \(\Phi\) would go from \(\frac{\pi}{2}\) to \(-\frac{\pi}{2}\). Because (4.10) is a monotonic decreasing function when \(z>0\), then the map is one to one. * A symmetric interval about the origin (\(u=0,z=0\)) would map to a symmetric interval about the point (\(U=0,V=\frac{\pi}{2},\Phi=\frac{\pi}{2}\)) on \(\mathscr{I}^{+}\) of the Penrose diagram. This can be seen as follows: \[\sqrt{x^{2}+y^{2}}|_{r\to\pm\infty}=\frac{1}{4}(1+4z^{2})|r|+ \frac{2u(1-4z^{2})}{1+4z^{2}}\frac{|r|}{r},\] \[\text{when }r\to\infty,\quad U=\arctan\frac{4u}{1+4z^{2}}, \quad V=\frac{\pi}{2}.\] (4.11) (4.10) and (4.11) give us a bijective map from the infinite \((u,z)\) plane where the original BMS field theory live to the whole future null infinity \(\mathscr{I}^{+}\) of the compact Penrose diagram. **bench \(\gamma\) and bifurcating surface \(\gamma_{\xi}\)** we choose the symmetric boundary interval \(\mathcal{A}\) in (4.1) for convenience, \[-u_{l}=u_{r}=\frac{l_{u}}{2},\quad-z_{l}=z_{r}=\frac{l_{z}}{2}\] (4.12) putting them into (4.5), we get the parametrization of the finite bench \[(t,x,y)=\left(\lambda,-\frac{l_{u}}{l_{z}},-\frac{l_{z}^{2}+1}{l_{z}^{2}-1} \lambda\right),\quad|\lambda|<\left|\frac{l_{u}}{2}(1-\frac{1}{l_{z}^{2}}) \right|.\] (4.13) When the parameter \(\lambda\) has parameter range \(\lambda\in(-\infty,\infty)\) in (4.13), the parameter equations denote the whole bifurcating surface \(\gamma_{\xi}\). In the Penrose diagram, \(\gamma_{\xi}\) always end on the spacelike infinity \(i^{0}\) with coordinates \((U,V,\Phi)=(-\frac{\pi}{2},\frac{\pi}{2},\pm\frac{\pi}{2})\), which are the results of \(|(l_{z}^{2}+1)/(l_{z}^{2}-1)|>1\). **bifurcating Killing horizon** The bifurcating Killing horizon \(N_{l,r}\) are composed of null congruence emitted from the bifurcating surface \(\gamma_{\xi}\). Locally each null generator of \(N_{l,r}\) is perpendicular to \(\gamma_{\xi}\) at the intersection point. They could be parametrized as, \[t=\lambda_{1}+\kappa\lambda_{2}\,\text{sgn}(\kappa),\quad x=-\frac{l_{u}}{l_{z }}\pm\sqrt{\kappa^{2}-1}\lambda_{2}\,\text{sgn}(\kappa),\quad y=\kappa\lambda _{1}+\lambda_{2}\,\text{sgn}(\kappa) \tag{4.14}\] where \(\kappa\equiv-\frac{l_{2}^{2}+1}{l_{2}^{2}-1}\) and \(\text{sgn}(\kappa)\) denotes the sign function of parameter \(\kappa\). \(\lambda_{1}\) parametrize \(\gamma_{\xi}\) similar to (4.5), and \(\lambda_{2}\) parametrize the null congruence emitted from \(\gamma_{\xi}\). When \(\lambda_{2}\) take values in \((0,\infty)\), two future Killing horizons where two null ropes \(\gamma_{l,r}\) sit appear with plus and minus sign in \(x\) coordinates of (4.14). When \(\lambda_{2}\) take values in \((-\infty,0)\), two past Killing horizons would appear. From equations (4.9) and (4.14) we can draw the Killing horizons in the compact Penrose diagram, see Figure 6 and 6. In accordance with our intuition that the non-local property of BMS field theory would destroy the closeness, the two null surfaces \(N_{\gamma}\) related to the finite bench \(\gamma\) suspend in the Lorentzian Minkowski spacetime with only two points touching the null infinity, so no closed region is formed. 
We have \[N_{\gamma}\cap\mathscr{I}^{+}=\partial\mathcal{A},\quad\text{where }N_{\gamma} \subset N_{l}\cup N_{r} \tag{4.15}\] Unlike the AdS\({}_{3}\)/WCFT case [72], the Killing horizons related to \(\gamma_{\xi}\) touch the boundary of Minkowski spacetime on the whole spacelike infinity \(i^{0}\) and two lines on future null infinity \(\mathscr{I}^{+}\), so a closed region is formed. We have \[(N_{l}\cup N_{r})\cap\mathscr{I}^{+}=i^{0}\cup l_{\partial\mathcal{A}} \tag{4.16}\] Figure 6: In Penrose diagrams with coordinates \((T,X,Y)\) defined in (4.8), figure 6 and 6 show the bifurcating horizons \(N_{\gamma}\) related to finite bench \(\gamma\) (4.15) and the ones \(N_{l,r}\) related to the infinite bifurcating surface \(\gamma_{\xi}\) (4.16) of single boundary interval \(\mathcal{A}\) (blue curve) with \((l_{u}=1,l_{z}=2)\) in (4.12) separately. In addition to the basic elements appearing in figure 4, we also have null ropes (cyan curves), finite bench \(\gamma\), bifurcating surface \(\gamma_{\xi}\) (black curves) and bifurcating horizon \(N_{l,r}\) (orange surfaces). Figure 6 has no closed region bounded by the \(N_{\gamma}\cup\mathscr{I}^{+}\), while figure 6 actually form a closed region (4.17) which can not be shown perfectly due to limitation on the computational power of Mathematica. We hope the two orange curves can show the limiting behaviors of null congruence to the endpoints of bifurcating surface \(\gamma_{\xi}\). where \(l_{\partial\mathcal{A}}\subset\partial D[\mathcal{A}]\) represent part of boundary of \(D[\mathcal{A}]\) which start from endpoints \(\partial\mathcal{A}\) and end on the spacelike infinity \(i^{0}\). So a special region, we call it \(\mathcal{W}^{f}_{\mathcal{E}}[\mathcal{A}]\), bounded by bifurcating Killing horizons and asymptotic boundaries of Minkowski spacetime are formed \[\mathcal{W}^{f}_{\mathcal{E}}[\mathcal{A}]=\tilde{J}^{+}(\gamma_{\xi})=\text{ Region Bounded by }\ N_{l}\cup N_{r}\cup\mathscr{I}^{+}\cup i^{0} \tag{4.17}\] where \(\tilde{J}^{+}(\gamma_{\xi})\) denote the bulk causal future of \(\gamma_{\xi}\). There are two important features about the Figure 6, both are related to the limiting behaviors of the null congruence emitted from \(\gamma_{\xi}\) * All null congruence emitted from finite points of \(\gamma_{\xi}\) would only intersect with asymptotic boundaries on one point (4.15), which is one endpoint of boundary interval \(\mathcal{A}\). Mathematically when we have \(\lambda_{1}<\infty\) and \(\lambda_{2}\to\infty\) in (4.14), then \[r\to|\kappa|\lambda_{2}+\lambda_{1}\pm\frac{\sqrt{\kappa^{2}-1}}{|\kappa|} \frac{l_{u}}{l_{z}}=t+\lambda_{1}\pm\frac{2l_{u}}{l_{z}^{2}+1},U\to\pm\arctan \frac{2l_{u}}{l_{z}^{2}+1},V\to\frac{\pi}{2}\] (4.18) Comparing (4.12), (4.11) and (4.18) we see the validity of the above statement. * The null congruence emitted from the endpoints of \(\gamma_{\xi}\) locating on the spacelike infinity \(i^{0}\) would first go around \(i^{0}\) until it touches the boundary of causal domain \(D[\mathcal{A}]\), then it would go up following this boundary line until touches the endpoint \(\partial\mathcal{A}\) of field interval, see the orange lines in Figure 6. Clearly there is a critical point for its different behaviors on the Penrose diagram. Actually when \(\lambda_{2}\ll\lambda_{1}\), the first part of \(t,x,y\) parametrization in (4.14) would dominate and the effect of this term is changing the \(\Phi\) angle. 
When \(\lambda_{2}\gg\lambda_{1}\), the second part, which shows more explicitly its lightlike property, of \(t,x,y\) parametrization in (4.14) would dominate. The competition of first part and second part in (4.14) tell us where the turning point sit. ### Decomposition of bulk spacetime In this subsection we analyze the decomposition of the global flat\({}_{3}\) in terms of both the future and past bifurcating horizons of \(\gamma_{\xi}\), and make a comparison with AdS/CFT to show the unusual features in flat\({}_{3}\)/BMSFT model. Viewing from the global coordinates of AdS/CFT, the four null Killing horizons of HRT surface related to single interval \(\mathcal{A}\) together would separate the whole Lorentzian AdS spacetime into four non-intersection parts, which nicely match with the boundary causal structure [20]. Mathematically, we can decompose boundary spacetime \(\mathcal{B}\) as follows: \[\mathcal{B}=D[\mathcal{A}]\cup D[\mathcal{A}^{c}]\cup J^{+}[\partial\mathcal{ A}]\cup J^{-}[\partial\mathcal{A}] \tag{4.19}\] where \(D[\mathcal{A}]\) is the boundary causal domain of interval \(\mathcal{A}\) and \(J^{\pm}[p]\) denote the causal future and past of point \(p\) on \(\mathcal{B}\). This tells us that the full boundary spacetime \(\mathcal{B}\) would decompose into four causally non-overlapping regions: the causal domain of the region \(\mathcal{A}\) and its complement \(\mathcal{A}^{c}\), and the causal future and past of the entangling surface \(\partial\mathcal{A}\). For the bulk spacetime \(\mathcal{M}\) we have the decomposition, \[\mathcal{M}=\tilde{D}[\mathcal{R}_{\mathcal{A}}]\cup\tilde{D}[\mathcal{R}_{ \mathcal{A}^{c}}]\cup\tilde{J}^{+}[\gamma_{\mathcal{A}}]\cup\tilde{J}^{-}[ \gamma_{\mathcal{A}}] \tag{4.20}\] Figure 8: Different perspectives on the bulk decomposition (4.23) with respect to the bifurcating surface \(\gamma_{\xi}\) of global flat\({}_{3}\). In addition to the main elements appearing in figure 6, we also have two past null bifurcating horizons (purple surfaces) which again actually form closed surfaces like the future ones. Causal regions (I) to (IV) are defined below (4.21). Figure 7: Bulk decomposition (4.20) of global AdS\({}_{3}\) with respect to the HRT surface [20] is presented. Basic components are boundary interval (blue), HRT surface (dotted black), boundary \(\partial D[\mathcal{A}]\) (yellow) of causal domain \(D[\mathcal{A}]\) and \(D[\mathcal{A}^{c}]\) related to the complement interval \(\mathcal{A}^{c}\). Causal regions (I) to (IV) are defined below (4.21). where the tilde of corresponding notation denote the bulk one, for example \(\tilde{D}\) is the bulk causal domain and \(\mathcal{R}_{\mathcal{A}}\) is the spacelike homology surface interpolating between boundary subregion \(\mathcal{A}\) and bulk HRT surface \(\gamma_{\mathcal{A}}\). 
The causal split of the bulk into two spacelike and two timelike separated regions from the perspective of \(\gamma_{\mathcal{A}}\) precisely match the boundary causal decomposition (4.19) when restrict (4.21) to the boundary due to the following relations \[\tilde{D}[\mathcal{R}_{\mathcal{A}}]\cup\mathcal{B}=D[\mathcal{A}]\quad\tilde{ D}[\mathcal{R}_{\mathcal{A}^{c}}]\cup\mathcal{B}=D[\mathcal{A}^{c}]\quad\tilde{J}^{ \pm}[\gamma_{\mathcal{A}}]\cup\mathcal{B}=J^{\pm}[\partial\mathcal{A}] \tag{4.21}\] To facilitate the discussion about AdS and flat spacetime in a unified way, we define the following notations, see Figure 7 and 8 * bulk Region (I): Causal future of bifurcating horizon \(\gamma_{\xi}\): \(\tilde{J}^{+}[\gamma_{\xi}]\) * bulk Region (II): Causal past of bifurcating horizon \(\gamma_{\xi}\): \(\tilde{J}^{-}[\gamma_{\xi}]\) * bulk Region (III), (IV): Two spacelike region which contain all the points spacelike separated from \(\gamma\) For flat\({}_{3}\)/BMSFT model, mathematically we have following relations on the boundary, \[\mathcal{B}=\mathscr{I}^{+}=D[\mathcal{A}]\cup D[\mathcal{A}^{c}] \tag{4.22}\] where \(\mathcal{B}\) denote the spacetime where BMS field theory lives and \[\mathcal{M}=\tilde{J}^{+}[\gamma_{\xi}]\cup\tilde{J}^{-}[\gamma_{\xi}]\cup( \text{III})\cup(\text{IV}) \tag{4.23}\] which satisfy \[\tilde{J}^{+}[\gamma_{\xi}]\cap\partial\mathcal{M}=\mathscr{I}^{+},\quad \tilde{J}^{-}[\gamma_{\xi}]\cap\partial\mathcal{M}=\mathscr{I}^{-},\quad( \text{III})\cap\partial\mathcal{M}=i_{1}^{0},\quad(\text{IV})\cap\partial \mathcal{M}=i_{2}^{0} \tag{4.24}\] where \(i_{1}^{0}\) denotes part of the spacelike infinity with parameter range \(\Phi\in(-\pi/2,\pi/2)\), \(i_{2}^{0}\) denotes part of the spacelike infinity with parameter range \(\Phi\in(\pi/2,3\pi/2)\). Although regions (III) and (IV) can be defined as the bulk causal domains \[(\text{III})=\tilde{D}[\mathcal{R}_{i_{1}^{0}}],\quad(\text{IV})=\tilde{D}[ \mathcal{R}_{i_{2}^{0}}] \tag{4.25}\] of the spacelike homology surface \(\mathcal{R}_{i_{1}^{0}}\) and \(\mathcal{R}_{i_{2}^{0}}\) with properties \[\partial\mathcal{R}_{i_{1}^{0}}=i_{1}^{0}\cup\gamma_{\xi},\quad\partial \mathcal{R}_{i_{2}^{0}}=i_{2}^{0}\cup\gamma_{\xi},\quad\mathcal{R}_{i_{1}^{0} }\cup\mathcal{R}_{i_{1}^{0}}=\Sigma_{i^{0}} \tag{4.26}\] where \(\Sigma_{i^{0}}\) is a bulk Cauchy surface of the whole Minkowski spacetime \(\mathcal{M}\), we can not find special meaning of \(i_{1}^{0}\), \(i_{2}^{0}\) and the corresponding homology surface \(\mathcal{R}_{i_{1}^{0}}\) and \(\mathcal{R}_{i_{2}^{0}}\). Also we have no good idea about how to make physical distinctions between \(\mathcal{R}_{i_{1}^{0}}\) and \(\mathcal{R}_{i_{2}^{0}}\). we summarize main features of causality structures in flat\({}_{3}\)/BMSFT model by comparisons with AdS/CFT * Both global AdS and global flat\({}_{3}\) can be decomposed into four regions according to (I) \(\sim\) (IV). In AdS spacetime, (III) is identified as the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\), and the homology surface \(\mathcal{R}_{\mathcal{A}}\) is a spacelike surface in (III). Boundary interval \(\mathcal{A}\) is a spacelike interval and spacelike separated from the HRT surface. In flat spacetime, (I) is the special region we called \(\mathcal{W}_{\mathcal{E}}^{f}[\mathcal{A}]\) (4.17). \(\mathcal{A}\) is a interval viewed from the bulk and locate in the causal future of bifurcating horizon \(\gamma_{\xi}\). 
* For AdS/CFT the bulk decomposition of the spacetime \(\mathcal{M}\) precisely matches the boundary one (4.21). The entanglement wedges of \(\mathcal{A}\) and its complement \(\mathcal{A}^{c}\) have no overlap. In the flat\({}_{3}\)/BMSFT model, by contrast, the bulk decomposition has no relation to the boundary one (4.24). In addition, the special region \(\mathcal{W}_{\mathcal{E}}^{f}[\mathcal{A}]\) is exactly the same as the one of the complement interval, \(\mathcal{W}_{\mathcal{E}}^{f}[\mathcal{A}^{c}]\). ### PEE: intersection of swing surface In the following two subsections, we study the PEE correspondence in the flat\({}_{3}\)/BMSFT model in two ways. One is similar to the intersection of HRT surfaces; the other uses the correspondence between boundary and bulk modular flow. This quantity not only provides further bulk quantities to explore beyond the swing surface and the EWCS, but also gives us a chance to become familiar with the structure of the bulk modular flow. As a byproduct, we solve the PEE problem in the flat\({}_{3}\)/BMSFT model, thus providing a foundation for the observed match of the BPE [66; 68]. On the field side, without loss of generality, we choose the boundary interval \(\mathcal{A}\) to be the straight line between the two points \(A_{1},A_{2}\) with coordinates \((z=-1,u=-\frac{1}{2})\) and \((z=1,u=\frac{1}{2})\), see Figure 9(b). Figure 9: Figure 9(a) shows the swing surface intersection method to determine the PEE correspondence. Bulk geodesic \(\gamma_{2}\) corresponds to boundary subinterval \(\mathcal{A}_{2}\). Figure 9(b) shows the boundary modular flow lines with parametrization consistent with (4.41). Points \(a_{1},b_{1},a_{3}\) and \(a_{2},b_{2}\) lie on the same modular flow lines, respectively. Then the PEE for the subinterval \(a_{1}a_{2}\) with endpoints \[\left(z=z_{a1},u=\frac{z_{a1}}{2}+\lambda_{B1}(z_{a1}^{2}-1)\right),\quad \left(z=z_{a2},u=\frac{z_{a2}}{2}+\lambda_{B2}(z_{a2}^{2}-1)\right) \tag{4.27}\]
Putting (112) into (108) we can see that the intersection point is \[\left(-\frac{3}{2}\lambda_{B},-\frac{1}{2},\frac{5}{2}\lambda_{B}\right) \tag{113}\] Thus we proved our above statement about the one to one correspondence between modular flow line denoted by \(\lambda_{B}\) and bulk point on bifurcating horizon (113), which is consistent with modular flow invariant property of PEE [67]. Two features need to be emphasized. One is that the two corresponding points \(a_{1}\) and \(b_{1}\) can both run on the same modular flow line independently, the specific bulk point would not change. This is not the same as AdS/CFT case, where the two points should run synchronously. Another feature is that the intersection point seldom lie on the finite bench of swing surface, which can be seen from the value of \(s_{1,2}\) in (112). The second feature provides us more evidence that only the information about the finite bench is not enough. Because from PEE we can intuitively derive BPE correspondence [13], so we have proved from a more basic step the observations in [66; 68]. We list several unconventional configurations in Figure 10 related to adjacent and non-adjacent BPE for completeness. Again we plot the usual expected connected entanglement wedge to show its problem. ### PEE: boundary and bulk modular flow In this subsection we use modular flow method to explore the PEE correspondence. Subtleties appear in flat\({}_{3}\)/BMSFT model compared to AdS/CFT case, which is another manifestation of modular flow property of BMS\({}_{3}\) field theory stated in section 1. We first revisit the modular flow method in standard AdS\({}_{3}\)/CFT\({}_{2}\) case using our notations. This rewriting manifests a freedom of bulk and boundary correspondence as mentioned in section 3, which is needed for the success of this method in flat\({}_{3}\)/BMSFT model 7. We can obtain the modular flow generator for a general interval Footnote 7: Thank Qiang Wen for discussion about this point. \[\mathcal{A}=\{\left(-\frac{r_{p}+r_{m}}{2},-\frac{r_{p}-r_{m}}{2}\right),\left( \frac{r_{p}+r_{m}}{2},\frac{r_{p}-r_{m}}{2}\right)\} \tag{4.31}\] in CFT by the coordinate transformations [76] \[x+t=r_{p}\frac{x_{r}+t_{r}-1}{x_{r}+t_{r}+1},\qquad\quad x-t=r_{m}\frac{x_{r}- t_{r}-1}{x_{r}-t_{r}+1} \tag{4.32}\] from Rindler spacetime which have local flow generator \[l^{\mu}\propto x_{r}\partial_{t_{r}}+t_{r}\partial_{x_{r}} \tag{4.33}\] to the causal domain \(D[\mathcal{A}]\) of this interval. Then we get the the modular flow generator \(\zeta\) of symmetric interval \(\mathcal{A}\) lying in \(t=\frac{r_{p}-r_{m}}{r_{p}+r_{m}}x\) timeslice as follows, \[\zeta^{\mu}=r_{m}r_{p}\left[(r_{p}+r_{m})P_{t}+(r_{p}-r_{m})P_{x }-(r_{p}+r_{m})k^{t}+(r_{p}-r_{m})k^{x}\right] \tag{4.34}\] \[\propto \left(r_{m}^{2}r_{p}+r_{p}^{2}r_{m}-r_{p}(t-x)^{2}-r_{m}(t+x)^{2 }\right)\partial_{t}-\left(-r_{m}^{2}r_{p}+r_{p}^{2}r_{m}+r_{p}(t-x)^{2}-r_{m} (t+x)^{2}\right)\partial_{x}\] Figure 10: Configurations about BPE correspondence of general boundary two intervals \(A,B\) for both adjacent and non-adjacent cases (but with different colors) in flat\({}_{3}\)/BMSFT model are presented. Bulk geodesics \(Q_{1}Q_{2}\) are the bulk dual of boundary BPE(A:B). Comparisons with those in [66; 68] for preliminary set up and notations are useful. Again the original expected connected entanglement wedge are present to show shortcomings of the original definition. 
where \(P_{t},P_{x},k^{t},k^{x}\) are the boundary conformal generators: t direction translation, x direction translation, t-component special conformal transformation and x-component special conformal transformation. The explicit expressions of these conformal generators are \[\begin{split} P_{t}=\partial_{t},\quad P_{x}=\partial_{x}\\ k^{t}=(t^{2}+x^{2})\partial_{t}+2tx\partial_{x},\quad k^{x}=(t^{ 2}+x^{2})\partial_{x}+2tx\partial_{t}\end{split} \tag{4.35}\] The corresponding bulk killing vector fields are \[\begin{split} P_{t}=\partial_{t},&\quad P_{x}= \partial_{x}\\ k^{t}=(t^{2}+x^{2}+z^{2})\partial_{t}+2tx\partial_{x}+2tz \partial_{z},& k^{x}=(t^{2}+x^{2}-z^{2})\partial_{x}+2tx\partial_ {t}+2zx\partial_{z}\end{split} \tag{4.36}\] Using the holographic dictionary between (4.35) and (4.36) we can obtain the exact bulk modular flow generator \(\xi\) as \[\xi^{\mu} =t^{\mu}_{bulk}\partial_{t}+x^{\mu}_{bulk}\partial_{x}+z^{\mu}_ {bulk}\partial_{z} \tag{4.37}\] \[\propto\partial_{z}(-2(r_{m}+r_{p})tz+2(r_{p}-r_{m})xz)+ \partial_{x}(r_{m}r_{p}(r_{p}-r_{m})+(r_{p}-r_{m})(t^{2}+x^{2}-z^{2})\] \[-2(r_{m}+r_{p})tx)+\partial_{t}(r_{m}r_{p}(r_{m}+r_{p})+2(r_{p}-r _{m})tx-(r_{m}+r_{p})(t^{2}+x^{2}+z^{2})).\] We can see that when taking the \(z\to 0\) boundary limit, the bulk killing vector field (4.37) reduces to the boundary modular flow generator (4.34). In the following we choose the interval \(\mathcal{A}\) to lie on the \(t=0\) constant time slice with \(r_{m}=r_{p}\) in (4.31). Then from (4.37) we can easily derive the location of bifurcating surface and bifurcating Killing horizon, * bifurcating Killing horizon: \[-(t^{\mu}_{bulk})^{2}+(x^{\mu}_{bulk})^{2}+(z^{\mu}_{bulk})^{2}=0\to z=\pm \sqrt{(t\pm R)^{2}-x^{2}}\] (4.38) * bifurcating surface: \[t^{\mu}_{bulk}=x^{\mu}_{bulk}=z^{\mu}_{bulk}=0\] (4.39) \[\{t=0,z^{2}+x^{2}=R^{2}\},\quad\text{Or}\quad\{t^{2}=R^{2},z=x=0\}\] (4.40) For explicit manifestation take \(r_{p}=r_{m}=1\) in (4.31), we can derive the corresponding parametrization equations of boundary modular flow from (4.34), \[\frac{dx(t)}{dt}=-\frac{2x(t)t}{1-x(t)^{2}-t^{2}}\to x(t)=\frac{1}{2}\left( \lambda_{B}\pm\sqrt{\lambda_{B}^{2}+4t^{2}-4}\right) \tag{4.41}\] where \(t\) parametrize one modular flow line and \(\lambda_{B}\) distinct different modular flow lines. \(t\) and \(\lambda_{B}\) together manifest the degree of freedom of 2d plane. Similarly from (4.37) we get the trajectory of bulk modular flow, \[\begin{split} x(t)&=\frac{1}{2(1+\lambda_{b1}^{2})} \left(\lambda_{b2}\pm\sqrt{-4(1-t^{2})(1+\lambda_{b1}^{2})+\lambda_{b2}^{2}} \right),\\ z(t)&=\frac{\lambda_{b1}}{2(1+\lambda_{b1}^{2})} \left(\lambda_{b2}\pm\sqrt{-4(1-t^{2})(1+\lambda_{b1}^{2})+\lambda_{b2}^{2}} \right)\end{split} \tag{4.42}\] where parameters \(t,\lambda_{b1},\lambda_{b2}\) together manifest the degree of freedom of 3d bulk. The plus and minus sign in (111) and (112) denote different branches that we need. When the above mentioned parameters have the relation \[\lambda_{b1}\to 0,\quad\lambda_{b2}=\lambda_{B}, \tag{113}\] the bulk modular flow line reduce to the boundary one respectively. When they have the relation \[\lambda_{b1}=\frac{1}{2}\sqrt{\lambda_{b2}^{2}-4}, \tag{114}\] the bulk modular flow line sit on the bifurcating horizons. 
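Both limits just stated can be checked numerically from the bulk trajectories (4.42): setting \(\lambda_{b1}=0,\lambda_{b2}=\lambda_{B}\) reproduces the boundary flow (4.41), while \(\lambda_{b1}=\frac{1}{2}\sqrt{\lambda_{b2}^{2}-4}\) places the trajectory on the Killing horizon (4.38). A small numerical sketch (taking \(r_{p}=r_{m}=1\) so that \(R=1\), with arbitrarily chosen sample values):

```python
import numpy as np

def bulk_traj(t, lb1, lb2, sign=+1):
    # one branch of the bulk modular flow trajectory (4.42)
    disc = np.sqrt(-4*(1 - t**2)*(1 + lb1**2) + lb2**2)
    x = (lb2 + sign*disc)/(2*(1 + lb1**2))
    z = lb1*(lb2 + sign*disc)/(2*(1 + lb1**2))
    return x, z

t = 0.3
# (i) lb1 -> 0, lb2 = lambda_B: the trajectory reduces to the boundary flow (4.41)
lam_B = 4.0
x, z = bulk_traj(t, 0.0, lam_B)
x_bdy = (lam_B + np.sqrt(lam_B**2 + 4*t**2 - 4))/2
print(np.isclose(z, 0.0), np.isclose(x, x_bdy))

# (ii) lb1 = sqrt(lb2^2 - 4)/2: the trajectory sits on a bifurcating Killing horizon (4.38)
lb2 = 4.0
lb1 = np.sqrt(lb2**2 - 4)/2
for sign in (+1, -1):
    x, z = bulk_traj(t, lb1, lb2, sign)
    on_horizon = np.isclose(x**2 + z**2, (t + 1)**2) or np.isclose(x**2 + z**2, (t - 1)**2)
    print(on_horizon)
```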
Choosing a specific co-dimension one plane in 3d bulk by fixing the value of \(\lambda_{b2}=\lambda_{B}\)8, we can get the boundary and bulk correspondence of PEE, Footnote 8: Note the plane defined here is not the same as the modular plane in [67], which is an explicit manifestation of the freedom mentioned above. \[\text{boundary:}\left(x=\frac{1}{2}(\lambda_{B}\pm\sqrt{\lambda_{B}^{2}-4}),t =0\right)\leftrightarrow\text{bulk}:\left(x=\frac{2}{\lambda_{B}},z=\frac{ \sqrt{\lambda_{B}^{2}-4}}{\lambda_{B}},t=0\right) \tag{115}\] which is consistent with the results in [67]. While for the Poincare vacuum of flat\({}_{3}\)/BMSFT model, the exact boundary modular flow generator \(\zeta\) and the corresponding bulk Killing vector field \(\xi\) are [62] as follows, \[\zeta \propto W(z)\partial_{u}+Y(z)\partial_{z},\quad\xi\propto W(z) \partial_{u}+X(z)\partial_{z}-r\partial_{z}X(z)\partial_{r} \tag{116}\] \[X(z) =Y(z)-\frac{u}{r}Y^{\prime\prime}(z)-\frac{1}{r}T^{\prime}(z), \quad W(z)=T(z)+uY^{\prime}(z)\] (117) \[T(z) =\frac{2\pi[u_{r}(z-z_{l})^{2}-u_{l}(z-z_{r})^{2}]}{(z_{r}-z_{l}) ^{2}},\quad Y(z)=-\frac{2\pi(z-z_{l})(z-z_{r})}{z_{r}-z_{l}},\quad z\in[z_{l}, z_{r}].\] The final expressions for \(\zeta\) and \(\xi\) are \[\zeta:\begin{cases}&\zeta^{\mu}=\frac{2\pi}{(z_{l}-z_{r})^{2}}\left(u_{r}(z-z _{l})^{2}-u_{l}(z-z_{r})^{2}+(2z-z_{l}-z_{r})(z_{l}-z_{r})u\right)\\ &\zeta^{z}=\frac{2\pi}{(z_{l}-z_{r})^{2}}(z-z_{l})(z-z_{r})(z_{l}-z_{r})\end{cases} \tag{118}\] and \[\xi:\begin{cases}&\xi^{\mu}=\frac{2\pi}{(z_{l}-z_{r})^{2}}\left(u_{r}(z-z_{l} )^{2}-u_{l}(z-z_{r})^{2}+(2z-z_{l}-z_{r})(z_{l}-z_{r})u\right)\\ &\xi^{r}=\frac{2\pi}{(z_{l}-z_{r})^{2}}\left[2(u_{r}-u_{l})+r\left(z_{l}^{2}- z_{r}^{2}+2z(z_{r}-z_{l})\right)\right]\\ &\xi^{z}=\frac{2\pi}{(z_{l}-z_{r})^{2}}\left(\frac{2u_{l}(z-z_{r})-2u_{r}(z-z _{l})}{r}+(z-z_{l})(z-z_{r})(z_{l}-z_{r})\right)\end{cases} \tag{119}\] From (116), (117) and (118), (129) we can see the existence of a consistent boundary limit \(\xi|_{r\to\infty}=\zeta\). Again without loss of generality, we set the interval to be (\(l_{u}=1,l_{z}=2\)) in (109). Then the parametrization equations for boundary and bulk modular flow are \[\text{boundary}:\quad u(z)=\frac{z}{2}+(z^{2}-1)\lambda_{B} \tag{120}\] \[\text{bulk}:\quad u(z)=\frac{1}{4}\left(z+2\lambda_{b1}\pm p( \lambda_{b1},\lambda_{b2},z)\right),\ \ r(z)=\frac{z-2\lambda_{b1}\pm p(\lambda_{b1},\lambda_{b2},z)}{2(1-z^{2})} \tag{121}\] where we have \[p(\lambda_{b1},\lambda_{b2},z)=\sqrt{z^{2}+4\lambda_{b2}z^{2}-4\lambda_{b1}z+4 \lambda_{b1}^{2}-4\lambda_{b2}}. \tag{112}\] A careful analysis tell us that when we have relations \[\{\lambda_{b2}=-4\lambda_{B}\lambda_{b1},\;\lambda_{b1}\to-\infty\} \Longrightarrow p(\lambda_{b1},\lambda_{b2},z)\to z-2\lambda_{b1}+4\lambda_{B} (z^{2}-1)=-\infty \tag{113}\] bulk modular flow trajectory (111) would reduce to the boundary one (110), where \(r(z)\) in (112) goes to \(\infty\) that is the boundary limit. When we have relations \[\lambda_{b2}=\lambda_{b1}^{2}-\frac{1}{4},\quad z=\frac{1}{2\lambda_{b1}} \tag{114}\] bulk modular trajectories lie on the bifurcating horizons, and intersect the bifurcating surface at the point \[t=\frac{3}{4}\lambda_{b1},\quad x=-\frac{1}{2},\quad y=-\frac{5}{4}\lambda_{b1} \tag{115}\] Thus when the parameters satisfy \[\lambda_{b1}=-2\lambda_{B},\quad\lambda_{b2}=4\lambda_{B}^{2}-\frac{1}{4}, \quad z=-\frac{1}{4\lambda_{B}} \tag{116}\] we can get the bulk corresponding point (109). 
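As a consistency check, the one-parameter family of intersection points with the bifurcating surface found above, \((t,x,y)=\left(\tfrac{3}{4}\lambda_{b1},-\tfrac{1}{2},-\tfrac{5}{4}\lambda_{b1}\right)\), indeed lies on the bifurcating surface (4.13) of the interval \((l_{u}=1,l_{z}=2)\), i.e. \(x=-l_{u}/l_{z}\) and \(y=-\frac{l_{z}^{2}+1}{l_{z}^{2}-1}\,t\); a two-line sympy sketch:

```python
import sympy as sp

lb1 = sp.symbols('lambda_b1', real=True)
lu, lz = 1, 2
t, x, y = sp.Rational(3, 4)*lb1, -sp.Rational(1, 2), -sp.Rational(5, 4)*lb1

print(sp.simplify(x + sp.Rational(lu, lz)))                   # 0, i.e. x = -l_u/l_z
print(sp.simplify(y + sp.Rational(lz**2 + 1, lz**2 - 1)*t))   # 0, i.e. y = -(l_z^2+1)/(l_z^2-1) t
```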
Note that in the flat\({}_{3}\)/BMSFT model we need to change both bulk modular flow parameters \(\lambda_{b1}\) and \(\lambda_{b2}\) simultaneously in order to move from the asymptotic boundary to the PEE corresponding point on the bifurcating surface \(\gamma_{\xi}\).

### Entanglement wedge \(\mathcal{W}_{\mathcal{E}}^{f}[\mathcal{A}]\)?

In our parametrizations (106) and (107), two features distinguish the bulk modular flow inside the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) from the flow outside it. The first is that, for both branches, the modular time \(t\) has parameter range \(t\in[-1,1]\) for both the boundary and the bulk modular flow; within this finite range of modular time the bulk modular flows reach the bifurcating surface, while outside this range the flow lines lie outside the entanglement wedge. The second is that when \(\lambda_{b1}\geq 0\) exceeds the value fixed by (108), the continuous bulk modular flow line breaks apart and at the same time leaves the entanglement wedge of this interval. See Figure 11 for a gradual change with fixed \(\lambda_{b2}=4\) and \(\lambda_{b1}\) varying from \(0\) to \(2\). One complication of the flat\({}_{3}\)/BMSFT model is that the bulk modular flow hits the bifurcating surface outside the range of boundary modular time \(z\in[-1,1]\), as can be seen from (116). Apart from this subtlety, the situation parallels the AdS/CFT case. As we increase the parameter \(\lambda_{b1}\) from \(-\infty\) to \(-2\lambda_{B}\) and at the same time \(\lambda_{b2}\) from \(-4\lambda_{B}\lambda_{b1}\) to \(4\lambda_{B}^{2}-\frac{1}{4}\), the bulk modular trajectories move from the asymptotic boundary to the bifurcating surface. When \(\lambda_{b2}\) exceeds the value \(4\lambda_{B}^{2}-\frac{1}{4}\), the modular flow lines disconnect into two parts; see Figure 11 for an explicit illustration of this process. It turns out that in the flat\({}_{3}\)/BMSFT model the connected bulk modular flow lines can grow from the boundary causal domain \(D[\mathcal{A}]\) and pass through every point in \(\mathcal{W}_{\mathcal{E}}^{f}[\mathcal{A}]\) defined in (4.17), in particular through the boundary causal domain of the complement interval \(D[\mathcal{A}^{c}]\).

Figure 11: The first panel presents a gradual change of bulk modular flow lines in the Poincaré patch of AdS/CFT from the asymptotic boundary to the bifurcating horizon and finally out of the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\), with parameter data \((0,4),(0.5,4),(\sqrt{3},4),(2,4)\) for \((\lambda_{b1},\lambda_{b2})\) in the minus branch of (4.42). The remaining two panels present a gradual change of bulk modular flow lines in the Poincaré vacuum of the flat\({}_{3}\)/BMSFT model from the asymptotic boundary to the bifurcating horizon and finally out of the special region \(\mathcal{W}_{\mathcal{E}}^{f}[\mathcal{A}]\) defined in (4.17). For one of these panels the data are \((-3,0),(-1,-1/3),(0,-0.251)\) for \((\lambda_{b1},\lambda_{b2})\) in both branches of (4.51); for the other the data are \((0,-0.16)\) for \((\lambda_{b1},\lambda_{b2})\) in both branches of (4.51). In order to see the phenomena clearly, the latter panel is shown upside down compared to the former. The modular time, parametrized by \(z\) in (4.51), has to go beyond \((-1,1)\) to draw the complete pictures shown here.

A puzzling feature of bulk modular flows in the flat\({}_{3}\)/BMSFT model is that all flow lines are spacelike trajectories, i.e. the modular time is spacelike when viewed from the global flat\({}_{3}\). A related question is how to define the homology surface \(\mathcal{R}_{\mathcal{A}}\) in this case.
These questions are related, both algebraically and information-theoretically, to semiclassical problems on the gravity side. Algebraically, the bulk modular Hamiltonian \(H_{b}\) maps the algebra of operators \(\mathcal{A}_{B}\) localized in a region \(B\) to the algebra of operators \(\mathcal{A}_{B^{\prime}}\) in the region \(B^{\prime}\),

\[U(s)\mathcal{A}_{B}U(-s)=\mathcal{A}_{B^{\prime}},\quad U(s)=\rho^{is}=e^{-iH_{b}s} \tag{100}\]

along the bulk modular flow lines. When the modular evolution is spacelike from the bulk point of view, what is its meaning? On the quantum information side, the first quantum correction to the holographic entanglement entropy,

\[S_{\mathcal{A}}=\frac{Area(\gamma_{\mathcal{A}})}{4G}+S_{bulk}+\mathcal{O}(1/c) \tag{101}\]

requires a homology surface \(\mathcal{R}_{\mathcal{A}}\) in order to compute the reduced density matrix \(\rho_{bulk}\). How should it be defined in this case? We hope these key open problems will receive a more thorough study in the future.

## 5 Two interval entanglement phase transition and EWN

In order to compute the reflected entropy, entanglement negativity, odd entropy, or other mixed-state entanglement measures from the bulk side, we need a connected entanglement wedge. When the relative configuration of two boundary intervals is changed, there is a phase transition between the disconnected entanglement wedge and the connected one. Without a clear specification of what the connected entanglement wedge is on the gravity side, all previous calculations of the EWCS [65; 66], including the ones in this paper, are problematic. We give a criterion for the two-interval phase transition from the field theory point of view, which is already nontrivial. From the gravity point of view, the situation remains unclear.

### Entanglement phase transition

In this part we consider the entanglement phase transition for two disjoint boundary intervals using formula (2.19) with \(c_{L}=0\). Compared to the usual CFT case, here we should compare three different pairings rather than two in order to determine the minimal one, see Figure 12. If we connect the points \(B,C\) and \(A,D\), the corresponding holographic entanglement entropy is given by

\[S_{1}=c_{M}\left(\frac{u_{2}-u_{3}}{z_{2}-z_{3}}+\frac{u_{1}-u_{4}}{z_{1}-z_{4}}\right) \tag{5.1}\]

Similarly, we define

\[S_{2}=c_{M}\left(\frac{u_{1}-u_{2}}{z_{1}-z_{2}}+\frac{u_{3}-u_{4}}{z_{3}-z_{4}}\right),\quad S_{3}=c_{M}\left(\frac{u_{1}-u_{3}}{z_{1}-z_{3}}+\frac{u_{2}-u_{4}}{z_{2}-z_{4}}\right) \tag{5.2}\]

The differences between them are

\[S_{1}-S_{2}=\frac{u}{z(z-1)},\quad S_{2}-S_{3}=\frac{u}{z},\quad S_{1}-S_{3}=\frac{u}{z-1} \tag{5.3}\]

where \(u,z\) are the cross ratios defined in (2.16) with the identifications \(z=x,t=u\). Therefore we get

* \(S_{1}\) is the minimal one when: \[u<0,\quad z>1\quad\text{or}\quad u>0,\quad 0<z<1\] (5.4)
* \(S_{2}\) is the minimal one when: \[u<0,\quad 0<z<1\] (5.5)
* \(S_{3}\) is the minimal one when: \[u<0,\quad z<0\quad\text{or}\quad u>0,\quad z>1\] (5.6)

Take a symmetric configuration as an example,

\[u_{a}=-u_{2}=u_{3},\ \ z_{a}=-z_{2}=z_{3},\ \ u_{b}=-u_{1}=u_{4},\ \ z_{b}=-z_{1}=z_{4} \tag{5.7}\]

with \(z_{b}>z_{a}\). This configuration ensures that \(0<z<1\), which means that \(S_{3}\) can never be the minimal one. Fixing \(z_{a},z_{b}\) and \(u_{a}\), there is a critical value

\[u_{c}=\frac{u_{a}z_{b}}{z_{a}} \tag{5.8}\]

for \(u_{b}\) such that \(S_{1}=S_{2}\). When \(u_{b}<u_{c}\), \(S_{2}\) is the minimal one, and vice versa; a quick symbolic cross-check of this critical value is sketched below.
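The following SymPy snippet is an illustrative cross-check (not part of the original derivation) that for the symmetric configuration (5.7) the two pairings \(S_{1}\) and \(S_{2}\) of (5.1)-(5.2) indeed coincide at \(u_{b}=u_{c}=u_{a}z_{b}/z_{a}\).

```python
# Illustrative SymPy cross-check (not part of the original derivation) that the
# two pairings S_1 and S_2 of (5.1)-(5.2) coincide for the symmetric
# configuration (5.7) at the critical value u_b = u_c = u_a*z_b/z_a of (5.8).
import sympy as sp

u_a, u_b, z_a, z_b, c_M = sp.symbols('u_a u_b z_a z_b c_M', positive=True)

# endpoints (u_i, z_i) of the two intervals in the symmetric configuration (5.7)
pts = [(-u_b, -z_b), (-u_a, -z_a), (u_a, z_a), (u_b, z_b)]
slope = lambda i, j: (pts[i][0] - pts[j][0]) / (pts[i][1] - pts[j][1])

S1 = c_M * (slope(1, 2) + slope(0, 3))   # pairing (B,C) and (A,D)
S2 = c_M * (slope(0, 1) + slope(2, 3))   # pairing (A,B) and (C,D)

print(sp.simplify((S1 - S2).subs(u_b, u_a * z_b / z_a)))   # 0
```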
Indeed, we can check that the difference between the slope of the critical interval \(B\) with \(u_{b}=u_{c}\) and that of interval \(A\) vanishes,

\[\frac{u_{c}}{z_{b}}-\frac{u_{a}}{z_{a}}=0 \tag{5.9}\]

Figure 12: For a symmetric configuration of four points \(A\), \(B\), \(C\) and \(D\) in BMS\({}_{3}\) field theory, there are three competing combinations of intervals, one of which becomes the minimal one. We show the coordinates of these four points and the three pairings in different colors. When \(S_{1}\) is the minimal one, we regard the configuration as having a connected entanglement wedge, as in the AdS/CFT case.

### Entanglement wedge nesting

Entanglement wedge nesting (EWN) is a prerequisite for the existence of a connected entanglement wedge in AdS/CFT. EWN states that nested boundary regions should be dual to nested bulk regions, which is clearly consistent with subregion duality [77; 78] in AdS holography. According to the last section, the region \(\mathcal{W}^{f}_{\mathcal{E}}[\mathcal{A}]\) defined in (4.17) is a special region sharing many properties of the entanglement wedge \(\mathcal{W}_{\mathcal{E}}[\mathcal{A}]\) in AdS/CFT. However, when the EWN property of two boundary intervals is considered, this special region fails. In fact, we were not able to find any special region with the desired nesting property, see Figure 13. This is a serious problem in the flat\({}_{3}\)/BMSFT model, which puts all the previous calculations of the EWCS in a precarious position.

Figure 13: Figures 13(a) and 13(b) show the future bifurcating horizons (red) of the interval (orange) with \((l_{u}=1,l_{z}=2)\) and the future bifurcating horizons (green) of the interval (blue) with \((l_{u}=4,l_{z}=2.6)\) from different perspectives. These two intervals satisfy (5.4) and should have a connected entanglement wedge from the boundary point of view. Similarly, Figures 13(c) and 13(d) show the future bifurcating horizons (red) of the interval (orange) with \((l_{u}=1,l_{z}=2)\) and the future bifurcating horizons (green) of the interval (blue) with \((l_{u}=1/2,l_{z}=4)\) from different perspectives. These two intervals satisfy (5.5) and should have a disconnected entanglement wedge. We were, however, unable to identify a genuine difference between the two cases as far as the entanglement wedge nesting (EWN) property is concerned.

## 6 Conclusions and Open Questions

Up to now, we have exhibited the usual and unusual features related to the causal structure of the bifurcating surface \(\gamma_{\xi}\) in the flat\({}_{3}\)/BMSFT model, using the tools of modular flow and various entanglement measures (mainly the reflected entropy and the PEE). We studied the two-interval phase transition as well as the EWN problem from both the field theory side and the gravity side. However, in view of the observations presented in Section 3, there remain two major problems specific to the flat\({}_{3}\)/BMSFT model that we have so far set aside.

**The existence of the entanglement wedge?** In the flat\({}_{3}\)/BMSFT model the holographic entanglement entropy is not characterized by a pure-gravity quantity alone, i.e., a length, but also carries a "direction" character, even in pure Einstein gravity. This phenomenon appears in all the information measures: reflected entropy, entanglement negativity, odd entropy and PEE. In fact, as we argued in subsection 3.3, this is a general property of the flat\({}_{3}\)/BMSFT model, which physically puts a question mark over the existence, or the meaning, of the entanglement wedge in this model.
Historically, the success of the "It from qubit" program in Einstein gravity within AdS/CFT holography, for example the subregion-subregion duality [16], is rooted in the fact that the boundary entanglement entropy \(S_{\mathcal{A}}\) can be fully geometrized. The bulk dual of the boundary entanglement entropy is just an area functional, with no correction term and no additional character. Perhaps the fact that we cannot fully geometrize \(S_{\mathcal{A}}\) in the flat\({}_{3}\)/BMSFT model indicates that there is no well-defined and useful notion of entanglement wedge in this toy model. Let us make an analogy to clarify this point. Take the "Topological Massive Gravity/anomalous CFT" correspondence as an example: the holographic entanglement entropy contains corrections due to the Chern-Simons term [79], but the position of the RT surface is the same as in the pure gravity case. There are spacetimes, for example the vacuum AdS\({}_{3}\), that are solutions of both pure Einstein gravity and topological massive gravity. In such spacetimes, the positions of the RT surfaces and of the related null bifurcating horizons are the same, yet the dual boundary CFTs are rather different. How can the same bulk region encode different boundary information? In fact, there are no solid results on the entanglement wedge and bulk reconstruction in this duality. Another possibility would be to look for finer structures of the special bulk region \(\mathcal{W}^{f}_{\mathcal{E}}[\mathcal{A}]\) in (4.17). We hope this problem will receive more thorough study, given its substantial role in flat\({}_{3}\)/BMSFT holography.

**Flat holography: which boundary?** Flat spacetime has a more complicated asymptotic boundary structure than AdS spacetime, so an important question about flat holography is where the dual boundary field theory lives. The flat\({}_{3}\)/BMSFT model provides a vague but unexpected hint on this question. We observed in subsection 3.1, and proved in (4.4), that the bench \(\gamma\) always penetrates beyond the boundary of the original spacetime, see Figure 14(b). In other words, we have to analytically continue the original spacetime just to accommodate the finite bench \(\gamma\). From the gravity point of view, analytic continuation of spacetime is quite normal. From the holographic point of view, however, nothing similar has appeared before: in AdS/CFT holography, the RT surface always lies entirely within the quotient spacetime, both for the Poincaré patch and for BTZ black holes. This may imply that, although the field theory lives only on the future null infinity \(\mathscr{I}^{+}\), we also need information on the other asymptotic boundaries, especially the past null infinity \(\mathscr{I}^{-}\). Let us make an analogy with the standard eternal black hole, which is dual to the thermofield double state (TFD state) in AdS/CFT holography [80], see Figure 14(a). The left and right boundaries are spacelike separated from each other, and their CFT Hamiltonians have no coupling. In the TFD state, any interval located within the left (right) boundary has an RT surface that sits entirely in the left (right) exterior and does not penetrate through the horizon9. Only when the boundary interval contains parts of both the left and right CFTs can the RT surface behave like the green curve in Figure 14(a).

Figure 14: Figure 14(a) shows the standard Penrose diagram of the eternal black hole in AdS/CFT holography. Figure 14(b) shows the relative position of the swing surface with respect to the boundary of the quotient manifold, the Poincaré vacuum.
The flat holography case has similar things to some extent, but is more subtle due to the fact that the future null infinity \(\mathscr{I}^{+}\) and the past one \(\mathscr{I}^{-}\) are timelike separated, which may communicate or contain same but modulated information. ## Appendix A Reflected Entropy In this appendix, we give a complete derivation about the reflected entropy \(S_{Ref}^{\text{BMS}}\) of two disjoint intervals in BMS field theory. For the steps to be self-contained, we have to show some similar calculations with those in [65]. The new thing that is not shown in [65] is the step of deriving three point coefficient in the OPE of twist operators (A.5), which affects the final results up to a constant [68]. See [72] for the notations, figures and detailed introduction of reflected entropy. ### The BMS Semi-classical Block The function \(\mathcal{F}(x,t)\) in (2.14) can be decomposed into BMS conformal blocks \(\mathcal{F}_{\alpha}(x,t)\), \[\mathcal{F}(x,t)=\sum_{\alpha}C_{12\alpha}C_{34}^{\alpha}\mathcal{F}_{\alpha} (x,t)\] (A.1) In the semi-classical limit a closed form for \(\mathcal{F}_{\alpha}(x,t)\), which is also consistent with the ultra-relativistic limit of the Virasoro block, is obtained by the null vectors of the BMS algebra as well as the monodromy method [59], \[\mathcal{F}_{\alpha}(x,t) \sim\left(\frac{x^{\beta-1}}{(1-x^{\beta})^{2}}\right)^{\Delta_{ L}}e^{t\left(\frac{\beta\frac{\beta}{2}}{x(x^{\beta-1})}\xi_{\alpha}-\frac{x^{ \beta}(\beta+1)+\beta-1}{x(x^{\beta}-1)}\xi_{L}\right)}\] \[\times\left(\frac{1-x^{\frac{\beta}{2}}}{1+x^{\frac{\beta}{2}}} \right)^{\Delta_{\alpha}}e^{\Delta_{H}\log x\left(\frac{2x^{\frac{\beta}{2}}}{ \beta(x^{\beta}-1)}\xi_{\alpha}+\frac{2(x^{\beta}+1)}{\beta(1-x^{\beta})}\xi_ {L}\right)}\] (A.2) where \(\beta=\sqrt{1-24\frac{\xi_{H}}{c_{M}}}\), \(\Delta_{L,H},\xi_{L,H}\) are the conformal weight and boost charge (2.9) related to external light/heavy operators and \(\Delta_{\alpha},\xi_{\alpha}\) are the ones related to internal operators in the OPE expansion. This Heavy-Heavy-Light-Light correlator need heavy operators scale freely with the central charge \(c_{M}\) and light operators obey \(1\ll\xi_{L},\Delta_{L}\ll c_{M}\). Analytically continuing the quantum numbers of heavy operators to the light one, we get the BMS block \(\mathcal{F}_{\alpha}\) of four same dimension light operators \[\log\mathcal{F}_{\alpha}(x,t) \sim\underbrace{\frac{1}{(-1+x)\sqrt{x}}\xi_{\alpha}+\log{(\frac {1-\sqrt{x}}{1+\sqrt{x}})\Delta_{\alpha}}}_{\text{\small{will contr. to the RE}}}\] \[+\underbrace{\frac{2}{1-x}\xi_{L}+[2\log{(1-x)}+\big{(}\frac{2(1 +x)}{1-x}\xi_{L}+\frac{2\sqrt{x}}{-1+x}\big{)}]\Delta_{L}}_{\text{\small{will cancel by normalization}}}\] (A.3) ### OPE coefficient and Twist operator dimension We assume the primary twist operators in orbifold BMSFT all belong to the singlet version of the highest weight representation of BMS algebra [69]. 
Then the twist operators \(\sigma_{g_{A}}\), \(\sigma_{g_{B}}\) and \(\sigma_{g_{A}g_{B}^{-1}}\) have the following dimensions, * \(\sigma_{g_{A}}\), \(\sigma_{g_{B}}\), \(\sigma_{g_{A}^{-1}}\), \(\sigma_{g_{B}^{-1}}\): \[\xi_{g_{A,B}}=n\xi_{m}=n\frac{c_{M}}{24}(m-\frac{1}{m}),\quad\Delta_{g_{A,B}}= n\Delta_{m}=n\frac{c_{L}}{24}(m-\frac{1}{m})\] (100) * \(\sigma_{g_{A}g_{B}^{-1}}\): \[\xi_{g_{A}g_{B}^{-1}}=2\xi_{n}=2\frac{c_{M}}{24}(n-\frac{1}{n}),\quad\Delta_{g_ {A}g_{B}^{-1}}=2\Delta_{n}=2\frac{c_{M}}{24}(n-\frac{1}{n})\] (101) For the three point OPE coefficients important for the final results of reflected entropy, we claim that \[\sigma_{g_{A}^{-1}}\sigma_{g_{B}}=C^{\text{BMS}}_{nm}\sigma_{g_{B}g_{A}^{-1}}+...\quad,\quad C^{\text{BMS}}_{nm}=(2m)^{-2\Delta_{n}} \tag{102}\] This can be proved by using the same method in CFT [11] and WCFT [72]. We show the main steps here: \[\langle\sigma_{g_{A}^{-1}}(x_{1},y_{1})\sigma_{g_{B}}(x_{2},y_{2} )\sigma_{g_{A}g_{B}^{-1}}(x_{3},y_{3})\rangle_{\text{BMS}^{\otimes mn}(plane)}\] \[= e^{S_{L}(\phi)}\underbrace{\left|\partial f^{\prime}\right|_{f=s _{1}^{+}}^{-h_{n}^{L}}\left|\partial f^{\prime}\right|_{f=s_{2}^{+}}^{-h_{n}^ {L}}e^{-h_{n}^{M}\left(\frac{g^{\prime}+s_{1}^{-}f^{\prime\prime}}{f^{\prime }}\right|_{\left\{f=s_{1}^{+},g=s_{1}^{-}\right\}}+\frac{g^{\prime}+s_{2}^{-} f^{\prime\prime}}{f^{\prime}}\right|_{\left\{f=s_{1}^{+},g=s_{1}^{-}\right\}}} }_{(\mathcal{D}_{1})}\] \[\times\underbrace{\langle\sigma_{(\tau_{n}^{0})^{-1}}(s_{1}^{+}, s_{1}^{-})\sigma_{\tau_{n}^{m/2}}(s_{2}^{+},s_{2}^{-})\rangle_{\text{BMS}^{ \otimes n}(plane)}}_{(\mathcal{D}_{2})}\] \[=\left(\langle\sigma_{g_{A}^{-1}}(x_{1},y_{1})\sigma_{g_{B}}(x_{ 2},y_{2})\rangle_{\text{BMS}^{\otimes m}(plane)}\right)^{n}\Bigr{|}_{A=B}\times (\mathcal{D}_{1}\mathcal{D}_{2})\] \[= e^{-2\xi_{n}(\frac{y_{22}}{x_{22}}+\frac{y_{31}}{x_{31}}-\frac{ y_{21}}{x_{21}})-2n\xi_{m}\frac{y_{21}}{x_{21}}}(2m)^{-2\Delta_{n}}|x_{32}|^{-2 \Delta_{n}}|x_{31}|^{-2\Delta_{n}}|x_{12}|^{-2n\Delta_{m}+2\Delta_{n}} \tag{103}\] the first equality comes from a BMS symmetry transformation \[s^{+}=f=\frac{(x-x_{1})^{1/m}}{(x-x_{2})^{1/m}},\quad s^{-}=g=\frac{f}{m(x-x_ {1})(x-x_{2})}[tx_{12}-xt_{21}-x_{1}t_{2}+x_{2}t_{1}] \tag{104}\] which maps the \(mn\) replica sheets to a \(n\) replica sheets. The twist operators \(\sigma_{(\tau_{n}^{0})^{-1}}(s_{1}^{+},s_{1}^{-})\), \(\sigma_{\tau_{n}^{m/2}}(s_{2}^{+},s_{2}^{-})\) have quantum numbers \(\Delta_{n}\) and \(\xi_{n}\) due to their \(n\)-cyclic monodromy conditions getting from the above map (104), and the explicit values of \(s_{1,2}^{\pm}\) are \[s_{1}^{+}=-s_{2}^{+}=\frac{(x_{3}-x_{1})^{1/m}}{(x_{3}-x_{2})^{1/m}}\] \[s_{1}^{-}=-s_{2}^{-}=\frac{s_{1}^{+}}{m(x_{3}-x_{1})(x_{3}-x_{2} )}[t_{3}x_{12}-x_{3}t_{21}-x_{1}t_{2}+x_{2}t_{1}] \tag{105}\] From the result (103) we can directly see the OPE coefficient \(C^{\text{BMS}}_{nm}=(2m)^{-2\Delta_{n}}\) as claimed. ### Reflected entropy of vacuum and thermal state on the plane In the holographic BMS field theory we assume that the single block dominance work in the semi-classical limit, and the dominant BMS block in the block expansion of the four point function (A.1) is the one with lowest quantum dimensions. For \(t\)-channel OPE of twist operator \(\sigma_{g_{A}}\sigma_{g_{B}^{-1}}\), the dominant one is related to the primary twist operator \(\sigma_{g_{B}g_{A}^{-1}}\). 
By taking the Von-Neumann limit \(n,m\to 1\), the external twist operators \(\sigma_{g_{A,B}}\) and the internal one \(\sigma_{g_{B}g_{A}^{-1}}\) all become light operators, then (A.3) can be used to evaluate the reflected entropy, \[\Big{\langle}\sigma_{g_{A}}(x_{1},t_{1})\sigma_{g_{A}^{-1}}(x_{2}, t_{2})\sigma_{g_{B}}(x_{2},t_{2})\sigma_{g_{B}^{-1}}(x_{4},t_{4})\Big{\rangle}_{ \text{BMS}^{\otimes mn}}\] (A.10) \[=\frac{e^{-2\xi_{g_{A}}\frac{t_{12}}{x_{12}}-2\xi_{g_{B}}\frac{t_ {34}}{x_{34}^{2\Delta_{g_{B}}}}}}{x_{12}^{2\Delta_{g_{A}}}x_{34}^{2\Delta_{g_{ B}}}}\sum_{\alpha}C_{AB}^{2}\sigma_{\alpha}(mnc,\Delta_{i},\xi_{i},\Delta_{ \alpha},\xi_{\alpha},x,t)\] \[\approx\frac{e^{-2\xi_{g_{A}}\frac{t_{12}}{x_{12}}-2\xi_{g_{B}} \frac{t_{34}}{x_{12}^{2\Delta_{g_{A}}}x_{34}^{2\Delta_{g_{B}}}}}}{e^{\frac{t^{ \frac{2}{1-x}}\xi_{L}+[2\log{(1-x)}+\left(\frac{2(1+x)}{1-x_{2}}\xi_{L}+\frac{2 \sqrt{x}}{-1+x}\right)]\Delta_{L}}{\text{cancel out}}}}\] \[\times\underbrace{\left(C_{nm}^{\text{BMS}}\right)^{2}e^{t\frac{1 }{(1-x)\sqrt{x}}\xi_{\alpha}+\log{(\frac{1-\sqrt{x}}{1+\sqrt{x}})\Delta_{\alpha }}}\Big{|}_{\alpha=\sigma_{g_{B}g_{A}^{-1}}}}\] (A.11) after the cancellation of several factors explicitly shown in (A.11) between numerator and denominator in the evaluation of reflected entropy, the final result of vacuum state on the BMS plane turn out to be \[S_{Ref;vac}^{\text{BMS}} =\lim_{m,n\to 1}\frac{1}{1-n}\ln\frac{\Big{\langle}\sigma_{g_{A}}(x _{1})\sigma_{g_{A}^{-1}}(x_{2})\sigma_{g_{B}}(x_{3})\sigma_{g_{B}^{-1}}(x_{4}) \Big{\rangle}_{\text{BMS}^{\otimes mn}}}{\Big{(}\Big{\langle}\sigma_{g_{m}}(x _{1})\sigma_{g_{m}^{-1}}(x_{2})\sigma_{g_{m}}(x_{3})\sigma_{g_{m}^{-1}}(x_{4} )\Big{\rangle}_{\text{BMS}^{\otimes m}}\Big{)}^{n}}\] \[\approx\lim_{m,n\to 1}\{-2\ln C_{mn}^{\text{BMS}}+\frac{n+1}{n} \left(\frac{c_{M}}{12}\frac{t}{(1-x)\sqrt{x}}+\frac{c_{L}}{12}\log{(\frac{1+ \sqrt{x}}{1-\sqrt{x}})}\right)\}\] \[=\frac{c_{M}}{6}\frac{t}{(1-x)\sqrt{x}}+\frac{c_{L}}{6}\log{(\frac {1+\sqrt{x}}{1-\sqrt{x}})}\] (A.12) For the thermal state reflected entropy, we need the correlator of four point twist operators on the thermal cylinder using (2.11) and (2.20), \[\langle\sigma_{g_{A}}(u_{1},\phi_{1})\sigma_{g_{A}^{-1}}(u_{2}, \phi_{2})\sigma_{g_{B}}(u_{3},\phi_{3})\sigma_{g_{B}^{-1}}(u_{4},\phi_{4}) \rangle_{\text{BMS}^{\otimes mn}}^{cylinder}\] \[=e^{\frac{\xi_{g_{A}}\left(\beta_{u}\beta_{\phi}+2\pi\left(\beta _{u}\sum_{j=1}^{4}\phi_{j}+\beta_{\phi}\sum_{j=1}^{4}u_{j}\right)\right)}{ \beta_{x}^{2}}}\left(\left(\frac{2\pi}{\beta_{\phi}}\right)^{4}e^{\frac{2\pi \sum_{j=1}^{4}\phi_{j}}{\beta_{\phi}}}\right)^{\Delta_{g_{A}}}\] \[\times\langle\sigma_{g_{A}}(x_{1},y_{1})\sigma_{g_{A}^{-1}}(x_{2},y_{2})\sigma_{g_{B}}(x_{3},y_{3})\sigma_{g_{B}^{-1}}(x_{4},y_{4})\rangle_{ \text{BMS}^{\otimes mn}}^{plane}\] (A.13) The first line of (A.13) would again cancel out between enumerator and denominator, and the second line of (A.13) contributes to the final answer. Thus the thermal state reflected entropy can be obtained by the same formula (A.12), but with the cross ratios \(x,t\) getting from the map (21). 
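As a small sanity check of the limit taken above (an illustrative sketch rather than part of the derivation), one can verify with SymPy that \(-2\ln C^{\rm BMS}_{nm}\to 0\) and \((n+1)/n\to 2\) as \(n,m\to 1\), reproducing the \(c_{M}/6\) and \(c_{L}/6\) coefficients of (A.12); since the expression is regular at \(n=m=1\), direct substitution suffices.

```python
# Illustrative SymPy sanity check (not part of the original derivation) of the
# Von Neumann limit in (A.12): with C_nm = (2m)^(-2*Delta_n) and
# Delta_n = c_L/24*(n - 1/n), the term -2*log(C_nm) vanishes and (n+1)/n -> 2
# as n, m -> 1. The expression is regular there, so direct substitution suffices.
import sympy as sp

n, m, c_L, c_M, x, t = sp.symbols('n m c_L c_M x t', positive=True)

Delta_n = c_L / 24 * (n - 1 / n)
C_nm = (2 * m) ** (-2 * Delta_n)
block = (c_M / 12) * t / ((1 - x) * sp.sqrt(x)) \
        + (c_L / 12) * sp.log((1 + sp.sqrt(x)) / (1 - sp.sqrt(x)))

expr = -2 * sp.log(C_nm) + (n + 1) / n * block
print(sp.simplify(expr.subs({n: 1, m: 1})))
# -> c_M*t/(6*(1 - x)*sqrt(x)) + c_L*log((sqrt(x) + 1)/(1 - sqrt(x)))/6
```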
Finally we have \[S^{BMS}_{ref;thermal} =\frac{c_{L}}{6}\log\left(\frac{1+\sqrt{x}}{1-\sqrt{x}}\right)+ \frac{c_{M}t/6}{(1-x)\sqrt{x}},\] (A.14) \[x =\frac{x_{12}x_{34}}{x_{13}x_{24}}\bigg{|}_{x_{i}\mapsto e^{\frac {2\pi\phi_{i}}{\phi}}},\] (A.15) \[\frac{t}{x} =\left(\frac{t_{12}}{x_{12}}+\frac{t_{34}}{x_{34}}-\frac{t_{13}}{ x_{13}}-\frac{t_{24}}{x_{24}}\right)\bigg{|}_{\{x_{i}\mapsto e^{\frac{2\pi \phi_{i}}{\phi_{\phi}},t_{i}\mapsto-\frac{2\pi}{\beta_{\phi}}}e^{\frac{2\pi \phi_{i}}{\beta_{\phi}}}\left(\phi_{i}\frac{\partial u}{\partial_{\phi}}+u_{i} \right)\}}\] ## Appendix B \(M>0\) Zero Mode Background In this appendix we work in the \(M>0\) zero mode background. After a similar analysis of the bifurcating horizon behavior and the entanglement phase transition in this solution, we will see the conclusions get in the main context from the analysis of the Poincare vacuum are universal. ### Bifurcating horizon The \(M>0\) zero mode backgrounds which can be regarded as the flat limit of BTZ black hole of asymptotically AdS\({}_{3}\) spacetime [56; 57] have metric in Bondi coordinates, \[ds^{2}=Mdu^{2}-2dudr+Jdud\phi+r^{2}d\phi^{2},\quad u\in(-\infty,\infty),\;r \in(0,\infty),\;\phi\in(0,2\pi)\] (B.1) For the general interval \(\mathcal{A}\) with \(\partial\mathcal{A}=\{(u_{l},\phi_{l}),\;(u_{r},\phi_{r})\}\) The length \(L(r_{l},r_{r})\) of spacelike geodesic between the two null ropes \(r_{l},r_{r}\) satisfying \[\gamma_{l,r}:\quad u=u_{l,r},\;\;\phi=\phi_{l,r}\] (B.2) is not illuminating, so we choose to not present it. The extreme of \(L(r_{l},r_{r})\) can be found at \[r_{l}=-r_{r}=\frac{Mu_{21}+\sqrt{M}r_{c}\phi_{21}+r_{c}\sinh\left(\sqrt{M} \phi_{12}\right)}{\cosh\left(\sqrt{M}\phi_{12}\right)-1}\] (B.3) where \(r_{c}\) is the Cauchy horizon (3.4) and \(u_{ij}=u_{i}-u_{j},\phi_{ij}=\phi_{i}-\phi_{j}\). The parameter equations for the Killing horizon \(N_{l,r}\) of the bifurcating surface \(\gamma_{\xi}\) are \[t =t_{l}+\kappa(t_{l}-t_{r})+\lambda\] (B.4) \[x =x_{l}+\kappa(x_{l}-x_{r})+\tanh\sqrt{M}\phi_{l,r}\lambda\] \[y =y_{l}+\kappa(y_{l}-y_{r})+\cosh\sqrt{M}^{-1}\phi_{l,r}\lambda\] where \((t_{l,r},x_{l,r},y_{l,r})\) are the endpoints of the bench \(\gamma\) that can be obtained by using (B.3) and (3.9). When \(\lambda=0\), (B.4) would reduce to the parametrization of the bifurcating surface \(\gamma_{\xi}\) for \(\kappa\in(-\infty,\infty)\) and the bench \(\gamma\) for \(\kappa\in(0,1)\). When \(\lambda>0\), (B.4) denote two future bifurcating horizons \(N_{l,r}\) with two null ropes \(\gamma_{l,r}\) sitting on; while for \(\lambda<0\) these equations parametrize the two corresponding past horizons. Similarly these four bifurcating horizons, which together decompose the global flat\({}_{3}\) into four non-intersecting causal regions, would converge to four single points on the future/past null infinity separately in the Penrose diagram. 
Mathematically, we can map boundary in Bondi coordinates \((u,r\rightarrow\infty,\phi)\) to boundary in Penrose coordinates \((U,V,\Phi)\) using (4.9): \[\sqrt{x^{2}+y^{2}}|_{r\rightarrow\pm\infty}=\frac{\cosh{(\sqrt{M} \phi)}}{\sqrt{M}}|r|-\frac{\sqrt{M}}{\cosh{(\sqrt{M}\phi)}}\left(u+\frac{J \phi}{2M}+\frac{J\cosh{(\sqrt{M}\phi)}\sinh{(\sqrt{M}\phi)}}{2M^{3/2}}\right) \frac{|r|}{r}\] \[\text{when }r\rightarrow\infty,\quad U=\arctan\left(\left(u+\frac{J \phi}{2M}\right)\frac{\sqrt{M}}{\cosh{(\sqrt{M}\phi)}}\right)\text{, }V=\frac{\pi}{2},\;\Phi=\arccos{\left(\frac{\sinh{\phi}}{\cosh{\phi}}\right)}\] (B.5) ### Entanglement phase transition We consider thermal state entanglement phase transition of BMS field theory using formula (2.22) with \(c_{L}=0\) for the same configurations in Figure 12. The boundary intervals are \[\partial A=\{(u_{1},\phi_{1}),(u_{2},\phi_{2})\},\quad\partial B=\{(u_{3}, \phi_{3}),(u_{4},\phi_{4})\}\] (B.6) Similarly we can define \[S_{1} =\sqrt{M}\Big{[}\Big{(}u_{23}+\frac{J\phi_{23}}{2M}\Big{)}\coth \Big{(}\frac{\sqrt{M}\phi_{23}}{2}\Big{)}+\Big{(}u_{14}+\frac{J\phi_{14}}{2M} \Big{)}\coth\Big{(}\frac{\sqrt{M}\phi_{14}}{2}\Big{)}\Big{]}-\frac{2J}{M}\] (B.7) \[S_{2} =\sqrt{M}\Big{[}\Big{(}u_{12}+\frac{J\phi_{12}}{2M}\Big{)}\coth \Big{(}\frac{\sqrt{M}\phi_{12}}{2}\Big{)}+\Big{(}u_{34}+\frac{J\phi_{34}}{2M} \Big{)}\coth\Big{(}\frac{\sqrt{M}\phi_{34}}{2}\Big{)}\Big{]}-\frac{2J}{M}\] \[S_{3} =\sqrt{M}\Big{[}\Big{(}u_{13}+\frac{J\phi_{13}}{2M}\Big{)}\coth \Big{(}\frac{\sqrt{M}\phi_{13}}{2}\Big{)}+\Big{(}u_{24}+\frac{J\phi_{24}}{2M} \Big{)}\coth\Big{(}\frac{\sqrt{M}\phi_{24}}{2}\Big{)}\Big{]}-\frac{2J}{M}\] The difference between them are \[S_{1}-S_{2}=\frac{u}{\phi(\phi-1)},\quad S_{2}-S_{3}=\frac{u}{\phi}\quad S_{1 }-S_{3}=\frac{u}{\phi-1}\] (B.8) where \(u,\phi\) are finite temperature cross ratios. Therefore we get * \(S_{1}\) is the minimal one when: \[u<0,\quad\phi>1\quad\text{or}\quad u>0,\quad 0<\phi<1\] (B.9) * \(S_{2}\) is the minimal one when: \[u<0,\quad 0<\phi<1\] (B.10) * \(S_{3}\) is the minimal one when: \[u<0,\quad\phi<0\quad\text{or}\quad u>0,\quad\phi>1\] (B.11) Taking symmetric configurations, \[u_{a}=-u_{1}=u_{2},\;\;\phi_{a}=-\phi_{1}=\phi_{2},\;\;u_{b}=-u_{3}=u_{4},\;\; \phi_{b}=-\phi_{3}=\phi_{4}\] (B.12) with \(\phi_{b}>\phi_{a}\). This configuration ensures that \(0<\phi<1\), which means that \(S_{3}\) can never be the minimal one. Fixing \(\phi_{a},\phi_{b}\) and \(u_{a}\), there is a critical value \[u_{c}=\frac{(2Mu_{a}+J\phi_{a})\sinh(\sqrt{M}\phi_{b})\sinh^{-1}(\sqrt{M}\phi_{a })-J\phi_{b}}{2M} \tag{113}\] for \(u_{b}\) such that \(S_{1}=S_{2}\). When \(u_{b}<u_{c}\), \(S_{2}\) is the minimal one and vice verse. we can check the difference between the slope of the critical interval \(B\) with \(u_{b}=u_{c}\) and that of interval \(A\) is \[\frac{u_{c}}{\phi_{b}}-\frac{u_{a}}{\phi_{a}}=\frac{(2Mu_{a}+J\phi_{a})(\phi_{a }\sinh(\sqrt{M}\phi_{b})-\phi_{b}\sinh(\sqrt{M}\phi_{a}))}{2M\phi_{a}\phi_{b} \sinh(\sqrt{M}\phi_{a})} \tag{114}\] Since \(\phi_{b}>\phi_{a}\), we always have \(\phi_{b}\sinh(\sqrt{M}\phi_{a})-\phi_{a}\sinh(\sqrt{M}\phi_{b})<0\). Therefore, the sign of (114) only depends on the sign of \(2Mu_{a}+J\phi_{a}\). ###### Acknowledgements. We thank Bin Chen, Wei Song, Qiang Wen, Boyang Yu and Luis Apolo for useful discussions. The work is in part supported by NSFC Grant No. 12275004, 11735001.
2309.15033
Unveiling the Phase Diagram and Reaction Paths of the Active Model B with the Deep Minimum Action Method
Nonequilibrium phase transitions are notably difficult to analyze because their mechanisms depend on the system's dynamics in a complex way due to the lack of time-reversal symmetry. To complicate matters, the system's steady-state distribution is unknown in general. Here, the phase diagram of the active Model B is computed with a deep neural network implementation of the geometric minimum action method (gMAM). This approach unveils the unconventional reaction paths and nucleation mechanism in dimensions 1, 2 and 3, by which the system switches between the homogeneous and inhomogeneous phases in the binodal region. Our main findings are: (i) the mean time to escape the phase-separated state is (exponentially) extensive in the system size $L$, but it increases non-monotonically with $L$ in dimension 1; (ii) the mean time to escape the homogeneous state is always finite, in line with the recent work of Cates and Nardini~[Phys. Rev. Lett. 130, 098203]; (iii) at fixed $L$, the active term increases the stability of the homogeneous phase, eventually destroying the phase separation in the binodal for large but finite systems. Our results are particularly relevant for active matter systems in which the number of constituents hardly goes beyond $10^7$ and where finite-size effects matter.
Ruben Zakine, Eric Simonnet, Eric Vanden-Eijnden
2023-09-26T16:02:35Z
http://arxiv.org/abs/2309.15033v2
# Unveiling the Phase Diagram and Reaction Paths of the Active Model B ###### Abstract Nonequilibrium phase transitions are notably difficult to analyze because their mechanisms depend on the system's dynamics in a complex way due to the lack of time-reversal symmetry. To complicate matters, the system's steady-state distribution is unknown in general. Here, the phase diagram of the active Model B is computed with a deep neural network implementation of the geometric minimum action method (gMAM). This approach unveils the unconventional reaction paths and nucleation mechanism by which the system switches between the homogeneous and inhomogeneous phases in the binodal region. Our main findings are: (i) the mean time to escape the phase-separated state is (exponentially) extensive in the system size \(L\), but it increases _non-monotonically_ with \(L\); (ii) the mean time to escape the homogeneous state is always finite, in line with the recent work of Cates and Nardini [1]; (iii) at fixed \(L\), the active term increases the stability of the homogeneous phase, eventually destroying the phase separation in the binodal for large but finite systems. Our results are particularly relevant for active matter systems in which the number of constituents hardly goes beyond \(10^{7}\) and where finite-size effects matter. _Introduction-_ Activated processes are ubiquitous in nature but intrinsically difficult to probe in simulations since they require the sampling of rare events [2; 3; 4; 5]. When a first-order phase transition occurs, a nucleation event is usually required for the system to reach its stable phase [6; 7; 8; 1]. In equilibrium systems, we can exploit the property of time-reversal symmetry (TRS) and the knowledge of their equilibrium distribution to derive a free energy from which we can infer both the thermodynamic stability of each phase, and the reaction paths that are followed by the system during activation [9; 10; 11]. In contrast, the breakdown of TRS in nonequilibrium systems means that we no longer have access to their free energy, and the mechanism of activated processes must be understood from their dynamics rather than their unknown steady-state distribution [12; 13; 14; 15; 16; 17; 18; 19]. Mapping their phase diagram therefore poses a persistent challenge. In this letter, we consider this problem in the context of the active Model B, a natural nonequilibrium extension of the Cahn-Hilliard dynamics with a nonlinear growth term [20; 21] that breaks TRS. This generic model has attracted a lot of attention in the last decade [22; 23; 24; 25], and can be used, for instance, as an effective description of the dynamics of active particles that are known to undergo a motility-induced phase separation (MIPS) [26; 27; 28]. Here, we map the phase diagram of the active Model B and calculate the pathways by which first-order phase transitions occur in this system. Our results indicate that these transitions involve nucleation events that are markedly different from their equilibrium counterpart, and are shaped by the interplay between the noise and nongradient terms in the stochastic dynamics of the system. In large but finite systems, we also show that the active term can decrease the probability to observe the nucleation of the phase-separated state and help the reverse transition from the phase-separated phase to the homogeneous state. 
To obtain these results, we compute reaction paths using an implementation of the geometric Minimum Action Method (gMAM) [29; 30; 31] that relies on Physics-Informed Neural Networks (PINNs) [32; 33]; this neural implementation, referred to as _deep gMAM_[34], is interesting in its own right as it is transferable to study first-order phase transitions in other nonequilibrium systems. Here we also cross-check some of the results of the deep gMAM algorithm using the traditional gMAM method as benchmark. _Problem setting-_ The active Model B (AMB) describes the stochastic dynamics of a conserved scalar field \(\phi(x,t)\), typically interpreted as the local (relative) density of particles or the local composition of a mixture, and can be written as the divergence of a noisy flux [1; 22; 23; 35] \[\partial_{t}\phi =\nabla\cdot(M\nabla\mu+\xi), \tag{1}\] \[\mu([\phi],x) =\frac{\delta\mathcal{F}[\phi]}{\delta\phi(x)}+\lambda|\nabla\phi (x)|^{2}, \tag{2}\] where \(\mathcal{F}[\phi]\) is a Ginzburg-Landau free energy, \(M\) is the mobility operator, and \(\xi\) is a spatio-temporal white-noise, i.e. a Gaussian process with mean zero and covariance \(\langle\xi(x,t)\xi(x^{\prime},t^{\prime})\rangle=2\epsilon M\delta(x-x^{ \prime})\delta(t-t^{\prime})\) with \(\epsilon\) controlling the amplitude of the fluctuations. We will investigate Eq. (1) in \(d=1\) and \(d=2\) dimensions, assuming periodic boundary condition of the domain \(\Omega=[0,L]^{d}\) with lateral size \(L\). For simplicity, we will focus on the situations where \(M=\mathbbm{1}\) and \(\mathcal{F}[\phi]=\int_{\Omega}[\frac{1}{2}\nu(\nabla\phi)^{2}+f(\phi)]dz\), where \(\nu>0\) and \(f(\phi)\) is a double-well potential. With this choice of free energy there exists a region in the phase diagram where a homogeneous state, denoted \(\phi_{H}\), will coexist with a phase-separated state (or inhomogeneous state), denoted \(\phi_{I}\), see Fig. 1(a). These states correspond to the two (locally) stable fixed points of the noiseless version of Eq. (1), i.e. the solution to \(\nabla\cdot(M\nabla\mu)=0\) with a prescribed value of the spatial average \(\phi_{0}\) of \(\phi\) in the domain. When \(\lambda=0\) the chemical potential \(\mu\) is the functional derivative of a free energy \(\mathcal{F}[\phi]\), and the dynamics is in detailed balance with respect to the Gibbs-Boltzmann measure, and the stationary probability of observing a configuration \(\phi(x)\) is thus given by \(P_{s}[\phi]\propto\exp(-\mathcal{F}[\phi]/\epsilon)\). In this case, the relative stability of the phases associated with \(\phi_{H}\) and \(\phi_{I}\) can be inferred from the values of \(\mathcal{F}[\phi_{H}]\) and \(\mathcal{F}[\phi_{I}]\), and transitions between these states involve a reaction path that goes through a saddle point configuration on \(\mathcal{F}[\phi]\). In contrast, when \(\lambda\neq 0\), TRS is broken because \(\mu\) does not satisfy the Schwarz condition on its functional derivative [36; 37; 25], the stationary distribution of the system is no longer available. As a consequence, the functional \(\mathcal{F}[\phi]\) brings no information on the relative stability of the phases associated with \(\phi_{H}\) and \(\phi_{I}\). Rather, a characterization of the relative stability of these phases must rely on the dynamics. 
_Phase transitions and quasipotential_- We will resort to Freidlin-Wentzell large-deviation theory (LDT) to calculate the rates of the transitions from \(\phi_{H}\) to \(\phi_{I}\) and vice-versa, as well as their most likely paths [12], in the limit as \(\epsilon\to 0\) (i.e. when the system is either in \(\phi_{H}\) or in \(\phi_{I}\)with probability one, and proper phases can be defined). Denoting by \(k_{I,H}\) the rate to go from \(\phi_{I}\) to \(\phi_{H}\), it is asymptotically given by \(k_{I,H}\asymp\exp{(-V_{\phi_{I}}(\phi_{H})/\epsilon)}\), where \(V_{\phi_{I}}(\phi_{H})\) is the so-called quasipotential of \(\phi_{H}\) relative to \(\phi_{I}\) that plays a role similar to a potential barrier in Arrhenius' law; a similar expression holds for \(k_{H,I}\), the rate to go from \(\phi_{H}\) to \(\phi_{I}\). The relative stability of the two phases can then be assessed by the difference of the logarithm of ratio of these escape rates \[\epsilon\log{k_{I,H}}-\epsilon\log{k_{H,I}}\asymp-V_{\phi_{I}}(\phi_{H})+V_{ \phi_{H}}(\phi_{I}), \tag{3}\] which is positive when \(\phi_{H}\) is the preferred phase, and negative when \(\phi_{I}\) is. Note that the values of the quasipotential \(V_{\phi_{I}}(\phi_{H})\) and \(V_{\phi_{H}}(\phi_{I})\) depend on the control parameters in the system, such as \(\lambda\) and \(\phi_{0}\), and so the sign of their difference can switch: when this happens, it is the signature of a first-order phase transition. This offers us a route to analyze these transitions, as advocated in [38], by computing these quasipotentials for various values of \(\lambda\) and \(\phi_{0}\), using the fact that they are the minima of the action functional \(S_{T}[\phi]\) defined as \[S_{T}[\phi]=\int_{0}^{T}\int_{\Omega}|\nabla^{-1}(\partial_{t}\phi-\nabla^{2} \mu)|^{2}dxdt \tag{4}\] where \(\Omega\) denotes the domain. The action (4) must be minimized with respect to both \(T\) and \(\phi\), subject to \(\phi(t=0,x)=\phi_{H}\) and \(\phi(t=T,x)=\phi_{I}\) to get \(V_{\phi_{H}}(\phi_{I})\), and \(\phi(t=0,x)=\phi_{I}\) and \(\phi(t=T,x)=\phi_{H}\) to get \(V_{\phi_{I}}(\phi_{H})\). _Deep gMAM_- To minimize (4) we use the PINN scheme introduced in [34]. In a nutshell, this approach amounts to approximating the field \(\phi(x,t)\) within a rich parametric class, such as a deep neural network, and viewing (4) as an objective (or loss, in the terminology of machine learning) for the parameters in the representation. The boundary conditions in space and time are accounted for by adding suitable pieces to (4), and the parameter optimization is performed using a standard stochastic gradient descent (SGD) algorithm such as ADAM on this compounded loss. This require evaluating the loss, which is done using space-time collocation points that are drawn randomly at each step of SGD (which amounts to performing online learning). The deep gMAM algorithm is simple to implement, does not require any gridding of space or time, and gives an analytical approximation of \(\phi(t,x)\) everywhere in the spatio-temporal domain. Here the results of the deep gMAM algorithm in \(d=1\) were cross-checked against those obtained using a classical implementation of gMAM, which requires discretizing the field in space and time, and is somewhat more delicate to implement. For more details on both the deep and the classic gMAM algorithms, in particular how to handle the optimization on \(T\) by reparameterizing the solution using arc-length instead of physical time, see the Supplemental Material (SM). 
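To make the minimization target concrete, the sketch below shows one rough way to evaluate the discretized action (4) for a candidate path of the one-dimensional dynamics (Eq. (5) below) with periodic boundary conditions; the inverse gradient is applied mode-by-mode in Fourier space. This is an illustration written for this text (the function name, grid conventions, and the double-well \(f(\phi)=-\phi^{2}/2+\phi^{4}/4\) matching Eq. (5) are our assumptions), not the authors' code.

```python
# Rough sketch (our own illustration, not the authors' code) of how the action
# in Eq. (4) can be evaluated for a discretized 1d path phi[t_i, x_j] with
# periodic boundary conditions: the chemical potential uses the double-well
# f(phi) = -phi**2/2 + phi**4/4 that reproduces Eq. (5) below, and the inverse
# gradient acts in Fourier space (the zero mode of the residual vanishes by
# mass conservation). Grid sizes and names are illustrative assumptions.
import numpy as np

def action_1d(phi, dt, L, lam):
    """phi: array of shape (n_t, n_x), uniform space-time grid, periodic in x."""
    n_t, n_x = phi.shape
    dx = L / n_x
    k = 2 * np.pi * np.fft.fftfreq(n_x, d=dx)

    def dxn(f, p):   # p-th spatial derivative, spectral
        return np.real(np.fft.ifft((1j * k) ** p * np.fft.fft(f, axis=-1), axis=-1))

    mu = -dxn(phi, 2) - phi + phi**3 + lam * dxn(phi, 1) ** 2
    resid = np.gradient(phi, dt, axis=0) - dxn(mu, 2)       # d_t phi - laplacian(mu)

    rhat = np.fft.fft(resid, axis=-1)
    rhat[:, 0] = 0.0                                         # drop the zero mode
    inv_grad = np.fft.ifft(rhat / np.where(k == 0, 1.0, 1j * k), axis=-1)
    return np.sum(np.abs(inv_grad) ** 2) * dx * dt           # normalization of Eq. (4)
```

Feeding, for instance, a linear interpolation between \(\phi_{H}\) and a candidate \(\phi_{I}\) into `action_1d` gives an upper bound that a minimization routine can then improve.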
_Phase diagram in 1d_- We focus first on the one-dimensional system, whose dynamics reads \[\partial_{t}\phi=-\partial_{x}^{2}[\partial_{x}^{2}\phi+\phi-\phi^{3}-\lambda( \partial_{x}\phi)^{2}]+\partial_{x}\xi, \tag{5}\] with \(\langle\xi(x,t)\xi(x^{\prime},t^{\prime})\rangle=2\epsilon\delta(t-t^{\prime}) \delta(x-x^{\prime})\). Space has been rescaled such that all lengths are given in units of \(\sqrt{\nu}\). We consider a system of size \(L\) and we take periodic boundary conditions. The relevant parameters are thus \(L\), the total mass \(\phi_{0}\equiv L^{-1}\int_{0}^{L}\phi\;dx\), and the activity level \(\lambda\). The constant density solution of Eq. (5) is the homogeneous state \(\phi_{H}\), and since the mass \(\phi_{0}\) is conserved, we have \(\phi_{H}=\phi_{0}\). In the following, using the fact that Eq. (5) is invariant under \((\lambda,\phi)\rightarrow(-\lambda,-\phi)\), we restrict the study to the region \(\phi_{0}>0\). The homogeneous state \(\phi_{H}\) is always a stable fixed point of the noiseless dynamics for \(\phi_{0}>\phi_{\rm sp^{+}}^{\lambda}\), where \(\phi_{\rm sp^{+}}^{\lambda}=1/\sqrt{3}\) is the frontier of the spinodal in the space \((\lambda,\phi_{0})\) for \(\phi_{0}>0\). We are interested in the region where \(\phi_{H}\) competes with the inhomogeneous state \(\phi_{I}\). In the infinite system size limit, this region lies between the spinodal \(\phi_{\rm sp^{+}}^{\lambda}\) (red line in Fig. 1(b)) and the binodal curve \(\phi_{\rm bi^{+}}^{\lambda}\) (black line in Fig. 1(b-c)) that yields the bulk densities of each phase when the system undergoes a phase separation.

Figure 1: (a) Three remarkable configurations in dimension \(d=1\). The solid line is the inhomogeneous state \(\phi_{I}\) (stable). The dashed line indicates the (unstable) critical state \(\phi_{c,1}\). The grey line is a configuration of the field on the nonequilibrium reaction path from \(\phi_{I}\) to the homogeneous state \(\phi_{H}\) (not shown). Parameters: \(\phi_{0}=0.65\), \(\lambda=2\), and \(L=120\). (b) Phase diagram of active Model B in parameter space \((\lambda,\phi_{0})\) in \(d=1\) dimension. Panel (b) shows the binodal (black line) and the spinodal (red line) that were already computed in [22]. In finite size systems, the bistable region does not fully span between the spinodal and the binodal but stops at the blue line (here plotted for \(L=60\)). The states \(\phi_{H}\) and \(\phi_{I}\) are both stable in the shaded region. Panel (c) focuses on the bistable region. The purple dashed line pinpoints the first-order (f.-o.) transition between \(\phi_{H}\) and \(\phi_{I}\). On this line, one has \(V_{\phi_{H}}(\phi_{I})=V_{\phi_{I}}(\phi_{H})\). In region \(\mathcal{H}\), above the f.-o. transition, \(\phi_{H}\) is thermodynamically preferred, while below the f.-o. transition, in regions \(\mathcal{I}_{1}\), \(\mathcal{I}_{2}\), \(\mathcal{I}_{3}\), the inhomogeneous state \(\phi_{I}\) is preferred. The index \(q\) in \(\mathcal{I}_{q}\) refers to the number of bumps that appear along the reaction path from \(\phi_{I}\) to \(\phi_{H}\). Note that the region \(\mathcal{I}_{3}\) may display asymmetric paths with an action slightly smaller than their symmetric versions.
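As a rough illustration of the dynamics in Eq. (5), a naive explicit Euler-Maruyama integration with centered finite differences is sketched below. The parameter values are arbitrary small-scale choices made here, this is not the scheme used by the authors, and a semi-implicit or pseudo-spectral integrator would be more robust for the stiff fourth-order term.

```python
# Naive explicit Euler-Maruyama integration of Eq. (5) with centered finite
# differences and periodic boundary conditions (an illustration written for
# this text, not the authors' scheme; all parameter values are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
L, n_x, dt, eps, lam, phi0 = 60.0, 128, 5e-4, 0.05, 2.0, 0.65
dx = L / n_x

def ddx(f):   # centered first derivative, periodic
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):   # centered second derivative, periodic
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

phi = phi0 + 0.01 * rng.standard_normal(n_x)
phi += phi0 - phi.mean()                       # enforce the prescribed mean density

for step in range(100_000):
    mu = -lap(phi) - phi + phi**3 + lam * ddx(phi)**2        # chemical potential
    xi = np.sqrt(2 * eps / (dx * dt)) * rng.standard_normal(n_x)
    phi = phi + dt * (lap(mu) + ddx(xi))                     # conserved dynamics + conserved noise
```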
The binodal curve \(\phi_{\rm bi^{+}}^{\lambda}\) has been obtained in a series of works [22; 27], in which it was also shown that the (active) term \(\lambda(\partial_{x}\phi)^{2}\), though seemingly being dynamically relevant only close to interfaces where strong gradients exist, has in fact a deep and non-local impact on the bulk of each phase. However, the stationary measure associated to the stochastic dynamics is still unknown. In particular, in the binodal region, where both the homogeneous state and the phase-separated state are metastable, the thermodynamically preferred phase is not necessarily the phase-separated state. We will denote by \(\phi_{\rm f.o.}^{\lambda}\) the transition density indicating the change of thermodynamic stability of the two competing phases. Naturally we have \(\phi_{\rm sp^{+}}^{\lambda}\leq\phi_{\rm f.o.}^{\lambda}\leq\phi_{\rm bi^{+}} ^{\lambda}\). The gMAM algorithm will eventually allow for inferring \(\phi_{\rm f.o.}^{\lambda}\), by providing insights on the nucleation paths between the two states. First, let us recall that for large but finite systems, the phase-separated state cannot be the preferred phase if \(\phi_{0}\) is taken too close to the binodal density \(\phi_{\rm bi^{+}}^{\lambda}\). For instance, in equilibrium, (i.e. \(\lambda=0\)) the binodal densities are \(\phi_{\rm bi^{\pm}}^{\lambda=0}=\pm 1\) but a free energy argument that compares interfaces and bulk contributions shows that \(\phi_{\rm f.o.}^{\lambda=0}\) converges to 1 as \(\phi_{\rm f.o.}^{\lambda=0}\sim 1-(1/L)^{1/2}\). More than that, due to finite-size effects, the phase-separated state may not exist at all when there is not enough space in the domain to nucleate the phase separation. Hence, one should keep in mind that in a finite system, say of size \(L\), bistability can only be observed below some threshold density \(\phi_{\rm m^{+}_{\rm L}}^{\lambda=0}\leq\phi_{\rm bi^{+}}^{\lambda=0}\), represented as the blue curve in Fig. 1. Nonetheless, we have \(\phi_{\rm m^{+}_{\rm L}}^{\lambda}\rightarrow\phi_{\rm bi^{\pm}}^{\lambda}\) as \(L\rightarrow\infty\). To pinpoint the first-order phase transition (FOPT), we run the gMAM algorithm for \(\phi_{0}\in[\phi_{\rm sp}^{\lambda},\phi_{\rm m^{+}_{\rm L}}^{\lambda}]\) and \(\lambda\in[-10,10]\). Solving \(V_{\phi_{H}}(\phi_{I})=V_{\phi_{I}}(\phi_{H})\) identifies the FOPT line \(\phi_{\rm f.o.}^{\lambda}\), the purple dashed line in Fig. 1(c), which splits the diagram into two regions: for \(\phi_{0}<\phi_{\rm f.o.}^{\lambda}\) the thermodynamically stable state is the inhomogeneous one, \(\phi_{I}\), while for \(\phi_{0}>\phi_{\rm f.o.}^{\lambda}\) the homogeneous state \(\phi_{H}=\phi_{0}\) is preferred. Interestingly, we also find that the binodal and the FOPT have a reentrance direction along \(\lambda\) that does not exist in the system of infinite size (see Fig. 1(c)). _Reaction paths in 1d-_ We consider first the reaction path starting from the homogeneous state \(\phi_{H}\) and reaching \(\phi_{I}\), and we compute \(V_{\phi_{H}}(\phi_{I})\) for different values of \(\lambda\) and system size \(L\). Interestingly this path is very close to the heteroclinic orbit joining \(\phi_{H}\) to \(\phi_{I}\), and going through the critical (saddle) state \(\phi_{c,1}(x)\) that displays one density bump (see Fig. 1(a)) and possesses only one unstable direction. 
This behavior is very similar to the equilibrium nucleation scenario occurring in the Cahn-Hilliard dynamics, as already noted in [1]: to escape \(\phi_{H}\), the system only needs to nucleate a finite size droplet of the opposite phase. The action cost associated with this event is always finite, and the value of the action does not differ much from the one computed using the time-reversed relaxational path (a few percent difference, not shown). In contrast, the transition from \(\phi_{I}\) to \(\phi_{H}\) is more complex, and it had not been explored so far. For \(\phi_{0}>0\), as \(\lambda\) increases, the reaction path no longer follows the time-reversed relaxation path that goes through the saddle \(\phi_{c,1}\), but rather passes close to critical points with a large number of unstable directions, see Fig. 2(a) and 2(b), as may sometimes be observed in nonequilibrium systems [34; 38].

Figure 2: (a) Minimum action path joining \(\phi_{I}\) (at \(s=0\)) to \(\phi_{H}\) (at \(s=1\)) for \(\lambda=2\), \(\phi_{0}=0.65\) and \(L=44.7\) in \(d=1\) dimension. The vertical lines pinpoint the states where the norm of the flow is minimal (and almost zero), corresponding to the states close to the critical points. The corresponding critical points are displayed in panel (b). The state at the dashed line lies in the basin of attraction of the inhomogeneous state, while the state at the solid line lies on the separatrix between \(\phi_{I}\) and \(\phi_{H}\). The action from the dashed line to the solid line is strictly positive, while the action from the solid line to \(\phi_{H}\) is zero. (b) Pair of critical states displaying two bumps, for the same parameters as panel (a). If \(L=L_{2}^{*}\), these two states merge in a saddle-node bifurcation. (c) Threshold lengths \(L_{q}^{*}(\lambda)\) indicating the appearance of critical states with a given number \(q\) of bumps as a function of the system activity \(\lambda\). Above the critical \(q\)-line, pairs of critical states with \(q\) bumps are dynamically accessible.

In addition, nothing prevents the reaction paths from crossing the separatrix at non-critical points, a feature that cannot be observed in equilibrium, where reaction paths necessarily go through saddles of the potential. The critical points of higher Morse index can be obtained by solving the noiseless and stationary version of Eq. (5). Since the system is one-dimensional with periodic boundary conditions, the critical point solves \(\partial_{x}^{2}\phi_{c}=-\phi_{c}+\phi_{c}^{3}+\lambda(\partial_{x}\phi_{c})^{2}+\mu_{0}\), with \(\mu_{0}\) a constant, subject to the constraints of periodicity and \(L^{-1}\int_{0}^{L}\phi_{c}(x)dx=\phi_{0}\). A Newton mapping similar to the one introduced in [22] enables us to compute the critical points precisely using a symplectic scheme (see SM). For given \(\lambda\) and \(\phi_{0}\), pairs of critical points with \(q\) bumps (\(q\in\mathbb{N}^{*}\)) appear at critical values of the system size denoted \(L_{q}^{*}\), reported in Fig. 2(c). The saddle-node bifurcation at \(L_{q}^{*}\) occurs when the system size \(L\) is large enough to fit an additional bump on the density profile. At precisely the bifurcation length \(L_{q}^{*}\), one degenerate critical state \(\phi_{c,q}^{*}\) becomes accessible to the dynamics. For \(L>L_{q}^{*}\), the degeneracy is lifted and two distinct critical states with \(q\) bumps appear. Any of the states \(\phi_{c,q}\) can be decomposed into \(q\) identical bumps of size \(L/q\); a rough collocation alternative to the symplectic Newton scheme is sketched below.
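The grid-based alternative mentioned above can be set up as follows: collocate the critical-point equation on a periodic grid, append the mass constraint, and solve the resulting square system with a generic root finder. This is our own illustration (not the authors' symplectic Newton mapping); the values of \(\lambda,\phi_{0},L,q,n\) and the guess amplitude are arbitrary, the solver may fall back to the homogeneous solution, and the translational zero mode may require pinning (e.g. fixing a maximum at \(x=0\)).

```python
# Rough collocation sketch (our own illustration, not the authors' symplectic
# Newton mapping): discretize the critical-point equation on a periodic grid,
# append the mass constraint, and hand the square system to a root finder.
import numpy as np
from scipy.optimize import fsolve

lam, phi0, L, q, n = 2.0, 0.65, 44.7, 2, 256   # illustrative values (cf. Fig. 2)
dx = L / n
x = np.arange(n) * dx

def residual(unknowns):
    phi, mu0 = unknowns[:-1], unknowns[-1]
    dphi = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)
    d2phi = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
    eqs = d2phi + phi - phi**3 - lam * dphi**2 - mu0   # = 0 at a critical point
    return np.append(eqs, phi.mean() - phi0)           # fix the mean density

guess = np.append(phi0 + 0.3 * np.cos(2 * np.pi * q * x / L), 0.0)  # seed q bumps
sol = fsolve(residual, guess)
phi_c, mu0 = sol[:-1], sol[-1]
```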
Notably, the state with bumps of largest amplitude strictly lies in the basin of attraction of \(\phi_{I}\), while the other state lies on the separatrix between \(\phi_{I}\) and \(\phi_{H}\). We display an example of such a pair of critical states for \(q=2\) in Fig. 2(b). For all \(q\geq 2\), the critical states are of Morse index \(q\geq 2\). The case \(q=1\) is special as it corresponds to the apparition of the inhomogeneous metastable state \(\phi_{I}\), jointly with the critical state of Morse index 1, \(\phi_{c,1}(x)\). A sketch of the structure of the deterministic flow between critical points is given in the SM. In summary, while the path from \(\phi_{H}\) to \(\phi_{I}\) indeed resembles the equilibrium one, the path from \(\phi_{I}\) to \(\phi_{H}\) displays spatial microstructures which are not present in equilibrium. Notably, the number \(q\) of bumps along the instanton changes with parameters \(L\), \(\phi_{0}\) and \(\lambda\), which is indicated by the \(\mathcal{I}_{q}\)-labeled regions in Fig. 1. _The roles of \(\phi_{0}\), \(\lambda\) and \(L\)-_ Even with the ability to compute the critical states, it remains a hard task to gain analytical insights on the complete reaction paths. Our extensive numerical computations eventually show several non-trivial features, gathered in Fig. 3. First, for fixed \(\phi_{0}\) and \(\lambda\), we notice that the action \(V_{\phi_{I}}(\phi_{H})\) non-monotonically increases as the system size \(L\) increases, a behavior that is triggered by the apparition of new bumps along the reaction path. Second, the scaling of the action remains extensive in the system size: we find that \(V_{\phi_{I}}(\phi_{H})\propto c(\phi_{0},\lambda)L\) asymptotically. Our study suggests that \(c\) is a _decreasing_ function of \(\phi_{0}\) and \(\lambda\): for given system size \(L\), increasing \(\phi_{0}\) or \(\lambda\) drives the system in the homogeneous phase, see Fig. 1. Third, above some critical value of \(\lambda\), we find that the reaction path from \(\phi_{I}\) to \(\phi_{H}\) goes through the critical states with the _highest number_ of unstable directions. More precisely, when the critical states with \(q\) bumps fit into the system, then, _either_ (i) the reaction path goes through the critical states \(\phi_{c,q}(x)\) (and displays also \(q\) bumps), _or_ (ii) the reaction path displays \(q+1\) bumps, does not converge to \(\phi_{c,q}(x)\) and crosses the separatrix elsewhere. Situation (i) corresponds to the parts of the curves in Fig. 3 where the action is locally increasing, while situation (ii) corresponds to the locally decreasing parts of the curves on the same plot. In other words, Fig. 3 shows that the reaction paths can display \(q\) bumps _before_ the corresponding critical states \(\phi_{c,q}(x)\) emerges. The fact that the reaction paths go through the highest Morse index states is not observed for values of \(\lambda<0\) (when \(\phi_{0}>0\)). To gain insights on the selected reaction paths, we have performed a spectrum analysis of the operator acting on the perturbations around \(\phi_{I}\) (see SM). The analysis confirms that \(\phi_{I}\) possesses stable direction only, and two marginally stable directions (Goldstone modes) corresponding to the mass conservation and to space translation invariance (due to the periodic boundary condition). Interestingly, the eigenvectors may display an oscillating profile reminiscent of the states along the instanton. 
However, the less stable eigenvector (corresponding to the less negative eigenvalue) does not correlate with the number of bumps selected along the instanton, as one might have expected. Finally, our numerical results seem to indicate that more than one reaction path can be accessible. The yellow dots in Fig. 3 pinpoint the crossing of the branches where reaction paths display \(q\) and \(q+1\) bumps. There is thus a region close to these points where the action \(S[\phi]\) is multivalued, and convergence seems to depend on the path initialization.

Figure 3: Minimum action \(V_{\phi_{I}}(\phi_{H})\) as a function of the system size \(L\) (top panel), for paths starting at \(\phi_{I}\) and reaching \(\phi_{H}\). Here \(\lambda=2\) and \(\phi_{0}=0.65\). The action increases non-monotonically because increasing the system size \(L\) allows for qualitatively different reaction paths. The successive portions of the curve correspond to different types of paths displaying an increasing number of bumps, see bottom panels. The vertical dashed lines indicate the \(L_{q}^{*}\), the critical lengths where pairs of critical states with \(q\) bumps appear. The values \(L_{q}^{*}\) are also given in Fig. 2(c). The yellow dots indicate where branches cross each other. The \((*)\) symbol indicates a branch on which the path is no longer axisymmetric (see SM).

_Phase transitions in 2d-_ The reaction paths can also be calculated in dimension \(d=2\) using the deep gMAM algorithm. We found that the path from \(\phi_{H}\) to \(\phi_{I}\) (not shown) follows what is predicted by classical nucleation theory [1]. Also, as expected, the path from \(\phi_{I}\) to \(\phi_{H}\) for \(\lambda=0\) follows the reverse relaxation path since the dynamics is in equilibrium (not shown). As in \(d=1\), the path from \(\phi_{I}\) to \(\phi_{H}\) is, however, more complicated, displaying microstructure patterns, now with radial symmetry. There is some evidence that the instantons do not go through the multi-spike profiles that are found (numerically) to be the critical states of the AMB (see SM and Ref. [39]), since the action values for such instantons are always larger than that of the radially symmetric path. We emphasize that in \(d=2\) the critical states in Cahn-Hilliard are much more difficult to characterize [39] than in \(d=1\) [40], and this question remains open for the active Model B. All in all, a comparison to the Arrhenius law for \(\lambda=0\) shows that the active term significantly reduces the action needed to escape the inhomogeneous state.

_Conclusion_- We have computed the phase diagram of the AMB in \(d=1\), identified the various nucleation scenarios in the binodal, and showed that the instanton phenomenology is similar in \(d=2\). By computing the reaction paths, we were able to identify the regions where the homogeneous state is thermodynamically preferred. The fact that the action \(V_{\phi_{I}}(\phi_{H})\) remains extensive in the system size, while \(V_{\phi_{H}}(\phi_{I})\) remains finite, confirms that the system should eventually phase-separate as \(L\rightarrow\infty\) when it lies in the binodal region. Our results are consistent with those of Cates and Nardini [1], who show that nucleation from the homogeneous state in the AMB for \(d\geq 2\) is qualitatively similar to classical nucleation theory in equilibrium. Our numerical results were obtained using deep gMAM [34] and cross-checked in \(d=1\) by running the classical gMAM [30].
While the latter algorithm is more accurate, the discretization scheme adopted for the Cahn-Hilliard equation is very hard to treat in \(d\geq 2\), where the stability conditions of the scheme are very constraining. The deep gMAM suffers less from the increase of dimensionality. These features make the method proposed here relevant for numerous active matter systems which may undergo phase separation. ###### Acknowledgements. We thank Thibaut Arnoulx de Pirey, Cesare Nardini, Jeremy O'Byrne, and Julien Tailleur for interesting discussions. This work was supported by the Materials Research Science and Engineering Center (MRSEC) program of the National Science Foundation under Grants No. NSF DMR-1420073 and by Grant No. NSF DMR-1710163. R. Z. thanks Laboratoire MSC Paris for hospitality. R. Z. and E. V.-E. would also like to thank the Center for Data Science ENS Paris for hospitality.
2309.03651
Learning of Generalizable and Interpretable Knowledge in Grid-Based Reinforcement Learning Environments
Understanding the interactions of agents trained with deep reinforcement learning is crucial for deploying agents in games or the real world. In the former, unreasonable actions confuse players. In the latter, that effect is even more significant, as unexpected behavior causes accidents with potentially grave and long-lasting consequences for the involved individuals. In this work, we propose using program synthesis to imitate reinforcement learning policies after seeing a trajectory of the action sequence. Programs have the advantage that they are inherently interpretable and verifiable for correctness. We adapt the state-of-the-art program synthesis system DreamCoder for learning concepts in grid-based environments, specifically, a navigation task and two miniature versions of Atari games, Space Invaders and Asterix. By inspecting the generated libraries, we can make inferences about the concepts the black-box agent has learned and better understand the agent's behavior. We achieve the same by visualizing the agent's decision-making process for the imitated sequences. We evaluate our approach with different types of program synthesizers based on a search-only method, a neural-guided search, and a language model fine-tuned on code.
Manuel Eberhardinger, Johannes Maucher, Setareh Maghsudi
2023-09-07T11:46:57Z
http://arxiv.org/abs/2309.03651v1
# Learning of Generalizable and Interpretable Knowledge in Grid-Based Reinforcement Learning Environments ###### Abstract Understanding the interactions of agents trained with deep reinforcement learning is crucial for deploying agents in games or the real world. In the former, unreasonable actions confuse players. In the latter, that effect is even more significant, as unexpected behavior causes accidents with potentially grave and long-lasting consequences for the involved individuals. In this work, we propose using program synthesis to imitate reinforcement learning policies after seeing a trajectory of the action sequence. Programs have the advantage that they are inherently interpretable and verifiable for correctness. We adapt the state-of-the-art program synthesis system DreamCoder for learning concepts in grid-based environments, specifically, a navigation task and two miniature versions of Atari games, Space Invaders and Asterix. By inspecting the generated libraries, we can make inferences about the concepts the black-box agent has learned and better understand the agent's behavior. We achieve the same by visualizing the agent's decision-making process for the imitated sequences. We evaluate our approach with different types of program synthesizers based on a search-only method, a neural-guided search, and a language model fine-tuned on code. ## 1 Introduction Humans can easily explain other agents' behavior, living or artificial, using a single demonstration. However, generating explanations post-hoc in (deep) reinforcement learning (RL) after observing an agent's interactions in an environment remains underexplored. Moreover, it is challenging to produce an informative explanation to understand the agent's reasoning for selecting a specific action for a particular state. Nevertheless, understanding the behavior of artificial agents is crucial for deploying agents in the real world or games. In games, one wants to ensure the agent behaves similarly and avoid confusing the players with unreasonable actions. In real-world scenarios, this is even more important because, for example, in self-driving cars, unpredictable actions can cause accidents and lead to serious harm for those involved. Therefore, RL is not yet applicable in real-world scenarios since the behavior of agents trained with RL is not always predictable and, thus, cannot be verified for all edge cases [1]. In this work, we propose using program synthesis to imitate RL policies after seeing a trajectory of the action sequence. Program synthesis is the task of finding a program for a given specification, such as a natural language description or input-output examples [15]. By distilling neural network policies into programmatic policies, we are able to verify the program for correctness and use traditional formal verification tools [1] to analyze the behavior and edge cases of synthesized programs. Another benefit of distilling policies into programs is that software developers can adapt the policy to their own needs, which makes it easy to further improve the programmatic policies or adapt them to other scenarios [16]. Ideally, we desire to extract programs that can explain decisions and solve the environment; nevertheless, in this work, we start by dividing complete trajectories into sub-trajectories to be able to find programs at all. Therefore, we intend to lay the foundation for a complete policy extraction algorithm. 
To accomplish this, we adapt the state-of-the-art program synthesis system DreamCoder [10] for learning concepts in grid-based RL environments and demonstrate that it can extract a library of functions. We collect trajectories of the policy, i.e., state-action pairs, from an oracle trained with RL and use the collected data for DreamCoder to imitate these state-action sequences with programs. We use these programs to extract core concepts from the environment, represented as functions. To enable the system to learn a library, we introduce a domain-agnostic curriculum based on the length of the state-action sequences to imitate. By inspecting the generated library, we can make inferences about the concepts the agent has learned and better understand the agent's behavior. We achieve the same by visualizing the agent's decision-making process for the imitated sequences. We evaluate our approach with three different program synthesizers: a search-only approach, a neural-guided search, and a language model fine-tuned on code. Our main contributions are as follows: * Introducing a framework for learning reusable and interpretable knowledge that can reason about agent behavior in grid-based reinforcement learning environments * An evaluation of the method on a navigation task through a maze and on two simplified Atari game environments, Asterix and Space Invaders * A comparison of different program synthesis algorithms, including enumerative search, neural-guided enumerative search, and a fine-tuned language model with and without library learning * An analysis of extracted functions of the generated libraries * We open-source the code to enable further research1. Footnote 1: [https://github.com/ManuelEberhardinger/ec-rl](https://github.com/ManuelEberhardinger/ec-rl) ## 2 Related Work **Program Synthesis and Library Learning** Program synthesis has a long history in the artificial intelligence research community [21, 14]. In recent years, many researchers have combined deep learning with program synthesis to make program search more feasible by reducing or guiding the search space [1, 22, 15, 16, 17]. In contrast to the heuristic-based search algorithms, one can also use language models to synthesize programs from text prompts [11, 12, 13, 14, 15]. Another promising method is learning a library of functions from previously solved problems. These functions are then reusable in an updated domain-specific language to solve more challenging problems [16, 17, 18, 19, 20]. **Explainable Reinforcement Learning** There exists a variety of methods in the explainable reinforcement learning (XRL) domain. In a recent comprehensive survey [13], the authors divide XRL into four explainable categories: model, reward, state and task. Programmatic policies, where a policy is represented by a program, are part of the model-based explanations [15, 16, 17, 18, 19]. Other works in this category synthesize finite state machines to represent policies [10] or use models based on decision trees [20, 2]. Our method belongs to the same category since we explain sub-trajectories of policies, and our main goal in the future is to extract a program that can represent the full policy. ## 3 Background In this section, we give a brief introduction to the different research topics and concepts that we combine to learn structured and reusable knowledge in grid-based reinforcement learning environments. 
**Program and Domain-specific Language** This work considers programs defined in a typed domain-specific language (DSL) which is based on the Lisp programming language [16]. The primitives of the DSL, i.e., the provided functions and constants, are control flows, the actions the agent can use, and modules to perceive the agent's environment. Since we work with grid environments, the agent's perception consists of modules to determine certain positions on the grid and compare them with the available objects in the environment, such as walls, empty cells or game-specific objects. The control flows include if-else statements and Boolean operators to formulate more complex conditions. The DSL is a probabilistic grammar with a uniform distribution over the primitives, i.e., each primitive is assigned the same probability of being used. Listing 1 shows an example program that gets an object on the map x and compares it to a wall object. If there is a wall at the specified position, the left action is chosen, otherwise the forward action. The full DSL is included in Appendix A. **Program Synthesis** One of the core components in this work is the program synthesizer. Our work is based on DreamCoder, a state-of-the-art program synthesis system that combines program synthesis with library learning. DreamCoder provides an implementation of a search algorithm that enumerates programs in decreasing order of their probability of being generated based on a given probabilistic grammar. The programs are checked against a given specification until the top \(k\) most likely solutions are found or a timeout is reached [10]. To improve search time, a neural network is trained to predict a distribution over the programs defined by the DSL, thus adapting the uniform distribution to the one that fits the training data of the neural network. This means that the network predicts the probability of the primitives in the DSL, which results in programs being found faster because they are checked earlier by the enumerative search algorithm [10]. In order to compare different types of program synthesizers, we replace DreamCoder's program synthesis component with a neural program synthesizer. For this, we use CodeT5 [13], a T5 model [16] fine-tuned on multiple programming languages and code-related tasks. Raffel et al. [16] introduced the T5 model with an encoder-decoder architecture and unified different natural language processing (NLP) tasks into a single one by converting them into a text-to-text format. That allows the authors to treat every problem in one way, i.e., using text as input and producing text as output. In this work, CodeT5 is further trained on Lisp programs to synthesize programs in the provided DSL by converting the agent's observation into a text prompt and synthesizing programs in text format as output. **Library Learning** The goal of learning libraries is to build a library of specialized concepts in a particular domain that allow programs to be expressed in a concise way. This is similar to software engineers using open source libraries to improve their programming efficiency, since someone else already implemented the needed concepts in a given domain. We use the DreamCoder library learning module to extract functions from solved tasks by analyzing synthesized programs. These programs are refactored to minimize their description length while growing the library. 
Instead of only extracting syntactic structures from programs, DreamCoder refactors programs to find recurring semantic patterns in the programs [1]. In this work, we use library learning to build a library of functions for grid-based RL environments that allow us to make inferences about the knowledge acquired by the black box agent during training. **Imitation Learning** To simplify the problem, we limit ourselves to imitation learning, instead of directly finding programs from rewards [10]. Although directly finding programs from rewards is an interesting challenge for future research, our main objective is to make the agent's behavior interpretable. We define the problem we want to solve as an imitation of sub-trajectories of state-action pairs collected from a previously trained agent. ## 4 Method Figure 1 shows a high-level overview how we adapted DreamCoder with a neural program synthesizer based on the CodeT5 model [22], therefore we call this approach LibT5. The framework consists of three components and a curriculum. In addition, we need an oracle for collecting the data to be imitated. The collected data is provided to the curriculum in the beginning of the training process. For evaluating DreamCoder on the problem of imitating sub-trajectories of state-action sequences, we exchange the LibT5 component with DreamCoder and integrate it into the curriculum. The main differences of LibT5 compared to DreamCoder are the program synthesizer and that DreamCoder first performs an enumerative search before training the neural network and then includes programs found from solved tasks in the training data set. LibT5 is trained only on random programs. As the oracle is dependent on the domain and not part of the method, we describe the used oracles in Section 5. ### LibT5 LibT5 consists of three components that are executed iteratively. First, we train the CodeT5 model, then we synthesize programs for the provided test data. Finally, the synthesized programs are evaluated by the symbolic component for correctness and analyzed to extract a library of functions. **Training Component** The first part is the training process of the CodeT5 model [22] with randomly generated programs from the current DSL. The randomly generated programs are executed in a given environment for \(t\) steps to collect input-output examples, i.e., sequences of state-action pairs to be imitated, as a training dataset. \(t\) is chosen randomly for each program, so we do not overfit on a specific sequence length. In our setup, we generate 50000 random programs. They are then executed in a randomly selected environment of the provided ones for each domain to collect data to imitate (see Figure 2 for example environments). We execute each program with a random sequence length \(t\) between \(t_{min}\) and \(t_{max}\). The programs do not have a specific target or reward since they are sampled from the DSL. Our goal in creating a training data set is to exhibit the behavior of programs in a specific RL domain, i.e., how the agent is controlled by given programs in a domain. Random program generation is limited to a maximum depth \(d_{max}\) of the abstract syntax tree. We train the model for five epochs in each iteration. CodeT5 is used without any modifications, as we generate text prompts from the state-action pairs which the model maps to the random programs. 
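To make this data-generation step concrete, the following is a minimal Python sketch of the training-component loop. The stand-in program, the helper names, the integer encoding of the grid, and the gym-style `reset`/`step` interface are assumptions for illustration only; the authors' programs are expressed in the Lisp-based DSL, not Python.

```python
import random

# Illustrative stand-in for a sampled DSL program (cf. the Listing 1 description:
# inspect one grid cell and turn left if it holds a wall, otherwise go forward).
WALL = 2  # assumed integer id of a wall cell

def example_program(observation):
    return "left" if observation[0][3] == WALL else "forward"

def collect_training_data(sample_program, environments, n_programs=50_000,
                          t_min=3, t_max=20):
    """Sketch of the data-generation loop: run random DSL programs in randomly
    chosen environments for a random number of steps and record the resulting
    (observation, action) pairs as imitation targets."""
    dataset = []
    for _ in range(n_programs):
        program = sample_program()          # random program from the current DSL
        env = random.choice(environments)   # one of the provided grid environments
        steps = random.randint(t_min, t_max)
        obs = env.reset()
        rollout = []
        for _ in range(steps):
            action = program(obs)           # the program acts as the policy
            rollout.append((obs, action))
            obs, _, done, _ = env.step(action)
            if done:
                break
        dataset.append((program, rollout))
    return dataset
```

The rollouts collected this way are what the text prompts described next are built from.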
The agent's observation, a 2D array of integers, is converted into a string representation, where each integer represents an object in the environment, such as a wall or an enemy. Then the action is appended after the observation. We explain the text prompt generation in Appendix B. **Program Synthesis Component** The second component converts the data provided from the curriculum for the current sequence length into the text prompt for the model, and then the model synthesizes \(\mathrm{P}\) programs for the state-action sequences to be imitated. These programs are then passed to the symbolic part of the framework. Figure 1: The architecture for creating explanations for a given reinforcement learning environment. The framework can be decomposed into three components that are executed iteratively and are guided by a curriculum. First, we train the CodeT5 model, then we synthesize programs for the provided data from the curriculum. Finally, the synthesized programs are evaluated by the symbolic component for correctness and analyzed to extract a library of functions. We describe the method in detail in Section 4. **Symbolic Component** In this component, the filter module evaluates the \(\mathrm{P}\) programs for syntactic correctness in the provided DSL and for functional correctness, i.e., whether they imitate the state-action sequence. The library learning module uses the correct programs to generate a library by extracting functions from the found programs and adding them to the current DSL. It extracts functions only if a part of a program occurs multiple times in other programs synthesized on the oracle data. That way, the extracted functions are beneficial for the DSL, since they have been synthesized several times for different state-action sequences. ### Curriculum The curriculum is based on the action sequence length and is therefore domain-agnostic. We start with an initial sequence length of three, and at each iteration, after the symbolic component has completed the library learning phase, we check whether the sequence length should be increased. We always sample new random programs from the DSL and run them in the environment, as the library is updated each iteration to represent more diverse programs. We increment the action sequence length if at least 10% of the oracle's data is imitated and stop the training process if the action sequence length has not been incremented twice in a row. This curriculum strategy is based on the assumption that longer sequence lengths are more complex than shorter ones. Programs that need to imitate three actions do not need to represent as much information as programs that imitate five actions; thus, the program length is shorter. Shorter programs are easier to synthesize compared to long ones because of the smaller search space. Ellis et al. (2021) showed that building up a library of complex functions enables DreamCoder to synthesize programs for more difficult tasks. Table 1 shows the different parameters for the evaluated domains, as they depend on the complexity and observation space of the environment. ## 5 Experiments In this section, we evaluate the different program synthesis systems on the problem of imitating sub-trajectories in two different domains. We first introduce both domains. Then, we evaluate the conducted experiments, followed by an introduction of the method for generating explanations from programs. Finally, we perform a thorough analysis of the extracted functions in the library. 
### Domains **Gridworld** The first domain we evaluate on is a navigation task from the grid-world domain (Chevalier-Boisvert et al., 2018). We trained the agent with the default hyperparameters from Parker-Holder et al. (2022) and then collected state-action pairs from the medium-sized perfect grid environment displayed on the top left in Figure 2. **MinAtar** Young and Tian (2019) introduced a miniature version of five games of the Arcade Learning Environment (Bellemare et al., 2013). The games are simplified to enable more efficient experimentation. MinAtar converts the games into a symbolic 10x10 grid representation with the same game dynamics. Each environment provides a one-hot encoded 10x10x\(n\) observation, where the \(n\) channels correspond to specific game objects such as cannon, enemy or bullet objects in Space Invaders. For our experiments we converted the state into a single 10x10 grid. We evaluate our method on Asterix and Space Invaders. We trained both agents with the default parameters provided by Young and Tian (2019). To generate diverse training data that does not always start from similar game states, we let the oracle play the episode for a random time before executing the program. This ensures that the training data set captures different aspects of the policy. ### Evaluation We evaluate the problem of imitating sub-trajectories for RL environments with four different program synthesizers: * Search: Program synthesis with an enumerative search algorithm. We use the implementation from Ellis et al. (2021) and the same DSL as DreamCoder to show the benefits of the neural-guided search. * DreamCoder: A neural-guided search algorithm with a library learning module (Ellis et al., 2021). * CodeT5: A language model fine-tuned on Lisp programs on our data [20]. * LibT5: The CodeT5 model combined with DreamCoder's library learning module. \begin{table} \begin{tabular}{|c||c|c|} \hline \hline Parameters & Navigation Task (5x5) & MinAtar Games (10x10) \\ \hline \(t_{min}\) & 5 & 3 \\ \(t_{max}\) & 60 & 20 \\ \(d_{max}\) & 6 & 20 \\ P & 100 & 500 \\ \hline \end{tabular} \end{table} Table 1: The parameters for the different domains. MinAtar games have a grid size of 10x10, while the navigation task has a partial observation of size 5x5. Figure 2: The environments for the Gridworld and the MinAtar domain used for the training data generation. For the final evaluation, we use data collected from the same agent but on different runs to ensure that we do not evaluate and train on the same data. The performance is measured by \[Accuracy=\frac{1}{N}\sum_{\tau\in D}f(\mathrm{P},\tau),\] \[f(\mathrm{P},\tau)=\begin{cases}1,&\text{if }\sum_{\rho\in\mathrm{P}}g(\rho,\tau)>0\\ 0,&\text{otherwise}\end{cases}\] \[g(\rho,\tau)=\underbrace{\mathds{1}\left\{\ \mathrm{EXEC}(\rho,s)=a,\ \forall(s,a)\in\tau\right\}}_{\text{is }0\text{ after the first }(s,a)\text{ where }\mathrm{EXEC}(\rho,s)\neq a}\] where \(N\) is the size of the dataset \(D\) to imitate, \(\tau\) is a sub-trajectory from \(D\) that consists of state-action pairs \((s,a)\), and \(\mathrm{P}\) is the set of all synthesized programs from a given method. \(f(\mathrm{P},\tau)\) checks if there exists any program \(\rho\) out of all synthesized programs \(\mathrm{P}\) that is correct. \(g(\rho,\tau)\) evaluates whether a given program \(\rho\) can imitate the full rollout \(\tau\) and returns 1 if this is the case and 0 otherwise. \(\mathrm{EXEC}(\rho,s)\) executes the program on a given state \(s\) and returns an action \(a\). The indicator function \(\mathds{1}\) maps Boolean values to 0 and 1. 
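This metric translates almost directly into code. The following sketch assumes the synthesized programs are available as Python callables standing in for DSL programs; the function names are ours, not the authors'.

```python
def imitates(program, trajectory):
    """g(rho, tau): 1 iff the program reproduces every action of the sub-trajectory,
    i.e. EXEC(rho, s) = a for all (s, a) in tau."""
    return int(all(program(state) == action for state, action in trajectory))

def accuracy(dataset, synthesized):
    """Accuracy over the dataset D: a sub-trajectory counts as solved (f = 1) if at
    least one of the programs synthesized for it imitates the whole sequence."""
    solved = sum(
        1 for trajectory, programs in zip(dataset, synthesized)
        if any(imitates(p, trajectory) for p in programs)
    )
    return solved / len(dataset)

# Toy usage: one task with two candidate programs, one of which is correct.
trajectory = [((0, 0), "forward"), ((0, 1), "left")]
good = lambda s: "forward" if s == (0, 0) else "left"
bad = lambda s: "forward"
print(accuracy([trajectory], [[bad, good]]))  # -> 1.0
```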
Since we have formulated the problem as an imitation of sub-trajectories, we cannot use a more appropriate metric for evaluation. In RL, there are often many different trajectories that achieve a goal in an environment, but in our case we need to evaluate our framework using this more rigorous metric until a complete policy can be distilled into a program with our framework. **Fairness of evaluation** Considering fundamental differences, a fair comparison of the used algorithms can be challenging. We describe the used hardware resources and the main distinctions of the experimental setup in more detail in Appendix C. **Results** Figure 3 shows the final evaluation on newly collected data in the same environment that was used to extract functions from programs found on the solved test tasks. Depending on the domain, different program synthesis methods are well suited. Figure 3(a) shows the evaluation for the maze environment with the smallest observation space of 5x5. The search-based methods can solve sub-trajectories almost twice as long as the neural-based models. For Asterix, all methods show similar performance (Fig. 3(b)). For Space Invaders, LibT5 performs worse compared to the other methods (Fig. 3(c)). Figure 3(a) shows that library learning can be useful for neural program synthesizers, but also detrimental depending on the environment. For both MinAtar environments, LibT5 is not as good as CodeT5. This suggests that the more diverse DSL can lead to the problem of "catastrophic forgetting" [17], and previously solved programs become unsolvable. In addition, longer action sequences are no longer solved as well. Our hypothesis is that the library is not beneficial, although more functions have been extracted from LibT5 compared to DreamCoder (see Table 2). Inspecting programs synthesized with LibT5 for Space Invaders shows that they are too complicated compared to programs synthesized with CodeT5. The reason is that LibT5 uses the extracted functions of the library, even if the task is easier to solve with the initial primitives. From both observations, we conclude that neural program synthesizers may be useful for larger observation spaces. Catastrophic forgetting could be mitigated by adjusting the probabilities of the functions in the probabilistic grammar according to their usefulness. By lowering the probability of the more complex functions, the grammar will produce simpler programs. In addition, we need to improve the generation of training data by collecting different runs for each program or trying different representations for encoding the state-action pairs. Therefore, further research is imperative to better integrate the library learning module into the framework with neural program synthesizers. It is also evident from Figure 3 that a system without a curriculum cannot imitate complete action sequences, as it can currently imitate up to sequence lengths of 50. In comparison, complete trajectories are up to 1000 steps long depending on the environment. \begin{table} \begin{tabular}{|c||c|c|} \hline Environment & DreamCoder & LibT5 \\ \hline Maze & 17 & 27 \\ Asterix & 4 & 8 \\ Space Invaders & 15 & 23 \\ \hline \end{tabular} \end{table} Table 2: Number of extracted functions for different program synthesis methods. Figure 3: The evaluation of the different methods on the three environments. The evaluation data was collected on new rollouts of the trained agent. 
We evaluated the percentage of the correct imitated sub-trajectories for an increasing sequence length until no more programs were found. ### Inspecting the Program Library In this section, we analyze the libraries extracted from the evaluated methods. Appendix D includes the full libraries. Table 2 shows the number of extracted functions for the different program synthesis methods. LibT5 extracts for all environments the most functions. Figure 4 shows the discovered library of DreamCoder for the maze environment with a deep hierarchical structure. f10 is using seven previous discovered functions. Figure 5 shows the library of LibT5. In contrast to DreamCoder, more functions were extracted, but more often semantic similar functions are found and the deep hierarchical structure is missing. The use of a language model for program search in combination with library learning raises a new problem similar to one previously addressed in Inductive Logic Programming. Cropper (2020) analyzed what the perfect library size is and how to forget unnecessary programs in the library. This is also necessary in our case, as we assume that LibT5 synthesizes many programs that are semantically the same but differ syntactically. Therefore, the library learning module extracts many similar functions and adds them to the library. A similar problem is also observable in the Alpha-Code system, which clusters synthesized programs before selecting solutions for submission to programming competitions (Li et al., 2022). From this we conclude, that a larger library is not always beneficial for the program synthesizer. ### Visualization of the Decision Making Process Since programs are inherently interpretable, we developed a method to visualize the agent's decision-making process by highlighting those grid positions responsible for choosing a particular action. Since one position is not always sufficient to select the correct action, we create step-by-step explanations of the "reasoning process" by traversing the program call graph (Ryder, 1979) and logging all function calls and their parameters. Figure 6-left shows a program synthesized by DreamCoder. The data was collected from the maze environment for a sub-trajectory of 24 state-action pairs. From line 1 to 13, we show the full implementation of the program. The first \(\lambda\) denotes the start of the program and receives two input parameters, the map and the direction of the agent. All subsequent \(\lambda\) represent discovered functions from the library, followed by the input parameters. In line 16, we show the program when we use the function f5 from the library. This shows the effectiveness of using library learning in combination with program synthesis. Figure 6-right visualizes three examples of the reasoning process by highlighting the responsible grid cells in yellow. The agent's position is in blue, which is the same in all visualizations because the partial observation of the agent is aligned to the same position by the environment. The direction above the images indicates in which direction the agent is looking on the full map, since the surrounding area is only partially visible. The walls are gray, and the path through the maze is black. The program first checks if the direction is direction-3 and returns an goal-object or empty-object. Since for all three visualizations the direction is one or zero, the empty-obj is always returned. Then the empty-obj is compared with the cell on position (x=3,y=0). The result is an input parameter for f4 and f3. 
The coordinates (3,0) are also input parameters of f5. Then f4 is called, which gets the object at position (1, 2). This object is then compared with an empty object in f3, and it is checked whether the other input parameter of f5 is also true. Depending on this result, the object at position (0,1) or a wall-obj is returned. The returned object is then compared to the position (1,2), and finally the agent decides whether to use the left or the forward action. Figure 4: Maze: The extracted functions from programs found by using DreamCoder. The function f10 uses seven previously discovered functions (zoom in for better visibility). Figure 5: Maze: The extracted functions from programs found by using LibT5 (zoom in for better visibility). Currently, the explanations of the policy can be incorrect, as we do not have a complete policy extraction algorithm and only imitate sub-trajectories collected from an oracle. Without imitating the complete trajectories, the created explanations can be wrong; as such, the programs found for longer action sequences are more reliable, as they explain more of the policy. ### Discussion & Limitations In our experiments, we have demonstrated that DreamCoder and LibT5 are able to learn a library of functions for a navigation task and two game environments with a discrete state and action space. By traversing the program call graph of synthesized programs, we created visual explanations for the agent's decision-making process. We concluded our experiments with an analysis of the generated libraries for the given domains and discussed the implications. While learning a library of concepts showed promising results for grid-based environments with a small observation space, we need to further improve our framework for medium-sized and large observation spaces. We have shown that it is possible for the MinAtar environments to learn a library and imitate short sequences using the CodeT5 model, but the library was not used effectively and the synthesized programs were too complicated compared to the data to be imitated. Therefore, we need to further investigate how the library learning module can benefit neural program synthesizers without compromising their ability to imitate shorter state-action sequences. If this is possible, this opens up other interesting domains, such as AlphaGo [10], where humans struggle to comprehend the reasoning process of strategies discovered by artificial agents through self-play. Additionally, for these environments, it is straightforward to define the functional primitives for the agent's perceptions and actions. However, that becomes challenging for continuous state and action spaces, or when an image represents the state. For images, we could use an object detection model which parses the images before generating text prompts for the program synthesizer, similar to [10], where an image is parsed into a structural representation that is then used in a program. For continuous representations, further research is imperative to verify the effectiveness of this method. ## 6 Conclusion and Future Work In this paper, we adapted the DreamCoder system to learn structured and reusable knowledge in grid-based reinforcement learning environments that allows reasoning about the behavior of black-box agents. We further evaluated the use of a neural network as a program synthesizer and discussed the positive and negative aspects of both methods. 
The main disadvantage of the proposed framework is its dependence on an oracle for collecting trajectories, whereas it does not depend on much background knowledge except for the initial primitives in the DSL. This work opens many possibilities for future work. The main focus is a policy extraction algorithm that can imitate the entire state-action sequences and not only parts of them. Additionally, we want to evaluate our method on continuous or image-based domains to validate that it is domain-agnostic. Figure 6: Left: The program for a given sub-trajectory synthesized by DreamCoder. Lines 1 to 13 show the program with the implementation of the functions. Line 16 shows the same program when calling f5 from the library. Right: The decision-making process when executing the program on the state-action sequence. We show explanations for three of 24 states of a given sub-trajectory. The grid positions that are checked in the program are yellow. The agent's position is marked blue and faces to the right. Grey and black indicate walls and empty cells, respectively. The forward action moves the agent one grid cell to the right. The left action only turns the agent \(90^{\circ}\) in the left direction but does not move it. The cell on position (2,1) is checked multiple times, as at first it is checked in f4 and then later in f1. We give a detailed explanation of the program in Section 5. ## Appendix A Domain-specific Language Table 3 shows the initial domain-specific language, which contains only the primitives necessary to get different cells on the grid, the control flow structures and Boolean operators. Since we use a typed DSL, we show the types for each function or value. If our primitive is a value, only one type appears in the type column. For functions, multiple types are combined with an arrow \(\rightarrow\). The last type represents the return value of the function. The types before it are the types of the input parameters. The type func represents a function because if-clauses return a new function to execute depending on the condition, since partial programs are also functions in Lisp. To generate random programs, we can specify the types of program to be generated. In our case, we always want programs of type \(\texttt{map}\rightarrow\texttt{direction}\rightarrow\texttt{action}\) or \(\texttt{map}\rightarrow\texttt{action}\), so a random program is always defined from two input parameters of type \(\texttt{map}\) and direction or one input parameter \(\texttt{map}\) and returns a value of type \(\texttt{action}\). We restrict the input parameter types of the \(\texttt{eq-obj}\)? function to mapObject and object so that sampling programs from the grammar always results in the comparison of at least one object from the map. ## Appendix B Text Prompts Text prompts are generated by converting the agent's observation into a string representation and then concatenating the string representation with the action. This is repeated for all state-action pairs until each pair in the sequence is represented as a string. Then all strings are combined into a single text prompt. Figure 7 shows an example of a state-action sequence of length five. On the right of the first line is the 2D array, followed by the string representation and the selected action. On the left is the corresponding maze represented by the 2D array. The final text prompt is then generated by iteratively concatenating all the string representations and actions for the entire sequence. 
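Before the concrete prompt shown below, here is a minimal Python sketch of this conversion. The flattening of the 2D observation into a digit string follows the description above; the exact separator between state-action pairs is an assumption.

```python
def observation_to_string(obs):
    """Flatten a 2D integer observation into a single digit string (no spaces)."""
    return "".join(str(cell) for row in obs for cell in row)

def trajectory_to_prompt(trajectory):
    """Concatenate the string representation of each observation with its action."""
    pairs = [f"{observation_to_string(obs)} {action}" for obs, action in trajectory]
    return " -> ".join(pairs)

# Toy usage with a 2x3 observation (1 = empty cell, 2 = wall):
obs = [[2, 2, 2], [1, 1, 2]]
print(trajectory_to_prompt([(obs, "left"), (obs, "forward")]))
# prints: 222112 left -> 222112 forward
```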
The final text prompt for the state-action sequence is: \(\texttt{22222222212222122221222220}\) left -> \(\texttt{12222222222122222222222223}\) left -> \(\texttt{1111211212121211122222222222}\) left -> \(\texttt{2222222222222221111222122111211}\) forward. The \(1\) represents empty grid cells and the \(2\) represents wall objects on the map. ## Appendix C Experimental Setup & Hardware Resources Table 4 shows the hyperparameters for the different methods. We use the same hyperparameters for each iteration. We adapted the hyperparameters from the DreamCoder system to our problem and tried out a few timeouts; since our system is executed iteratively, the runtimes of the experiments are very long, so an extensive hyperparameter search was not possible. For the neural-guided search, we train an encoder-decoder neural network. The encoder for the state observations of the maze was adapted from Parker-Holder et al. (2022). For the MinAtar environments, we adapted the encoder from Young and Tian (2019). The decoder, which predicts which functional primitives are used in the program to be synthesized, is generated automatically by the DreamCoder system (Ellis et al., 2021). We train the neural-guided search model for 5000 update steps. Ellis et al. (2021) explain that DreamCoder does not require a large data set because it is based on a search method that is already defined by the DSL. The neural network only improves the search. CodeT5, on the other hand, learns everything from data and therefore requires many more programs. The library learning module is restricted to an arity of three, which means that extracted functions can have up to three input parameters. ## Appendix D Extracted Libraries In this section, we present the full libraries created by DreamCoder and LibT5 for the three evaluated environments. Figure 5 shows the library of LibT5 for the maze environment. For the Asterix environment, both methods could not extract that many functions (see Figures 8 and 9); we think that this shows the difficulty of the game compared to the maze environment. For Space Invaders, LibT5's library in Figure 10 shows a deeper hierarchical structure compared to DreamCoder's extracted functions in Figure 11, but the evaluation has shown that a larger library is not always useful for the program synthesizer, especially for neural program synthesis. Figure 7: Text prompts are created by converting the 2D array into a string where all values are concatenated without spaces. Figure 8: Asterix: The extracted functions from programs found by using LibT5. ## Acknowledgements The work of S. M. was supported by the German Research Foundation under Grant MA 7111/7-1.
2309.13442
How Do Drivers Behave at Roundabouts in a Mixed Traffic? A Case Study Using Machine Learning
Driving behavior is considered a unique driving habit of each driver and has a significant impact on road safety. Classifying driving behavior and introducing policies based on the results can reduce the severity of crashes on the road. Roundabouts are particularly interesting because of the interconnected interaction between different road users in the area of roundabouts, where different driving behavior is hypothesized. This study investigates driving behavior at roundabouts in a mixed traffic environment using a data-driven unsupervised machine learning approach to classify driving behavior at three roundabouts in Germany. We used a dataset of vehicle kinematics of a group of different vehicles and vulnerable road users (VRUs) at roundabouts and classified them into three categories (i.e., conservative, normal, and aggressive). Results showed that most of the drivers proceeding through a roundabout can be classified into two driving styles, conservative and normal, because traffic speeds in roundabouts are relatively lower than in other signalized and unsignalized intersections. Results also showed that about 77% of drivers who interacted with pedestrians or cyclists were classified as conservative drivers, compared to about 42% among drivers that did not interact, or about 51% among all drivers. It seems that drivers tend to behave abnormally as they interact with VRUs at roundabouts, which increases the risk of crashes when an intersection is multimodal. Results of this study could be helpful in improving the safety of roads by allowing policymakers to determine effective and suitable safety countermeasures. Results will also be beneficial for Advanced Driver Assistance Systems (ADAS) as the technology is being deployed in a mixed traffic environment.
Farah Abu Hamad, Rama Hasiba, Deema Shahwan, Huthaifa I. Ashqar
2023-09-23T18:02:57Z
http://arxiv.org/abs/2309.13442v1
# How Do Drivers Behave at Roundabouts in a Mixed Traffic? A Case Study Using Machine Learning ###### Abstract Driving behavior is considered a unique driving habit of each driver and has a significant impact on road safety. Classifying driving behavior and introducing policies based on the results can reduce the severity of crashes on the road. Roundabouts are particularly interesting because of the interconnected interaction between different road users in the area of roundabouts, where different driving behavior is hypothesized. This study investigates driving behavior at roundabouts in a mixed traffic environment using a data-driven unsupervised machine learning approach to classify driving behavior at three roundabouts in Germany. We used a dataset of vehicle kinematics of a group of different vehicles and vulnerable road users (VRUs) at roundabouts and classified them into three categories (i.e., conservative, normal, and aggressive). Results showed that most of the drivers proceeding through a roundabout can be classified into two driving styles, conservative and normal, because traffic speeds in roundabouts are relatively lower than in other signalized and unsignalized intersections. Results also showed that about 77% of drivers who interacted with pedestrians or cyclists were classified as conservative drivers, compared to about 42% among drivers that did not interact, or about 51% among all drivers. It seems that drivers tend to behave abnormally as they interact with VRUs at roundabouts, which increases the risk of crashes when an intersection is multimodal. Results of this study could be helpful in improving the safety of roads by allowing policymakers to determine effective and suitable safety countermeasures. Results will also be beneficial for Advanced Driver Assistance Systems (ADAS) as the technology is being deployed in a mixed traffic environment. ## 1 Introduction Understanding driving behavior at roundabouts is important for several reasons [8, 9, 10, 11, 12]. First, roundabouts are designed to reduce the severity and frequency of accidents compared to traditional intersections, but the safety benefits depend on drivers following the correct behavior. Understanding how drivers behave at roundabouts can help identify potential safety hazards and inform improvements to the design and operation of roundabouts. Second, roundabouts can improve traffic flow by reducing delays and minimizing the need for traffic signals or stop signs. However, traffic flow can be affected by driver behavior, such as improper lane use or failure to yield to other vehicles. Understanding driving behavior at roundabouts can help identify areas where traffic flow can be improved. Third, efficient use of roundabouts depends on drivers using proper behavior, such as entering and exiting the roundabout in the correct lane, yielding to other vehicles already in the roundabout, and signaling their intentions. Fourth, compliance with traffic rules and regulations is essential for the safe and efficient operation of roundabouts. Understanding driving behavior at roundabouts can help identify areas where drivers are not complying with traffic rules, which can inform enforcement efforts and education campaigns. 
In this study, we used a dataset from three roundabouts in Germany to classify driving behavior into three styles, namely, conservative, normal, and aggressive, using an unsupervised machine learning for clustering. We also investigated the driving behavior of drivers who interacted with pedestrians and bicycles (VRUs) going through the roundabouts. We compared the resulted behavior of interacted drivers with pedestrians and bicycles and drivers who did not interact with them. To the best of our knowledge, this is the first study in the literature that addresses the two abovementioned contributions. ## 2 Literature Review Understanding the behavior of road users is of vital importance for the development of trajectory prediction systems. Moreover, for a successful market launch of automated vehicles (AVs), proof of their safety is essential. While there has been much research on several datasets and different types of trajectories of road users, bunch of researchers have taken the roundabouts into consideration as it describes a high level of complexity. Measurement data should be collected at a reasonable effort, contain naturalistic behavior of road users, and include all data relevant for a description of the identified scenarios in sufficient quality. Human drivers naturally use their knowledge of other road users' behaviors to improve their driving and the safety of the traffic. A study considered two three-leg junctions and one roundabout to understand how at grade intersections affect driving behavior by comparing the drivers' stress levels using Electrodermal activity. The stress level induced by each type of intersection was evaluated through an Electrodermal Impact Index (EEI). Results suggested that the stress level induced by roundabouts is more than double that induced by standard intersections [13]. Another study used data from five roundabouts in addition to a questionnaire that has been randomly distributed to drivers to explore driving behavior. Results showed that the percentage of drivers breaching at least one traffic regulation is approximately 90% of all drivers. Leaving without flashing and entering the roundabout without giving way were the most frequent violation types [9]. A similar study distributed a questionnaire to obtain the needed information then linking them with the real situation on roundabouts. Analyzing the data showed a large percent of drivers have a good knowledge regarding roundabout rules. A few modifications were done on two roundabouts to compare between before and after based on measures of effectiveness. The analyzed data showed that for vital areas and for traffic volumes greater than 3000 veh/h; the level of service ranges between B and C, and the control delay ranges between 10 s to 30 s. The study helped traffic planners and designers in the decision-making process providing several intersection alternatives between roundabouts and signalized intersection, where the impact of driving behavior should be considered [10]. Detecting risk driving and the prediction of drivers' behavior intentions is necessary to maintain the safety of road users and raise the success rate of driverless vehicles. Many studies have investigated the nature and variation of driving risk in roundabouts, to allow connected vehicles to quickly assess a personalized and real-time level of risk associated with crossing a roundabout. 
One study recorded time to collision (TTC) at roundabouts, then applied machine learning on the data to assess the probability that a vehicle will choose the upcoming exit. A risk metric was developed based on the TTC data and the probability. The results show a strong relation with the coefficient of variation of TTC values on roundabouts. The obtained risk knowledge has the potential to support driver assistance systems in roundabouts [12]. Another study took into consideration the steering wheel angle, angle velocity, and vehicle position to predict whether the driver will take the upcoming exit or not. They collected data of driving behaviors to model human driving behavior in interaction with roundabouts by using support vector machine - a supervised machine learning model that analyze data for classification and regression analysis. From the experimental results, the vehicles position can be estimated in which the prediction becomes reliable [8]. A study presented two methods to estimate when the driver leaves a roundabout based on the behavior of the drivers. The first method starts with training data to extract typical behavior patterns, then using it to classify other drivers' intentions. The second method does not require a training data. It generates the typical behavior patterns from a precise map and the classification was done on arbitrary roundabouts if the map is available. Results showed that the performance of the map-based approach is comparable to the data-driven approach [14]. Many dynamic factors influence drivers' behaviors such as speed, acceleration, circulating flow of the potentially conflicting vehicles. A study analyzed these factors in addition of driving behavior characteristics then applied Random Forest algorithm to predict the driving behavior. Using a simulator to mimicking real driving conditions using traffic participants with different motion styles, four typical roundabouts were created to collect the data. Random forest model has good performance in predicting the roundabout behaviors of human drivers. Results show that the geometric parameters have little contribution for predicting the driving behavior. The relative velocity between surrounding vehicle and master vehicle are most contributively to human driving behavior [11]. Another study focused on the behavior of drivers at turbo-roundabouts as well as the kinematic parameters of the vehicle (i.e., speed and acceleration). The calibration of traffic microsimulation models or the assignment of behavior parameters to closed-form capacity models can both benefit from empirical evaluations of these parameters. The study's findings revealed that vehicle speeds in entry lanes are quite low (below 25 km/h, 15 m before the yield line), and that ring lane accelerations typically have values below 1.5 m/s\({}^{2}\)[15]. ## 3 Dataset and Methods ### Dataset The aim of this study is to develop a classification model to classify drivers on road user trajectories using rounD dataset. The rounD dataset is a dataset of naturalistic road user trajectories recorded at three roundabouts in Germany. The dataset is applicable on many tasks such as road user prediction, driver modelling, scenario-based safety validation of automated driving systems. The dataset includes vehicles of different classes (i.e., car, truck, trailer, van, bus), pedestrians, bicyclists, and motorcycles. In addition, data is collected for a total of six hours of recording with more than 13,746 road users. 
Furthermore, the dataset was collected at three different recording locations covering different roundabout types, with a typical positioning error of \(<\)10 cm, as shown in Fig. 1 [6]. As [6] pointed out, the dataset was created using a pipeline that extracted data from 24 separate recordings of traffic at three different measurement locations in Aachen, Germany. These recordings captured more than 6 hours of video, with most of the recordings being made in the mornings to capture high traffic volume and a lot of interaction. From these recordings, more than 13,000 road users were extracted, including cars, trucks, vans, trailers, buses, pedestrians, bicyclists, and motorcycles. Figure 1: Images of the three recording sites included in the rounD dataset. The dataset provides detailed information on driver behavior at roundabouts, including how road users enter and exit the roundabouts, how they interact with other road users, and how they signal their intentions. It also includes information on the characteristics of the roundabouts themselves, such as their size, layout, and the presence of lane markings and pedestrian crossings. Importantly, the dataset contains no recorded collisions, indicating that roundabouts can be safe if drivers follow the correct behavior. The dataset is organized into three recording sites located in and around Aachen, Germany. These sites include a four-armed roundabout that connects a highway with Aachen, a roundabout in an urban area of Aachen, and a four-arm roundabout in a suburb of Aachen. Each site has its own unique characteristics, including varying traffic volume, lane markings, and pedestrian crossings. The rounD dataset is a valuable resource for researchers and policymakers seeking to improve roundabout design and operation. By providing detailed information on driver behavior and roundabout characteristics, the dataset can inform efforts to enhance safety, traffic flow, and efficiency at roundabouts. ### Methods To classify driving behavior at roundabouts, a previously developed framework from a study on signalized intersections and another study on work zones was utilized [16, 17]. The framework's main components were retained, but this study focused on a different road infrastructure, namely roundabouts. The first step involved extracting features from each driver's trajectory data using volatility measures, shown in Table 1. Volatility measures are significant safety parameters for identifying driver behavior and have been used in many studies [16, 17, 18]. A higher value of volatility measures implies the driver is more unstable and riskier, and hence more aggressive [16, 17, 18]. Thirteen different volatility measures were used. Next, these extracted features were utilized as input for an unsupervised machine learning algorithm to cluster each driver's behavior at the roundabout. The K-means algorithm was used in this study, which had been successful in previous studies [16, 17]. 
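As a concrete starting point for this pipeline, the following Python sketch loads one recording of the rounD dataset and builds per-driver speed and acceleration traces, which are the raw inputs of the volatility measures in Table 1. The file names, column names, and class labels follow the published layout of this dataset family but are assumptions here, and the vector magnitudes are used instead of the longitudinal components for simplicity.

```python
import pandas as pd
import numpy as np

def load_recording(prefix):
    """Load one recording and attach the road-user class to every frame.
    Assumes the tracks/tracksMeta CSV layout of the published dataset."""
    tracks = pd.read_csv(f"{prefix}_tracks.csv")       # frame-wise kinematics
    meta = pd.read_csv(f"{prefix}_tracksMeta.csv")     # one row per road user
    return tracks.merge(meta[["trackId", "class"]], on="trackId", how="left")

df = load_recording("data/00")
df["speed"] = np.hypot(df["xVelocity"], df["yVelocity"])          # per-frame speed
df["accel"] = np.hypot(df["xAcceleration"], df["yAcceleration"])  # per-frame acceleration

# Per-driver speed/acceleration traces for the subsequent feature extraction:
traces = {tid: (g["speed"].to_numpy(), g["accel"].to_numpy())
          for tid, g in df[df["class"] == "car"].groupby("trackId")}
```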
\begin{table} \begin{tabular}{l l l} \hline Volatility & Description & Equation \\ Measure & & \\ \hline \(\mathbf{DV_{1}}\) & Standard deviation of speed & \(\sqrt{\frac{\sum_{k=1}^{N}(A_{Dim_{g}}-\overline{A_{Dim_{g}}})^{2}}{N}}\) \\ \(\mathbf{DV_{2}}\) & Standard deviation of longitudinal deceleration or acceleration & \(\sqrt{\frac{\sum_{k=1}^{N}(A_{Dim_{g}}-\overline{A_{Dim_{g}}})^{2}}{N}}\) \\ \(\mathbf{DV_{3}}\) & Coefficient of variation of speed & \(100\times\frac{\sum_{k=1}^{N}(A_{Dim}-\overline{A_{Dim}})^{2}}{N}\) \\ \(\mathbf{DV_{4}}\) & Coefficient of variation of longitudinal acceleration & \(100\times\frac{\sum_{k=1}^{N}(A_{Dim_{g}}-A_{Dim})^{2}}{N}\) \\ \(\mathbf{DV_{5}}\) & Coefficient of variation of longitudinal deceleration & \(100\times\frac{\sum_{k=1}^{N}(A_{Dim_{g}}-\overline{A_{Dim}})^{2}}{N}\) \\ \(\mathbf{DV_{6}}\) & Mean absolute deviation of speed & \(\frac{\sum_{k=1}^{N}(A_{Dim}-\overline{A_{Dim}})}{N}\) \\ \(\mathbf{DV_{7}}\) & Mean absolute deviation of longitudinal acceleration & \(\frac{\sum_{k=1}^{N}(A_{Dim_{g}}-\overline{A_{Dim}})}{N}\) \\ \(\mathbf{DV_{8}}\) & Quantile coefficient of variation of normalised speed & \(100\times\frac{\sum_{k=1}^{N}(A_{Dim_{g}}-A_{Dim})}{N}\), where \(Q_{1}\) and \(Q_{3}\) are the sample \\ \(\mathbf{DV_{9}}\) & Quantile coefficient of variation of longitudinal acceleration & \(25^{th}\) and \(75^{th}\) percentiles. \\ \(\mathbf{DV_{10}}\) & Quantile coefficient of variation of longitudinal deceleration & \(100\times\frac{\overline{A_{Dim_{g}}-A_{Dim_{g}}}}{\overline{A_{Dim_{g}}+A_{Dim _{g}}}}\) \\ \(\mathbf{DV_{11}}\) & Quantile coefficient of variation of longitudinal deceleration & \(100\times\frac{\overline{A_{Dim_{g}}-A_{Dim_{g}}}}{Q_{Dim_{g}+A_{Dim_{g}}}+Q_{ Dim_{g}}}\) \\ \(\mathbf{DV_{12}}\) & Percentage of time the mean normalised speed exceeds the mean plus two standard deviations & \(100\times\frac{\sum_{k=1}^{N}(A_{Dim_{g}}+2^{th})}{N},\alpha=DV_{1}\) \\ \(\mathbf{DV_{12}}\) & Percentage of time the mean of longitudinal acceleration exceeds the mean plus two standard deviations & \(100\times\frac{\sum_{k=1}^{N}(A_{Dim_{g}}+2^{th})}{N},\alpha=DV_{2}\) \\ \hline \end{tabular} \end{table} Table 1: Volatility measures used as inputs to the unsupervised machine learning algorithm. K-means algorithm, which was used in this study, is a popular unsupervised machine learning algorithm used for clustering and data partitioning. The algorithm partitions a dataset into K different clusters based on the similarity between data points. The number of clusters, K, is specified beforehand by the user. The algorithm iteratively assigns data points to the nearest cluster based on the Euclidean distance between the data point and the centroid of the cluster. The centroid is recalculated after each iteration based on the mean value of all the data points in the cluster. The algorithm continues to iterate until there is no significant change in the assignment of data points to clusters. The K-means algorithm is widely used in various applications, including customer segmentation, image compression, and anomaly detection. ## 4 Analysis and Results K-means algorithm was used to cluster driving behavior based on the volatility measures by finding their centroid points. A centroid point is the average of all the data points in the cluster. By iteratively assessing the Euclidean distance between each point in the dataset, each one can be assigned to a cluster. 
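As a compact illustration of the update loop just described (random initial centroids, nearest-centroid assignment by Euclidean distance, centroids recomputed as cluster means, iteration until the assignments stabilize), here is a from-scratch sketch; it is purely illustrative, and in practice a library implementation would be used.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal K-means: X is an (n_samples, n_features) float array, k the cluster count."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()  # random initial centroids
    labels = None
    for _ in range(n_iter):
        # Assign each point to the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # assignments stopped changing
        labels = new_labels
        # Recompute each centroid as the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```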
The centroid points are initially assigned randomly and will change each time as the process is carried out. K-means is commonly used in cluster analysis and has been proven to be useful in such cases [16], [17], [19]. The Elbow method is usually used to find the optimal number of clusters for the K-means algorithm using the thirteen volatility measures. The optimal number of clusters can be chosen as two or three clusters as it produces a relatively low total distortion and can be physically interpreted. We tested both cases to compare their results. Each cluster was labeled as 1, 2, or 3 and each of them indicates a classified driving behavior. Determining which driving behavior is assigned to a cluster was based on the mean values of classification features. After performing K-means using all possible features for \(k=2\) and \(k=3\), results are presented in Table 2. \begin{table} \begin{tabular}{l l l l l l} & \multicolumn{2}{c}{\(k=3\)} & \multicolumn{2}{c}{\(k=2\)} \\ \cline{2-6} Volatility & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 1 & Cluster 2 \\ & (Conservative) & (Normal) & (Aggressive) & (Conservative) & (Normal) \\ \hline \(DV_{1}\) & 3.03 & 2.92 & 5.76 & 2.92 & 3.03 \\ \(DV_{2}\) & 0.74 & 0.73 & 0.63 & 0.73 & 0.74 \\ \(DV_{3}\) & 116.80 & -103.26 & -6152.43 & -107.88 & 116.80 \\ \(DV_{4}\) & 71.82 & 69.06 & 75.97 & 69.07 & 71.82 \\ \(DV_{5}\) & -68.78 & -67.45 & -79.26 & -67.46 & -68.78 \\ \(DV_{6}\) & 2.64 & 2.54 & 4.95 & 2.54 & 2.65 \\ \(DV_{7}\) & 0.408 & 0.38 & 0.44 & 0.38 & 0.41 \\ \(DV_{8}\) & 93.56 & -83.72 & -222.80 & -83.82 & 93.56 \\ \(DV_{9}\) & 54.03 & 50.89 & 65.79 & 50.89 & 54.03 \\ \(DV_{10}\) & -53.44 & -52.75 & -41.83 & -52.74 & -53.44 \\ \(DV_{11}\) & 361.17 & -98.33 & 243.79 & -98.06 & 361.72 \\ \(DV_{12}\) & 52.33 & 51.53 & 51.65 & 51.53 & 52.33 \\ \(DV_{13}\) & -10.36 & -10.49 & -8.69 & -10.48 & -10.36 \\ \hline Sample Size & 6967 & 6535 & 5 & 6967 & 6540 \\ \end{tabular} \end{table} Table 2: The scaled cluster centres at roundabouts for \(k=2\) and \(k=3\). According to the clustering results, we found that most of the drivers proceeding through a roundabout can be mostly classified into two driving styles: conservative and normal. This is due to many factors. First, it is usually drivers' responsibility when approaching a roundabout to yield to the traffic already in the roundabout (or pedestrians and bicyclists if there is a crosswalk or a bike lane) and only merge when there is a safe gap in the traffic to do so. Thus, roundabouts are generally considered to reduce traffic crashes, because traffic speeds in roundabouts are relatively lower than in other signalized and unsignalized intersections. It is also considered that there are less conflict points in roundabouts than any other road infrastructure. The other goal of this study is to further investigate the behavior of the drivers that interacted with a VRU (i.e., a pedestrian or a bike). There were 113 VRUs and 13,507 vehicles. Out of those, about 3,681 drivers have interacted with a pedestrian or a bike while proceeding through the roundabouts. We found the interaction between the VRUs and drivers of the other vehicles near the roundabouts by matching the position during a specified interval of time. Results of clustering drivers that interacted with VRUs compared with drivers that did not interact with VRUs is shown in Table 3. 
Results showed that the percentage of conservative drivers among those who interacted with VRUs (about 77.21%) was significantly higher than the corresponding percentage among drivers who did not interact with VRUs, and higher than the percentage of conservative drivers overall. We also found that most of the drivers identified as aggressive (about 4 out of 5) were among those who interacted with VRUs as they proceeded through the roundabouts. This means that although drivers tend to slow down as they approach roundabouts, they tend to behave abnormally (conservatively or aggressively) as they interact with VRUs on the road. This raises a concern, as conservative behavior might also increase the risk of crashes, especially rear-end ones. Comparing the effect of the surrounding environment and of drivers' interactions with other modes of transportation is crucial for policymakers to determine effective and suitable safety countermeasures at multimodal intersections. The interaction of vehicles and pedestrians at roundabouts can have a significant impact on traffic flow. Pedestrians and cyclists are vulnerable road users who require special attention from drivers, especially at roundabouts where the flow of traffic can be complex and unpredictable. Drivers need to be aware of the presence of pedestrians and cyclists and yield to them as necessary. The presence of pedestrians and cyclists at roundabouts can cause delays and disruptions to the flow of traffic. Pedestrians may cross the roundabout at unmarked or marked crosswalks, causing drivers to slow down or stop. Cyclists may also use the roundabout, and drivers need to be aware of their movements and adjust their speed accordingly. In addition, the interaction between pedestrians and cyclists can also impact traffic flow, as they may cross each other's paths and cause further delays. Efforts to improve the interaction of vehicles and pedestrians at roundabouts can help to improve traffic flow. For example, providing marked crosswalks and bicycle lanes can help to better define the paths of pedestrians and cyclists, reducing the likelihood of conflicts with vehicles. Improved signage and education for drivers can also help to increase awareness of the presence of pedestrians and cyclists at roundabouts, reducing the likelihood of accidents and delays. Moreover, the interaction of vehicles and pedestrians can impact driving behavior in a number of ways. When drivers encounter pedestrians, they may need to slow down or stop, which can lead to changes in their speed and acceleration. This can affect the flow of traffic and cause congestion, particularly in areas with heavy pedestrian traffic. Additionally, drivers may need to be more attentive to their surroundings when pedestrians are present, which can lead to changes in their driving behavior such as increased lane changes, braking, or steering.
\begin{table}
\begin{tabular}{c c c c c c c}
Driving & \multicolumn{2}{c}{All drivers} & \multicolumn{2}{c}{Drivers with no interaction} & \multicolumn{2}{c}{Drivers with interaction} \\
\cline{2-7}
Style & Number & Percentage & Number & Percentage & Number & Percentage \\
\hline
Conservative & 6967 & 51.58\% & 4125 & 42.06\% & 2842 & 77.21\% \\
Normal & 6535 & 48.38\% & 5692 & 57.93\% & 843 & 22.68\% \\
Aggressive & 5 & 0.04\% & 1 & 0.01\% & 4 & 0.11\% \\
\hline
Total & 13507 & & 9826 & & 3681 & \\
\hline
\end{tabular}
\end{table} Table 3: Results of clustering all drivers, drivers with interaction, and drivers with no interaction. Pedestrians can also impact the behavior of other road users, such as bicyclists or motorcyclists, who may need to take extra precautions to avoid collisions with pedestrians. Overall, the interaction of vehicles and pedestrians can create complex and dynamic traffic scenarios that require careful attention and awareness from all road users. ## 5 Conclusion The issue of identifying driving behavior has emerged because the driving habits of drivers have a significant impact on road safety. Roundabouts are particularly intriguing due to the high level of user engagement that a driver or an automated vehicle must consider while proceeding through them. Using data-driven unsupervised machine learning to categorize driving behavior at roundabouts, we extracted volatility measures to classify driving behavior and investigated the effect of interactions between drivers and pedestrians or cyclists. We found that most drivers proceeding through a roundabout can be classified into two driving styles: conservative and normal. Roundabouts are generally considered to reduce traffic crashes, because traffic speeds in roundabouts are relatively lower than in other signalized and unsignalized intersections. It is also considered that there are fewer conflict points in roundabouts than in other road infrastructure. Results also showed that the percentage of conservative drivers among those who interacted with VRUs (about 77.21%) was significantly higher than the corresponding percentage among drivers who did not interact with VRUs, and higher than the percentage of conservative drivers overall. Drivers tend to behave abnormally as they interact with VRUs on the road. This raises a concern, as conservative behavior might also increase the risk of crashes, especially rear-end ones. Driving behavior is considered one of the most critical factors in traffic safety studies, and one with a high level of uncertainty. Results of this study could be helpful in improving the safety of roads by allowing policymakers to determine effective and suitable safety countermeasures. Results will also be beneficial for Advanced Driver Assistance Systems (ADAS) as the technology is deployed in a mixed traffic environment.
2309.10163
A faster direct sampling algorithm for equilateral closed polygons and the probability of knotting
We present a faster direct sampling algorithm for random equilateral closed polygons in three-dimensional space. This method improves on the moment polytope sampling algorithm of Cantarella, Duplantier, Shonkwiler, and Uehara (2016) and has (expected) time per sample quadratic in the number of edges in the polygon. We use our new sampling method and a new code for computing invariants based on the Alexander polynomial to investigate the probability of finding unknots among equilateral closed polygons.
Jason Cantarella, Henrik Schumacher, Clayton Shonkwiler
2023-09-18T21:20:42Z
http://arxiv.org/abs/2309.10163v3
# A faster direct sampling algorithm for equilateral closed polygons ###### Abstract We present a faster direct sampling algorithm for random equilateral closed polygons in three-dimensional space. This method improves on the moment polytope sampling algorithm of Cantarella, Duplantier, Shonkwiler, and Uehara [4] and has (expected) time per sample quadratic in the number of edges in the polygon. Equilateral polygons in \(\mathbb{R}^{3}\)--that is, polygonal walks in 3-space forming closed loops and consisting of unit-length steps--provide a standard, if highly simplified, model of ring polymers under "\(\theta\)-conditions" (see, e.g., the survey [22], which gives a number of applications of these models in physics and biology). The closure condition imposes subtle global correlations between edge directions, which means it is not obvious how to generate random equilateral polygons. Indeed, algorithms have been proposed for at least 4 decades [18; 1; 5; 7; 8; 9; 13; 20; 25; 26], though most are numerically unstable or have not been proved to sample from the correct probability distribution. In previous work [4], we introduced the _action-angle method_, which is a numerically stable and provably correct algorithm for generating random equilateral \(n\)-gons in \(\mathbb{R}^{3}\) based on rejection sampling the hypercube. The action-angle method is the fastest extant method: it produces samples in expected time \(\Theta(n^{5/2})\). The purpose of this paper is to give a \(\sqrt{n}\) speedup for the action-angle method, yielding an algorithm which produces random equilateral \(n\)-gons in expected time \(\Theta(n^{2})\). It is based on rejection sampling the same subset of the hypercube as in the action-angle method; the speedup comes from progressively checking the defining inequalities as we generate coordinates rather than checking all the inequalities in a batch. Hence, we call this algorithm the _progressive action-angle method_. The main challenge is to prove that the progressive action-angle method really gives a \(\sqrt{n}\) speedup over the action-angle method, which we do in Proposition 4, Proposition 5, and Proposition 6 by reducing to the computation of the volume of a certain convex polytope. We begin by establishing notation. For an \(n\)-gon in \(\mathbb{R}^{3}\), let \(v_{1},\ldots,v_{n}\in\mathbb{R}^{3}\) be the coordinates of its vertices, and let \(e_{1},\ldots,e_{n}\) be the edge vectors, meaning that \(e_{i}=v_{i+1}-v_{i}\) for \(i=1,\ldots,n-1\) and \(e_{n}=v_{1}-v_{n}\). We will assume throughout that our polygons are equilateral, so that \(|e_{i}|=1\) for all \(i\); equivalently, \(e_{1},\ldots,e_{n}\in S^{2}\), the unit sphere in \(\mathbb{R}^{3}\). The space \(\mathrm{Pol}(n)\) consists of sets of edge vectors in \((S^{2})^{n}\) which obey the closure condition \(\sum_{i=1}^{n}e_{i}=0\). One can show that the set \[\mathrm{Pol}(n)^{\times}\coloneqq\{\vec{e}\in(S^{2})^{n}\,:\,\sum_{i=1}^{n}e_ {i}=0\text{ and for all }i\neq j\text{: }e_{i}\neq e_{j}\,\}\] is a \((2n-3)\)-dimensional submanifold of \((S^{2})^{n}\) and that the \((2n-3)\)-dimensional Hausdorff measure of \(\operatorname{Pol}(n)\setminus\operatorname{Pol}(n)^{\times}\) vanishes. In this sense \(\operatorname{Pol}(n)\) is almost everywhere a submanifold of \((S^{2})^{n}\). We may give it the submanifold metric and corresponding volume; it is equivalent to take the \((2n-3)\)-dimensional Hausdorff measure on \(\operatorname{Pol}(n)\) with respect to the metric on \((S^{2})^{n}\). 
Since we are interested in shapes of polygons, we focus on the quotient space \(\widehat{\operatorname{Pol}}(n)=\operatorname{Pol}(n)/\operatorname{SO}(3)\). This space has a Riemannian metric--defined by the condition that the quotient map \(\operatorname{Pol}(n)\to\widehat{\operatorname{Pol}}(n)\) is a Riemannian submersion--and hence a natural probability measure after normalizing the Riemannian volume form. Now we introduce some new coordinates on this space. Connecting the vertices \(v_{3},\dots,v_{n-1}\) to \(v_{1}\), as in Figure 1 (far left), produces a collection of \(n-3\) triangles. The shape of the triangulated surface determined by these triangles (and hence also its boundary, which is the \(n\)-gon) is completely determined by the lengths \(d_{i}\) of the diagonals joining \(v_{1}\) and \(v_{i+2}\) and the dihedral angles between triangles meeting at each diagonal. Hence, we can reconstruct the surface (and hence the polygon) up to orientation from the data \(d_{1},\dots,d_{n-3},\theta_{1},\dots,\theta_{n-3}\), and so these give a system of coordinates for \(\widehat{\operatorname{Pol}}(n)\). Indeed, as we have shown [5], these coordinates are natural from the symplectic geometry point of view: in that context, they are called _action-angle coordinates_. Note that, while the dihedral angles can be chosen completely independently, the diagonal lengths cannot: they must obey the system of triangle inequalities \[0\leq d_{1}\leq 2\qquad\begin{array}{c}1\leq d_{i}+d_{i+1}\\ -1\leq d_{i+1}-d_{i}\leq 1\end{array}\qquad 0\leq d_{n-3}\leq 2. \tag{1}\] Let \(\mathcal{P}_{n}\subset[-1,1]^{n-3}\) be the polytope defined by the inequalities (1). If \(T^{n-3}=(S^{1})^{n-3}\) is the \((n-3)\)-dimensional torus realized as the product of unit circles, then the action-angle coordinates are defined on \(\mathcal{P}_{n-3}\times T^{n-3}\), and we have previously shown that the standard probability measure on this space--that is, the one coming from the product of Lebesgue measure on \(\mathcal{P}_{n}\) and the standard product measure on \(T^{n-3}\)--is measure-theoretically equivalent to \(\widehat{\operatorname{Pol}}(n)\): Figure 1: Constructing an equilateral pentagon from diagonals and dihedrals. The far left shows the fan triangulation of an abstract pentagon. Given diagonal lengths \(d_{1}\) and \(d_{2}\) of the pentagon which obey the triangle inequalities, build the three triangles in the triangulation from their side lengths (middle left). Given dihedral angles \(\theta_{1}\) and \(\theta_{2}\), embed these triangles as a piecewise-linear surface in space (middle right). The far right shows the final space polygon, which is the (solid) boundary of this triangulated surface. **Theorem 1** (Cantarella-Shonkwiler [5]).: _The reconstruction map \(\alpha:\mathcal{P}_{n}\times T^{n-3}\to\widehat{\mathrm{Pol}}(n)\) defining action-angle coordinates (i.e., the procedure illustrated in Figure 1) is measure-preserving._ Therefore, to sample points in \(\widehat{\mathrm{Pol}}(n)\) (that is, equilateral \(n\)-gons), it suffices to sample \(\vec{d}\) from Lebesgue measure on \(\mathcal{P}_{n}\) and \(\vec{\theta}\) uniformly from \(T^{n-3}\). Of course, the only challenge is to produce the sample \(\vec{d}\in\mathcal{P}_{n}\). In [4], we showed how to do this efficiently. The key observation is that the consecutive differences \(s_{i}\coloneqq d_{i+1}-d_{i}\) lie in the hypercube \([-1,1]^{n-3}\). 
Therefore, we can generate points in \(\mathcal{P}_{n}\) by rejection sampling: generate proposed differences \((s_{0},\ldots,s_{n-4})\) uniformly from \([-1,1]^{n-3}\), and simply check whether the proposed diagonal lengths \((d_{1},\ldots,d_{n-3})\) given by \(d_{i+1}=d_{i}+s_{i}\) with \(d_{0}=|v_{2}-v_{1}|=1\) satisfy (1). This is surprisingly efficient: **Theorem 2** (Cantarella-Duplantier-Shonkwiler-Uehara [4]).: _The probability that a random point \((s_{0},\ldots,s_{n-4})\in[-1,1]^{n-3}\) produces a valid collection of diagonal lengths \((d_{1},\ldots,d_{n-3})\in\mathcal{P}_{n}\) is asymptotically equivalent to \(\frac{6\sqrt{6}}{\sqrt{\pi}}\frac{1}{n^{3/2}}\) as \(n\to\infty\)._ In the above and throughout the rest of the paper, we say that \(g(n)\) and \(h(n)\) are _asymptotically equivalent_, denoted \(g(n)\sim h(n)\), if \(\lim_{n\to\infty}\frac{g(n)}{h(n)}=1\). Since the time it takes to generate points in \([-1,1]^{n-3}\) is linear in \(n\), rejection sampling the hypercube yields a valid point in \(\mathcal{P}_{n}\) in expected time \(\Theta(n^{5/2})\). The steps of generating dihedral angles and assembling the \(n\)-gon from \((d_{1},\ldots,d_{n-3})\) and \((\theta_{1},\ldots,\theta_{n-3})\) do not affect the time bound since they are both linear in \(n\). Therefore, this gives a numerically stable algorithm for generating random equilateral \(n\)-gons in expected time \(\Theta(n^{5/2})\) which we called the _action-angle method_. Recently, one of us (Schumacher) created a new implementation of the action-angle sampler which, in practice, seemed to scale quadratically in \(n\) rather than as \(n^{5/2}\). While we were initially worried that there was a mistake in our analysis of the action-angle method, it turns out that some clever programming led to a \(\sqrt{n}\) speedup: our goal now is to explain and justify this. By fixing \(d_{0}=1\) and generating proposed consecutive differences \(s_{i}\) uniformly from \([-1,1]^{n-3}\), the inequalities \(0\leq d_{1}\leq 2\) and \(-1\leq d_{i+1}-d_{i}\leq 1\) for \(i=1,\ldots,n-4\) are automatically satisfied. Of course, the final inequality \(0\leq d_{n-3}\leq 2\) can only be checked at the very end, but the inequalities \(1\leq d_{i}+d_{i+1}\) can be checked one at a time as each \(s_{i}\) is generated, and we can abort and start over as soon as one of these inequalities fails. Naively, one might expect to have to check, on average, some constant fraction of the inequalities. This would speed up the algorithm by a constant factor, but not change the complexity bound. However, as we will show below, the expected number of coordinates that get generated before failure is actually \(\Theta(\sqrt{n})\), yielding an overall time bound of \(\Theta(n^{2})\). Algorithm 1 summarizes our algorithm, which we call the _progressive action-angle method_. A reference C implementation of this algorithm is included in plCurve[2], where it is now the default algorithm for producing random equilateral polygons in \(\mathbb{R}^{3}\). Let \(\mathcal{I}(n)\) be the expected number of iterations in the inner loop of the progressive action-angle method: that is, \(\mathcal{I}(n)\) is the number of coordinates \(s_{i}\) that we expect to generate before failing one of the \(d_{i}+d_{i+1}\geq 1\) inequalities. Since we know from Theorem 2 that the overall acceptance probability is \(\sim\frac{6\sqrt{6}}{\sqrt{\pi}}\frac{1}{n^{3/2}}\), the expected number of iterations of the outermost loop is \(\Theta(n^{3/2})\). 
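A minimal sketch of this inner rejection loop (our own Python rendering for illustration; the reference implementation is the C code in plCurve mentioned above) makes the early-abort structure explicit.

```python
import numpy as np

def sample_diagonals(n, rng=None):
    """Sample diagonal lengths (d_1, ..., d_{n-3}) of a random closed
    equilateral n-gon by progressive rejection, as described above.
    Variable names are ours, not those of the plCurve implementation."""
    rng = np.random.default_rng() if rng is None else rng
    while True:
        d = np.empty(n - 2)
        d[0] = 1.0                      # d_0 = |v_2 - v_1| = 1
        accepted = True
        for i in range(n - 3):
            d[i + 1] = d[i] + rng.uniform(-1.0, 1.0)
            if d[i] + d[i + 1] < 1.0:   # progressive check; abort and restart
                accepted = False
                break
        if accepted and d[-1] <= 2.0:   # final inequality 0 <= d_{n-3} <= 2
            return d[1:]

# Dihedral angles are then drawn uniformly from [0, 2*pi)^(n-3) and the polygon
# is rebuilt from the diagonals and dihedrals as in Figure 1.
diagonals = sample_diagonals(50)
```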
Multiplying the time per iteration by the number of iterations yields \(\Theta(n^{3/2}\mathcal{I}(n))\) for the expected time to produce a valid list of diagonals. The postprocessing steps of generating dihedral angles and assembling the polygon are both linear in \(n\), so do not affect the overall time bound. Hence, our goal is to understand how \(\mathcal{I}(n)\) scales. **Definition 3**.: Let \(p(k)\) be the probability that the proposed diagonal lengths generated by a random point in \(\vec{s}\in[-1,1]^{n-3}\) satisfy each of the first \(k\) inequalities \(d_{0}+d_{1}\geq 1,\ldots,d_{k-1}+d_{k}\geq 1\). In other words, \(p(k)\) (thought of as a function of \(k\)) is the survival function for the distribution of the index of the first inequality which fails. For ease of notation, we also declare \(p(0)\coloneqq 1\). By a standard integration by parts argument (see, for example, [11, Exercise 1.7.2]), the expected value of a non-negative random variate is the integral of the survival function, so we see that \[\mathcal{I}(n)=\sum_{k=0}^{n-3}p(k). \tag{2}\] Since our goal is to understand the asymptotics of \(\mathcal{I}(n)\), we will proceed in the following steps: * Find an exact expression for \(p(k)\) (Proposition 4). * Find an asymptotic approximation for \(p(k)\) (Proposition 5). * Plug this into (2) to get the desired asymptotic expression for \(\mathcal{I}(n)\) (Proposition 6). This will give us the desired complexity bound \(\Theta(n^{3/2}\mathcal{I}(n))\) for the progressive action-angle method (Theorem 7). **Proposition 4**.: _For any nonnegative integer \(k\),_ \[p(k)=\frac{2}{\pi}\int_{0}^{\infty}\operatorname{sinc}^{k+1}(t)\,\mathrm{d}t.\] Proof.: For each \(k\), let \(\mathcal{S}_{k}\subset[-1,1]^{k}\) be the subset of points \(\vec{s}\) satisfying the in the definition of \(p(k)\). As a simple calculation confirms, the affine transformation from \(\vec{s}\) to \(\vec{d}\) given by \(d_{i}=1+\sum_{j=0}^{i-1}s_{j}\) is volume-preserving. Let \(\mathcal{Q}_{k}\) be the image of \(\mathcal{S}_{k}\) under this map; i.e., \(Q_{k}\) is the polytope of \((d_{1},\ldots,d_{k})\) satisfying the inequalities \(d_{0}+d_{1}\geq 1,\ldots,d_{k-1}+d_{k}\geq 1\) (again, recall that \(d_{0}=1\), which implies in particular that all the \(d_{i}\) are nonnegative). Since \(p(k)=\frac{\operatorname{Vol}(\mathcal{S}_{k})}{\operatorname{Vol}([-1,1]^{k} )}=\frac{\operatorname{Vol}(\mathcal{S}_{k})}{2^{k}}\), to prove the proposition it suffices to prove that \[\operatorname{Vol}(\mathcal{Q}_{k})=2^{k}\frac{2}{\pi}\int_{0}^{\infty} \operatorname{sinc}^{k+1}(t)\,\mathrm{d}t.\] Notice that the defining inequalities for \(\mathcal{Q}_{k}\) are precisely the inequalities (1) _except_ the last inequality \(0\leq d_{k}\leq 2\). This suggests that these inequalities may just be the diagonal lengths of an equilateral polygonal path in \(\mathbb{R}^{3}\) which is not required to close up. More precisely, let \[\widehat{\operatorname{APol}}(k)\coloneqq\Big{\{}(e_{1},\ldots,e_{k+1})\in(S ^{2})^{k+1}:z_{1}+\cdots+z_{k+1}=0\Big{\}}/\operatorname{SO}(2),\] where \(e_{i}=(x_{i},y_{i},z_{i})\), so that the defining condition says that the path starts and ends in the \(x\)-\(y\)-plane, and \(\operatorname{SO}(2)\) acts by simultaneously rotating all edges around the \(z\)-axis; this is the diagonal subgroup of the \(T^{k+1}=\operatorname{SO}(2)^{k+1}\) action on \((S^{2})^{k+1}\) which rotates edges around the \(z\)-axis. 
\(\widehat{\operatorname{APol}}(k)\) is the space of _abelian polygons_ introduced by Hausmann and Knutson [12], and we will see that it admits two different effective, Hamiltonian \(T^{k}\) actions, as shown in Figure 2. The first is the residual \(T^{k+1}/\operatorname{SO}(2)\simeq T^{k}\) action above. The moment map for this action simply records the \(z\)-coordinates of the edges; since the defining equation \(z_{1}+\cdots+z_{k+1}=0\) implies that \(z_{k+1}\) is determined by the remaining \(z_{i}\)'s, the last coordinate can be dropped and we can think of the moment map as recording the vector \((z_{1},\ldots,z_{k})\). Let \(\mathcal{H}_{k}\) be the image of this map; that is, the moment polytope for this torus action. Of course, \(-1\leq z_{i}\leq 1\) for all \(i\), and, since \(-1\leq z_{k+1}\leq 1\), we see that the defining inequalities of \(\mathcal{H}_{k}\) are \[-1\leq z_{i}\leq 1\text{ for all }i=1,\ldots,k\quad\text{and}\quad-1\leq z_{1}+ \cdots+z_{k}\leq 1.\] In other words, \(\mathcal{H}_{k}\) is the central slab of the hypercube \([-1,1]^{k}\) of points whose coordinates sum to between \(-1\) and \(1\). This is a well-studied polytope, and its volume has been known at least since Polya [23] to be \[\operatorname{Vol}(\mathcal{H}_{k})=2^{k}\frac{2}{\pi}\int_{0}^{\infty} \operatorname{sinc}^{k+1}(t)\,\mathrm{d}t,\] where \(\operatorname{sinc}(t)=\frac{\sin(t)}{t}\) for \(t\neq 0\) and \(\operatorname{sinc}(0)=1\) is the \(\operatorname{sinc}\) function (see also Borwein, Borwein, and Mares [3] for generalizations of the above formula). On the other hand, we get a \(T^{k}\) action on \(\widehat{\operatorname{APol}}(k)\) which is analogous to the bending flows on \(\widehat{\operatorname{Pol}}(n)\) described above and in more detail in [5]. Specifically, the \(i\)-th \(\operatorname{SO}(2)\) factor acts by rotating the first \(i+1\) edges of the polygonal arm around the \(i\)-th diagonal, which is the axis through \(v_{1}\) and \(v_{i+1}\). Just as in the case of \(\widehat{\operatorname{Pol}}(n)\), the moment map records the lengths \(d_{1},\dots,d_{k}\) of the diagonals, and the image of the moment map is precisely \(\mathcal{Q}_{k}\). But now we've realized \(\widehat{\operatorname{APol}}(k)\) as a toric symplectic manifold in \(2\) ways, with moment polytopes \(\mathcal{H}_{k}\) and \(\mathcal{Q}_{k}\). Since the Duistermaat-Heckman theorem [10] implies that the volume of \(\widehat{\operatorname{APol}}(k)\) must be the product of the volume \((2\pi)^{k}\) of the torus \(T^{k}\) and the volume of either of its moment polytopes, it follows that \[\operatorname{Vol}(\mathcal{Q}_{k})=\operatorname{Vol}(\mathcal{H}_{k})=2^{k} \frac{2}{\pi}\int_{0}^{\infty}\operatorname{sinc}^{k+1}(t)\,\mathrm{d}t,\] as desired. Figure 2: Here we see both torus actions on the space \(\widehat{\operatorname{APol}}(2)\). On the left, we may rotate each of the three edges (independently) around the \(z\)-axis, sweeping out three cones. Then we identify configurations which are the same under the diagonal circle action rotating the entire configuration around the \(z\)-axis (indicated by the dark circle in the \(xy\)-plane). On the right, we may rotate the first two edges around the diagonal joining vertices \(v_{1}\) and \(v_{3}\) or rotate the entire polygon around the diagonal joining \(v_{1}\) and \(v_{4}\). In each case, this is a Hamiltonian \(2\)-torus action on \(\widehat{\operatorname{APol}}(2)\). We have the following estimate for \(p(k)\), which goes back at least to Laplace [16, pp. 
172-173] (see also [17]): **Proposition 5**.: \(p(k)=\sqrt{\frac{6}{\pi k}}+O\left(\frac{1}{k^{3/2}}\right).\)__ Now we can easily get the desired asymptotic expression for \(\mathcal{I}(n)\): **Proposition 6**.: \(\mathcal{I}(n)\sim\sqrt{\frac{24n}{\pi}}.\)__ Proof.: By Proposition 5, \(p(k)=\sqrt{\frac{6}{\pi k}}+O\left(\frac{1}{k^{3/2}}\right)\), so, from (2), \[\mathcal{I}(n)=\sum_{k=0}^{n-3}p(k)=1+\sum_{k=1}^{n-3}\left[\sqrt{\frac{6}{\pi k }}+O\left(\frac{1}{k^{3/2}}\right)\right]=1+\sum_{k=1}^{n-3}\sqrt{\frac{6}{\pi k }}+\sum_{k=1}^{n-3}O\left(\frac{1}{k^{3/2}}\right).\] The first sum is asymptotic to \(\sqrt{\frac{24n}{\pi}}\) by the integral test. The rest of this expression has order \(O(1)\) since the second sum is a partial sum of a convergent series, and the result follows. Therefore, \(\Theta(n^{3/2}\mathcal{I}(n))=\Theta(n^{2})\) and we have proved a sharp complexity bound on the progressive action-angle method: **Theorem 7**.: _The progressive action-angle method generates uniform random samples of closed, equilateral \(n\)-gons in expected time \(\Theta(n^{2})\)._ This agrees with the behavior we see in practice; see Figure 3. Figure 3: This plot compares average time per sample for random equilateral \(n\)-gons with \(n=2^{2},2^{3},\ldots,2^{12}\) using the action-angle method (AAM) and the progressive action-angle method (PAAM). The time needed to generate samples scales as predicted by Theorem 2 and Theorem 7. ## Discussion Since it requires listing \(n\) edges (or vertices), the time needed to generate random equilateral \(n\)-gons must be at least linear in \(n\). Theorem 7 shows that the optimal bound is no worse than quadratic in \(n\). It would be interesting to see if even the quadratic bound can be improved. In the proof of Proposition 4, we showed that \(\operatorname{Vol}(\mathcal{Q}_{k})=\operatorname{Vol}(\mathcal{H}_{k})\) by showing that \(\mathcal{Q}_{k}\) and \(\mathcal{H}_{k}\) are moment polytopes for different toric structures on the manifold \(\widehat{\operatorname{APol}}(k)\). In fact, some additional analysis indicates that if one could generate polygons in \(\widehat{\operatorname{APol}}(n-3)\) directly, their diagonals would _automatically_ obey \(0\leq d_{1}\leq 2\), \(1\leq d_{i}+d_{i+1}\), and \(-1\leq d_{i+1}-d_{i}\leq 1\) for \(i\in 1,\ldots,n-2\), but still have probability only \(\sim\frac{6}{n}\) of satisfying the final inequality \(0\leq d_{n-3}\leq 2\) and lying in the moment polytope \(\mathcal{P}_{n}\). This (hypothetical) algorithm would be faster by a constant factor, but still quadratic in \(n\). This may indicate that a new idea is required, but it also seems plausible that the moment polytope \(\mathcal{P}_{n}\) could be transformed more cleverly to occupy a larger fraction of an even smaller polytope. We note that the equivalence \(\operatorname{Vol}(\mathcal{Q}_{k})=\operatorname{Vol}(\mathcal{H}_{k})\) seems potentially interesting as a statement in combinatorics. We could not find a more elementary proof, which leads us to ask: Are there other pairs of polytopes which can be identified as moment polytopes for the same toric symplectic manifold which are difficult to prove equivalent otherwise? As in the case of the action-angle method [24], the progressive action-angle method can easily be modified to give an algorithm for sampling so-called _unit-norm tight frames_ in \(\mathbb{C}^{2}\). 
Unit-norm tight frames in \(\mathbb{C}^{d}\) are of considerable interest for applications in signal processing, compressed sensing, and quantum information (see [6; 14; 15; 27] for introductions to this area). It would be very interesting to generalize the approach from this paper to the \(d>2\) setting. ###### Acknowledgements. We are very grateful for the On-Line Encyclopedia of Integer Sequences [21], without which progress on this paper would have been much slower. Thanks to Bertrand Duplantier for helping us think asymptotically and to the National Science Foundation (DMS-2107700 to Shonkwiler) and the Simons Foundation (#524120 to Cantarella, #709150 to Shonkwiler) for their support.
2308.00059
Iterative removal of sources to model the turbulent electromotive force
We describe a novel method to compute the components of dynamo tensors from direct magnetohydrodynamic (MHD) simulations. Our method relies upon an extension and generalisation of the standard H\"ogbom CLEAN algorithm widely used in radio astronomy to systematically remove the impact of the strongest beams onto the corresponding image. This generalisation, called the Iterative Removal of Sources (IROS) method, has been adopted here to model the turbulent electromotive force (EMF) in terms of the mean magnetic fields and currents. Analogous to the CLEAN algorithm, IROS treats the time series of the mean magnetic field and current as beams that convolve with the dynamo coefficients which are treated as (clean) images to produce the EMF time series (the dirty image). We apply this method to MHD simulations of galactic dynamos, to which we have previously employed other methods of computing dynamo coefficients such as the test-field method, the regression method, as well as local and non-local versions of the singular value decomposition (SVD) method. We show that our new method reliably recovers the dynamo coefficients from the MHD simulations. It also allows priors on the dynamo coefficients to be incorporated easily during the inversion, unlike in earlier methods. Moreover, using synthetic data, we demonstrate that it may serve as a viable post-processing tool in determining the dynamo coefficients, even when the power of additive noise to the EMF is twice as much the actual EMF.
Abhijit B. Bendre, Jennifer Schober, Prasun Dhang, Kandaswamy Subramanian
2023-07-31T18:29:33Z
http://arxiv.org/abs/2308.00059v2
# Iterative removal of sources to model the turbulent electromotive force ###### Abstract We describe a novel method to compute the components of dynamo tensors from direct magnetohydrodynamic (MHD) simulations. Our method relies upon an extension and generalisation of the standard Hogbom CLEAN algorithm widely used in radio astronomy to systematically remove the impact of the strongest beams onto the corresponding image. This generalisation, called the Iterative Removal of Sources (IROS) method, has been adopted here to model the turbulent electromotive force (EMF) in terms of the mean magnetic fields and currents. Analogous to the CLEAN algorithm, IROS treats the time series of the mean magnetic field and current as beams that convolve with the dynamo coefficients which are treated as (clean) images to produce the EMF time series (the dirty image). We apply this method to MHD simulations of galactic dynamos, to which we have previously employed other methods of computing dynamo coefficients such as the test-field method, the regression method, as well as local and non-local versions of the singular value decomposition (SVD) method. We show that our new method reliably recovers the dynamo coefficients from the MHD simulations. It also allows priors on the dynamo coefficients to be incorporated easily during the inversion, unlike in earlier methods. Moreover, using synthetic data, we demonstrate that it may serve as a viable post-processing tool in determining the dynamo coefficients, even when the power of additive noise to the EMF is twice as much the actual EMF. keywords: galaxies: magnetic fields - dynamo - ISM: magnetic fields - Magnetohydrodynamics (MHD) - methods: data analysis - turbulence ## 1 Introduction Magnetic fields with long-range regularity are present in various astrophysical systems from stars to galaxies. Observational and numerical evidence suggests that the most plausible mechanism for their generation is the large-scale dynamo mechanism (Moffatt, 1978; Beck and Wielebinski, 2013; Rudiger and Hollerbach, 2004; Brandenburg and Subramanian, 2005; Shukurov and Subramanian, 2021). In this process, the magnetic field with scales of regularity much larger than the scale at which turbulence is driven, is generated at the expense of kinetic energy, with the aid of both large-scale shear and helical turbulence. This mechanism can be characterised in terms of mean-field electrodynamics (Krause and Radeler, 1980), by first expressing the dynamical variables, velocity \(\mathbf{U}\) and magnetic field \(\mathbf{B}\), as the sums of their respective mean or large-scale components (\(\overline{\mathbf{U}}\) and \(\overline{\mathbf{B}}\)) and the fluctuating (or the small-scale) components (\(\mathbf{u}\) and \(\mathbf{b}\)). Evolution of the mean magnetic field then depends on the mean velocity and turbulent EMF, i.e. the cross correlation between \(\mathbf{u}\) and \(\mathbf{b}\), \(\overline{\mathcal{E}_{i}}=(\overline{\mathbf{u}\times\mathbf{b}})_{i}\). This EMF is then expressed in terms of the mean magnetic field itself and the system is closed, i.e. \[\overline{\mathcal{E}_{i}}=\alpha_{ij}\overline{B}_{j}-\eta_{ij}\left(\nabla \times\overline{\mathbf{B}}\right)_{j}, \tag{1}\] when mean fields are defined using horizontal averaging1 (see Eq. 2). Here \(\alpha_{ij}\) and \(\eta_{ij}\) are the coefficients of the dynamo tensors, which in general relate to the statistical properties of the background turbulence. 
In the high conductivity limit, these coefficients are proportional to the specific turbulence properties such as the fluctuating kinetic and magnetic energy densities and their helicity densities. Therefore in the context of various astrophysical systems, these dynamo coefficients dictate how statistical properties of the background turbulence impact the evolution of large-scale magnetic fields. Observational estimates of the dynamo coefficients often rely upon assumptions and phenomenological properties of the turbulence (Krause and Raedler, 1980; Shukurov and Subramanian, 2021). Equally, analytical estimates of the transport coefficients make simplifying assumptions that are often questionable (Cattaneo and Hughes, 1996). Therefore direct magnetohydrodynamic (MHD) simulations of such systems with realistic turbulence driving and wherein the \(\overline{\mathcal{E}}\) and \(\overline{\mathbf{B}}\) are self-consistently generated, provide a very useful tool in estimating the dynamo coefficients. Footnote 1: We also note that we have used a convention such that the repeated indices are summed over. Various methods to compute these dynamo coefficients from direct numerical simulations (DNS) have so far been suggested and tested. The plethora of methods reflects the fact that Eq. 1 represents an under-determined system and one that is not straightforward to invert. For example, Cattaneo & Hughes (1996) used the random magnetic field generated in helically driven turbulence with uniform imposed mean field to calculate the EMF and fit it against the imposed \(\overline{\mathbf{B}}\) to compute the \(\alpha_{ij}\) coefficients, while in Tobias & Cattaneo (2013) the authors used a method of determining the conductivity of solids from material science to determine the magnetic diffusivity. In Brandenburg & Sokoloff (2002) and Kowal et al. (2006), the authors developed an approach to computing the dynamo coefficients by fitting the various moments of the EMF and mean fields against their respective linear relations. This method is designed to handle the additive noise in the EMF and mean magnetic field data assuming that they are uncorrelated. A method with similar capability and an additional advantage of quantifying the auto-correlations between the dynamo coefficients, uses singular value decomposition (SVD), and was applied to several contexts. These include the stellar dynamo simulations in Racine et al. (2011) and Simard et al. (2016), the interstellar medium (ISM) simulations in Bendre et al. (2020), and the global accretion disc simulations in Dhang et al. (2020). The same method was extended to explore the non-local dependence of EMF on mean magnetic fields in Bendre & Subramanian (2022) and to compute the scale-dependent dynamo coefficients in the ISM simulations. As a more direct approach to invert Eq. 1, the kinematic test-field method was introduced by (Schrinner et al., 2005, 2007). In this method the additional test magnetic fields \(\overline{\mathbf{B}}_{T}\) are passively evolved along with the DNS such that they do not affect the turbulent velocity \(\overline{\mathbf{u}}\). The fluctuations (\(\mathbf{b}_{T}\)) generated through tangling of the test-fields by the turbulence, are recorded to further compute the additional EMF components, and so also the dynamo coefficients. 
The test-field method has the advantage of determining the dynamo coefficients at the length scale of the imposed test-fields, and it has been used in several different contexts, including helically driven turbulence, ISM turbulence, accretion disc turbulence, and even in simulations of solar and geo-dynamos (Brandenburg, 2005; Sur et al., 2007; Gressel et al., 2008; Kapyli et al., 2009; Bendre et al., 2015; Gressel & Pessan, 2015; Warneke et al., 2016). However, it often proves to be computationally expensive, as it relies upon integrating the additional induction equations (associated with each test-field component) along with the rest of the MHD equations. Pros and cons of these methods have also been discussed in the literature (Brandenburg, 2009, 2018; Shukurov & Subramanian, 2021). In this paper we introduce a novel method to determine the dynamo coefficients which is based on post-processing the data from turbulent dynamo DNS. For this we adopt a version of a deconvolution algorithm based on the Hogbom CLEAN method (Hogbom, 1974) called IROS (Iterative Removal Of Sources) (Hammersley et al., 1992) to mitigate some of the issues in previous methods. This method is then tested against mock data and also applied to determine dynamo coefficients from a DNS of ISM simulations, for which comparison can be made to the results from several of the other methods. This paper is structured as follows. In Section 2 we briefly describe the details of the ISM simulations, and the data which we use in the later sections to apply the IROS method. In Section 3.1 we describe the local version of the IROS method and a modification to it (mIROS) wherein we impose prior constraints on the dynamo coefficients. Further, in Section 4 we apply these methods to the data of ISM simulations, which is followed by Section 5, where we discuss the conclusions. The paper also has three appendices in which we describe, respectively, the application of the IROS method to invert the non-local representation of the EMF (Appendix A), the mIROS method applied to the mock data of EMF with additive noise (Appendix B), and a flowchart representation of the IROS method (Appendix C). ## 2 Simulations As a test case for our new method of determining dynamo coefficients from DNS, we use the data from direct MHD simulations of a fraction of a disc galaxy. Details of these simulation setups and their outcomes are described in Bendre et al. (2015) and Bendre (2016). We describe them here briefly, for completeness. These were local shearing Cartesian box simulations of a vertically stratified distribution of the ISM, performed with the NIRVANA MHD code (Ziegler, 2008). The computational domain represented a patch of the galactic disk with properties consistent with those in the solar neighbourhood. The radial (\(x\)) and azimuthal (\(y\)) extent of this box was roughly 1 kpc by 1 kpc, while it extended from \(\sim-2\) to 2 kpc on either side of the galactic midplane. Shearing periodic and periodic boundary conditions were used in the \(x\) and \(y\) directions, respectively, to facilitate the shear and approximate axisymmetry, while outflow conditions were used at the upper and lower boundaries to allow for gas outflows but restrict gas inflow. A flat rotation curve was implemented by having a radially dependent angular velocity \(\Omega\propto R^{-1}\), such that \(\Omega_{0}\) at the centre of the domain was 100 km s\({}^{-1}\) kpc\({}^{-1}\). Turbulence was driven by SN explosions, modeled as local injections of thermal energy (\(\sim 10^{51}\) erg per explosion) at a predefined rate of \(\sim 7.5\) kpc\({}^{-2}\) Myr\({}^{-1}\). Figure 1: In the top panel, the time evolution of the mean magnetic energy is shown, and the kinematic and dynamical phases of magnetic field evolution are separated by a vertical red dotted line. In the bottom panel, the time evolution of the vertical profile of the mean magnetic field (\(y\)-component) is shown. The colour code is normalised with a factor of exp(\(-t/0.2\) Gyr) in the kinematic phase and with exp(\(-t/1.0\) Gyr) in the dynamical phase to compensate for the exponential amplification. With this setup the mean magnetic field energy amplified exponentially with an e-folding time of \(\sim 100\) Myr for about a Gyr, until it reached
Turbulence was driven by the SN explosions modeled as the local injections of thermal energy (\(\sim 10^{51}\) erg per explosion) at a predefined rate of \(\sim 7.5\) kpc \({}^{-2}\) Myr \({}^{-1}\). With this setup the mean magnetic field energy exponentially amplified with an e-folding time of \(\sim 100\) Myr for a Gyr, until it reached Figure 1: In the top panel, time evolution of the mean magnetic energy is shown and kinematic and the dynamical phases of magnetic field evolution are separated by a vertical red dotted line. In the bottom the panel time evolution of the vertical profile of mean magnetic field (\(y\)-component) is shown. Colour-code is normalised with a factor of exp (\(-t/0.2\) Gyr) in the kinematic phase and with exp (\(-t/1.0\) Gyr) in the dynamical phase to compensate for the exponential amplification. equipartition with the turbulent kinetic energy. This growth drastically slowed down afterward. We termed the initial exponential amplification phase (up to \(\sim 1\,\mathrm{Gyr}\) ) as the kinematic phase and the latter as the dynamical phase of magnetic field evolution, wherein the mean magnetic field is dynamically significant to affect the turbulent motions. Vertical profiles of the components of mean magnetic field went through several sign reversals and parity changes throughout the kinematic phase of evolution and achieved a stable mode, symmetric about the galactic mid-plane, with a strength of \(\sim\mu\,\mathrm{G}\) in the mid-plane. In Fig. 1 the time evolution of mean magnetic energy is shown along with the evolution of the vertical profile of y component of mean magnetic field. In Bendre et al. (2015) we have demonstrated that the initial exponential amplification phase of the magnetic field can be described as a solution of an \(\alpha-\Omega\) dynamo, and that the profiles of the dynamo coefficients (as measured using the test-field method) quench during the dynamical phase. Our estimates of the same system using the SVD method (Bendre et al., 2020) also agreed largely with this assertion. Therefore to have a reasonable comparison we use the data from the same model to test this new method of computing the dynamo coefficients. ## 3 Methods for obtaining dynamo coefficients ### The IROS method We describe here the IROS algorithm used to extract the dynamo coefficients associated with aforementioned MHD simulations. In this section, we apply it to the local representation of the EMF as given in Eq. 1. Appendix A generalises it further and applies the algorithm to the non-local representation. For this analysis we first define the mean components by averaging them over the \(x-y\) plane, which leaves only \(z\) as an independent variable, \[\overline{\mathbf{F}}\left(z,t\right)=\frac{1}{L_{x}\,L_{y}}\iint\mathbf{F} \left(x,y,z,t\right)\,\mathrm{d}x\,\mathrm{d}y\,. \tag{2}\] Here \(\mathbf{F}\) represents any dynamical variable \(\mathbf{B}\), \(\mathbf{U}\), \(\mathbf{J}\) (\(\mathbf{J}=\nabla\times\mathbf{B}\)) or \(\overline{\mathbf{\mathcal{E}}}\), while \(L_{x}\) and \(L_{y}\) are the sizes of the numerical domain in radial and azimuthal directions, respectively. We then represent Eq. 1 as an over-determined system of equations, by taking advantage of the fact that the dynamo coefficients do not change appreciably throughout the kinematic phase, since the mean field is not strong enough to affect the turbulence. 
As such, at any particular \(z=z^{\prime}\) we can write \[\mathbf{y}\left(z^{\prime},t\right)=\mathbf{A}\left(z^{\prime},t\right)\mathbf{x}\left(z^{\prime}\right), \tag{3}\] where the matrices \(\mathbf{y}\) and \(\mathbf{A}\) are comprised of the time series (\(t_{1}\) to \(t_{N}\)) of both components of the EMF, mean magnetic fields and mean currents, and \(\mathbf{x}\) contains the dynamo coefficients. These matrices are \[\mathbf{y}\left(z^{\prime},t\right)=\begin{bmatrix}\overline{\mathcal{E}}_{x}\left(z^{\prime},t_{1}\right)&\overline{\mathcal{E}}_{y}\left(z^{\prime},t_{1}\right)\\ \overline{\mathcal{E}}_{x}\left(z^{\prime},t_{2}\right)&\overline{\mathcal{E}}_{y}\left(z^{\prime},t_{2}\right)\\ \vdots&\vdots\\ \overline{\mathcal{E}}_{x}\left(z^{\prime},t_{N}\right)&\overline{\mathcal{E}}_{y}\left(z^{\prime},t_{N}\right)\end{bmatrix}, \tag{4}\] \[\mathbf{A}^{\intercal}\left(z^{\prime},t\right)=\begin{bmatrix}\overline{B}_{x}\left(z^{\prime},t_{1}\right)&\overline{B}_{x}\left(z^{\prime},t_{2}\right)&\ldots&\overline{B}_{x}\left(z^{\prime},t_{N}\right)\\ \overline{B}_{y}\left(z^{\prime},t_{1}\right)&\overline{B}_{y}\left(z^{\prime},t_{2}\right)&\ldots&\overline{B}_{y}\left(z^{\prime},t_{N}\right)\\ -\overline{J}_{x}\left(z^{\prime},t_{1}\right)&-\overline{J}_{x}\left(z^{\prime},t_{2}\right)&\ldots&-\overline{J}_{x}\left(z^{\prime},t_{N}\right)\\ -\overline{J}_{y}\left(z^{\prime},t_{1}\right)&-\overline{J}_{y}\left(z^{\prime},t_{2}\right)&\ldots&-\overline{J}_{y}\left(z^{\prime},t_{N}\right)\end{bmatrix}, \tag{5}\] and \[\mathbf{x}\left(z^{\prime}\right)=\begin{bmatrix}\alpha_{xx}\left(z^{\prime}\right)&\alpha_{yx}\left(z^{\prime}\right)\\ \alpha_{xy}\left(z^{\prime}\right)&\alpha_{yy}\left(z^{\prime}\right)\\ \eta_{xx}\left(z^{\prime}\right)&\eta_{yx}\left(z^{\prime}\right)\\ \eta_{xy}\left(z^{\prime}\right)&\eta_{yy}\left(z^{\prime}\right)\end{bmatrix}. \tag{6}\] Unlike Eq. 1, this system can be solved for \(\mathbf{x}(z^{\prime})\) by least-square minimisation using the IROS method described below. #### 3.1.1 Step 1 The IROS scheme we discuss here relies upon incremental refinements to the estimates of the dynamo coefficients. For that we first set all dynamo coefficients \(\alpha_{ij}(z)\) and \(\eta_{ij}(z)\) to zero. Then at any particular \(z=z^{\prime}\), we fit the time series of the \(i^{\text{th}}\) component of the EMF obtained from the simulations, \(\overline{\mathcal{E}}_{i}(z^{\prime},t)\), with those of \(\overline{B}_{x}(z^{\prime},t)\), \(\overline{B}_{y}(z^{\prime},t)\), \(\overline{J}_{x}(z^{\prime},t)\), and \(\overline{J}_{y}(z^{\prime},t)\), separately (i.e. by keeping only one component of \(\mathbf{x}_{i}\) non-zero and setting the other components to zero) and obtain the zeroth level estimates \({}^{0}\alpha_{ix}(z^{\prime})\), \({}^{0}\alpha_{iy}(z^{\prime})\), \({}^{0}\eta_{ix}(z^{\prime})\), and \({}^{0}\eta_{iy}(z^{\prime})\) of the dynamo coefficients along with their respective chi-squared errors (\(\chi^{2}_{il}(z^{\prime})\) with \(l=0,1,2,3\)). Note the superscript '0' is used to indicate the zeroth level of refinement to the EMF.
The chi-squared errors are defined as \[\chi^{2}_{i0}(z^{\prime})=\sum_{t}\left(\overline{\mathcal{E}}_{i}(z^{\prime},t)-{}^{0}\alpha_{ix}(z^{\prime})\,\overline{B}_{x}(z^{\prime},t)\right)^{2},\] \[\chi^{2}_{i1}(z^{\prime})=\sum_{t}\left(\overline{\mathcal{E}}_{i}(z^{\prime},t)-{}^{0}\alpha_{iy}(z^{\prime})\,\overline{B}_{y}(z^{\prime},t)\right)^{2},\] \[\chi^{2}_{i2}(z^{\prime})=\sum_{t}\left(\overline{\mathcal{E}}_{i}(z^{\prime},t)+{}^{0}\eta_{ix}(z^{\prime})\,\overline{J}_{x}(z^{\prime},t)\right)^{2},\] \[\chi^{2}_{i3}(z^{\prime})=\sum_{t}\left(\overline{\mathcal{E}}_{i}(z^{\prime},t)+{}^{0}\eta_{iy}(z^{\prime})\,\overline{J}_{y}(z^{\prime},t)\right)^{2}. \tag{7}\] #### 3.1.2 Step 2 We then subtract from the EMF of the previous step, \(\overline{\mathcal{E}}_{i}(z^{\prime},t)\), the contribution corresponding to the best-fitted dynamo coefficient (the one with the smallest chi-squared error) multiplied by a suitable scale factor \(\epsilon<1\). The factor \(\epsilon\) is referred to as the "loop gain" in radio astronomy. For example, if the chi-squared error associated with \({}^{0}\alpha_{iy}(z^{\prime})\) in _Step 1_ (i.e. \(\chi^{2}_{i1}(z^{\prime})\)) happens to be the smallest amongst the four, we subtract the contribution \(\epsilon\,{}^{0}\alpha_{iy}(z^{\prime})\,\overline{B}_{y}(z^{\prime},t)\) from \(\overline{\mathcal{E}}_{i}(z^{\prime},t)\) (i.e. from the EMF of the previous step) and obtain the next level of refinement to the fit of the EMF, \({}^{1}\overline{\mathcal{E}}_{i}(z^{\prime},t)\). #### 3.1.3 Step 3 Only the best-fitted zeroth order estimate is retained, while the rest are set to zero. For example, if \(\chi^{2}_{i1}(z^{\prime})\) from _Step 2_ is the smallest, only \({}^{0}\alpha_{iy}(z^{\prime})\) is retained, while \({}^{0}\alpha_{ix}(z^{\prime})\), \({}^{0}\eta_{ix}(z^{\prime})\) and \({}^{0}\eta_{iy}(z^{\prime})\) are set to zero. All dynamo coefficients are then updated by adding to them their zeroth level estimates multiplied by the loop gain. #### 3.1.4 Step 4 _Step 1_ is then repeated with \({}^{1}\overline{\mathcal{E}}_{i}(z^{\prime},t)\) as the EMF to obtain \({}^{1}\alpha_{ix}(z^{\prime})\), \({}^{1}\alpha_{iy}(z^{\prime})\), \({}^{1}\eta_{ix}(z^{\prime})\), and \({}^{1}\eta_{iy}(z^{\prime})\), i.e. the first level estimates of the dynamo coefficients (along with their respective chi-squared errors). The estimates of the dynamo coefficients are further refined by adding to them the corresponding first level contribution, which had the least chi-squared error, multiplied by the loop gain \(\epsilon\).
Effectively, the final estimates for dynamo coefficients become; \[\alpha_{ij}(z^{\prime}) =\epsilon\,\sum_{r=0}^{R}\,{}^{r}\alpha_{ij}(z^{\prime}) \tag{8}\] \[\eta_{ij}(z^{\prime}) =\epsilon\,\sum_{r=0}^{R}\,{}^{r}\eta_{ij}(z^{\prime}) \tag{9}\] The aforementioned steps are then repeated at all \(z=z^{\prime}\), to construct the vertical profiles of all dynamo coefficients. A further clarification of the IROS algorithm is provided in C using a flowchart representation of aforementioned steps. We will describe the vertical profiles obtained using this algorithm in Section 4 below. Before that we also consider how prior constraints can be incorporated in the IROS reconstruction of dynamo coefficients. ### Imposing prior constraints on the dynamo coefficients The problem of inverting the closure for turbulent EMF can be equivalently treated in the framework of Bayesian inference by subjecting the likelihoods of the dynamo coefficients to the relevant priors. Such an approach is justified due to the helical nature of dynamo generated mean fields, which renders the components of mean fields and currents proportional to each other and thus introduces cross-correlations between dynamo coefficients (Bendre et al., 2015). In order to then be able to infer the \(\alpha_{ij}\) and \(\eta_{ij}\) separately it is useful to have reasonable priors for them. Indeed, such physically informed priors on dynamo coefficients can be self consistently incorporated in the framework of IROS by putting constraints on the profiles of the coefficients, without having to invoke the Bayesian approach. This is due to the fact that the criterion for judging the best fitted parameter to the EMF at each level of refinement (for Fig. 2 and Fig. A1 it is the least of chi-squared errors of the individual fits) can be chosen so as to impose the said constraint. For example in the ISM simulations it is conceivable for coefficients \(\alpha_{ij}\) to be anti-symmetric with respect to the galactic mid-plane, owing to the mirror asymmetry of vertically stratified helical turbulence. Similarly for \(\eta_{ij}\) a symmetric vertical profile can be reasonably expected due to statistical symmetry of turbulent kinetic energy on either sides of the mid-plane. Such symmetries are explicitly seen to obtain in closure calculations of the dynamo coefficients (Krause and Raedler, 1980; Radler et al., 2003; Brandenburg and Subramanian, 2005; Shukurov and Subramanian, 2021). Hence, a prior on dynamo coefficients can be formally incorporated in the aforementioned algorithm of IROS described in Section 3.1, and we apply it to the same data of ISM simulations as follows. We will refer to this method as'modified IROS' or'mIROS'. Figure 3: Red solid lines show the vertical profiles of dynamo coefficients computed using the modified mIROS method. Associated errors are shown with orange shaded regions. While the blue dashed lines show the dynamo coefficient profiles computed with the IROS method (same as the blue dashed lines in Fig. 2), along with their associated errors are shown with green shaded regions. Figure 2: Red solid lines show the vertical profiles of various dynamo coefficients obtained using the local SVD method, along with their associated errors shown in orange shades. In blue dashed lines we have shown the same but obtained using the local IROS method along with their corresponding errors shown in light green. Firstly, we set the zeroth level dynamo coefficients \(\alpha_{ij}(z)\) and \(\eta_{ij}(z)\) to zero as before. 
Then _Step 1_ (Section 3.1.1) of the \(k^{\text{th}}\) refinement of IROS is modified as follows in mIROS: the vertical profiles of the \(k^{\text{th}}\)-order estimates of the dynamo coefficients, \({}^{k}\alpha_{ij}(z)\) and \({}^{k}\eta_{ij}(z)\) (i.e. their full functional forms), are first determined by again keeping each of these _functions_ non-zero, turn by turn, and using Eq. 1 at each time \(t_{i}\). The functional forms of the coefficients \({}^{k}\alpha_{ij}(z)\) are expanded in terms of Legendre polynomials of odd degrees while the \({}^{k}\eta_{ij}\)'s are expanded using ones with even degrees, and the expansion coefficients are used to obtain the fits \[f_{k}^{\alpha_{ij}}\left(z\right)=\sum_{\ell=0}^{n}a_{2\ell+1}\ P_{2\ell+1}\left(z/L_{z}\right), \tag{10}\] and \[f_{k}^{\eta_{ij}}\left(z\right)=\sum_{\ell=0}^{n}a_{2\ell}\ P_{2\ell}\left(z/L_{z}\right). \tag{11}\] Here, \(a_{m}\) are the fitting parameters2, \(P_{m}\left(z/L_{z}\right)\) the Legendre polynomials of degree \(m\), and \(L_{z}\) the vertical extent of the simulation domain in the \(z\) direction (\(\sim 2\,\text{kpc}\)). Footnote 2: These parameters are determined by using the orthogonality property of Legendre polynomials. Thus \(f_{k}^{\alpha_{ij}}\left(z\right)\) is an odd function of \(z\), while \(f_{k}^{\eta_{ij}}\left(z\right)\) is even in \(z\), as required by our prior. The choice of \(n\), the number of polynomials to be included in the fit, is a free parameter. Then, to determine the best-fitted dynamo coefficient, the chi-squared errors associated with \(f_{k}^{\alpha_{ij}}\left(z\right)\) and \(f_{k}^{\eta_{ij}}\left(z\right)\), denoted as \(\chi_{il}^{2}\), are compared. The chi-squared errors here are defined as \[\chi_{i0}^{2} =\sum_{z}\sum_{t}\left(\overline{\mathcal{E}}_{i}(z,t)-f_{k}^{\alpha_{ix}}(z)\ \overline{B}_{x}(z,t)\right)^{2},\] \[\chi_{i1}^{2} =\sum_{z}\sum_{t}\left(\overline{\mathcal{E}}_{i}(z,t)-f_{k}^{\alpha_{iy}}(z)\ \overline{B}_{y}(z,t)\right)^{2},\] \[\chi_{i2}^{2} =\sum_{z}\sum_{t}\left(\overline{\mathcal{E}}_{i}(z,t)+f_{k}^{\eta_{ix}}(z)\ \overline{J}_{x}(z,t)\right)^{2},\] \[\chi_{i3}^{2} =\sum_{z}\sum_{t}\left(\overline{\mathcal{E}}_{i}(z,t)+f_{k}^{\eta_{iy}}(z)\ \overline{J}_{y}(z,t)\right)^{2}. \tag{12}\] Note here that these errors are not only summed over time but also over \(z\), and thus relate to the overall shape of the vertical profiles of the dynamo coefficients. The determination of the best-fit coefficient in mIROS is therefore not made separately at each \(z^{\prime}\), in contrast to the previous case given in Eq. 7. Further, in _Step 2_ (Section 3.1.2) the contribution to the EMF associated with the smallest chi-squared error, weighted with the loop-gain factor (that is, either \(\epsilon\ f_{k}^{\alpha_{ij}}\overline{B}_{j}\) or \(\epsilon\ f_{k}^{\eta_{ij}}\overline{J}_{j}\)), is subtracted from \(\overline{\mathcal{E}}_{i}\). _Step 3_, _Step 4_ and _Step 5_ are then performed the same way as described in Section 3.1.3, Section 3.1.4 and Section 3.1.5. Fitting at each \(z\) is now not necessary as the \(z\) dependence of the coefficients is already taken into account in the polynomial fit. The resulting vertical profiles of the dynamo coefficients are \[\alpha_{ij}(z) =\epsilon\ \sum_{r=0}^{R}\ f_{r}^{\alpha_{ij}}(z), \tag{13}\] \[\eta_{ij}(z) =\epsilon\ \sum_{r=0}^{R}\ f_{r}^{\eta_{ij}}(z). \tag{14}\] We discuss the results of this analysis and compare the performance of mIROS on the determination of dynamo coefficients in the following section.
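As a rough illustration of how the parity prior enters the fit, the sketch below gives one possible NumPy reading of Eqs. 10-12: it expands \(f(z)\) in odd or even Legendre polynomials and fits \(\overline{\mathcal{E}}_{i}(z,t)\approx f(z)\,\overline{B}(z,t)\) by linear least squares over all \(z\) and \(t\). The function name, argument layout and the use of `numpy.polynomial.legendre` are assumptions made for illustration, not a description of the authors' implementation.

```python
import numpy as np
from numpy.polynomial import legendre as L

def fit_parity_profile(emf, B, z, Lz, n=40, odd=True):
    """Fit E_i(z,t) ~ f(z) * B(z,t), with f expanded in odd (alpha-like) or
    even (eta-like) Legendre polynomials of z/Lz; returns f(z) and chi^2."""
    degrees = range(1, 2 * n + 2, 2) if odd else range(0, 2 * n + 1, 2)
    x = z / Lz
    # one basis column per retained Legendre degree, evaluated on the z grid
    basis = np.stack([L.legval(x, np.eye(m + 1)[m]) for m in degrees], axis=1)
    # design matrix: each basis column multiplied by the mean-field time series
    A = (basis[:, None, :] * B[:, :, None]).reshape(-1, basis.shape[1])
    a, *_ = np.linalg.lstsq(A, emf.reshape(-1), rcond=None)
    f = basis @ a                                   # parity-constrained profile
    chi2 = np.sum((emf.reshape(-1) - A @ a) ** 2)   # cf. Eq. 12
    return f, chi2
```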
Additionally we test these two methods against the mock data of a noisy EMF in B. ## 4 Results ### IROS inversion using the local EMF representation We apply the method discussed in Section 3.1 to the data of the MHD simulations of the ISM described in Section 2, and obtain the associated dynamo coefficients. We first extract the time series of the EMF, mean fields and mean currents from the simulations and split them into nine different sections by successively skipping eight points. For each of these realisations we obtain all dynamo coefficients using the aforementioned IROS method. In Fig. 2 with blue dashed lines we have shown the average of these nine outcomes, along with the associated errors shown in light green shades, which are obtained from the variance of these nine results. Here we have used a loop gain \(\epsilon\) of 0.3 and a hundred levels of refinement in total (\(R=100\)). Additionally, these simulations have already been analysed using the local SVD method. We over-plot the estimates of the dynamo coefficients obtained using that method with red solid lines, along with their respective errors, obtained in the same way, shown by an orange shade. It can be seen that both these methods give very similar results although employing very different algorithms in detail. We also perform the same exercise for different values of \(\epsilon\) and track the values of the refined dynamo coefficients as a function of refinements. It appears that as we refine these coefficients in IROS, they converge to their true values logarithmically. The rate of convergence is unsurprisingly faster for the larger values of the loop gain \(\epsilon\). We demonstrate this in Fig. 4, for the local version of the IROS method, where we plot the magnitudes of various dynamo coefficients at \(\sim 800\,\text{pc}\), as functions of refinements, with different colours indicating the various loop gains, as shown in the lower-right hand panel. Figure 4: The obtained values of the dynamo coefficients as a function of the number of refinements. Shown in different colours are different loop gains \(\epsilon\) (10% to 90%); see the colour code in the lower right panel. Dynamo coefficients are calculated at a specific height (\(\sim\)700 pc ), using the local version of IROS. ### mIROS inversions using prior constraints In a similar way we use the modified IROS method (Section 3.2) to invert the local \(\overline{\mathbf{\mathcal{E}}}\) representation. We use the same ISM simulation data discussed in Section 2 and, using \(n=40\) even and odd Legendre polynomials to fit the dynamo coefficients during each mIROS iteration, obtain their vertical profiles, which are shown in Fig. 3. With red solid lines we show the profiles obtained with the modified IROS method, along with their errors shown in the orange shaded region, while with the blue dashed lines we show the same for the unmodified IROS method described in Section 3.1. We see that these two methods of inversion give qualitatively similar results, although the dynamo coefficients determined using mIROS after imposing priors are smoother and have smaller errors. We note here, however, that the priors used for \(\alpha_{ij}\) and \(\eta_{ij}\) are specific to the ISM simulations, where the parities of the coefficients can be inferred from the underlying physics, and any other physically informed constraint specific to a particular system or more general priors can be similarly incorporated.
## 5 Conclusions Estimation of the dynamo coefficients associated with MHD simulations is crucial in understanding the large-scale dynamo effect as well as in quantifying the connection of MHD turbulence to the evolution of large-scale magnetic fields. This is usually accomplished using either the test-field method, which requires multiple additional induction equations to be solved along with the DNS, or using post-processing methods based on regression, which tend to be relatively faster. The SVD method, for example, directly solves Eq. 1 in post-processing, by assuming the constancy of dynamo coefficients in time and inverting Eq. 3 instead. Problems associated with directly inverting Eq. 1 usually stem from having a statistical correlation between \(\overline{\mathbf{B}}\) and \(\overline{\mathbf{J}}\) (for example in \(\alpha^{2}\)-dynamos, \(\overline{\mathbf{J}}\sim\nabla\times\overline{\mathbf{B}}\sim k\,\overline{\mathbf{B}}\), which further introduces correlations in \(\alpha_{ij}\) and \(\eta_{ij}\)), or having one (or more) of the components of \(\overline{\mathbf{B}}\) or \(\overline{\mathbf{J}}\) contribute overwhelmingly to the EMF. The latter issue in particular leads to the underestimation of the dynamo coefficients associated with the components of the mean field or current that contribute negligibly to the EMF or are comparatively noisier. In this paper we present a new post-processing tool, IROS, which mitigates this latter problem, apart from having a number of other advantages. In IROS the components of the EMF are successively fitted against the individual mean-field and current components, and the estimates of the dynamo coefficients obtained in the fits are refined iteratively by removing the contribution of the best-fitted parameters from the EMF, thus circumventing the aforementioned issue. Following a similar algorithm we further extended this method to also invert the non-local representation of the EMF to determine the components of the non-local kernel \(\mathcal{K}_{ij}\), as described in A. Significantly, we also show that it is possible in IROS to impose reasonable prior constraints on the dynamo coefficients. This modified IROS method ("mIROS") makes an appropriate modification to the criterion of the best fit at each iteration of the IROS refinement. As an example, we impose priors on the vertical profiles of the dynamo coefficients, such that \(\alpha_{ij}(z)\) and \(\eta_{ij}(z)\) are antisymmetric and symmetric, respectively, with respect to the mid-plane (\(z=0\)). This was done simply by expanding the dynamo coefficient profiles in terms of either odd or even Legendre polynomials and modifying the definitions of the chi-squared errors to measure the goodness of the EMF fit with respect to these Legendre expansions, as described in Section 3.2. We note here that a different set of prior constraints specific to the system can be incorporated in the same way with appropriate definitions of the chi-squared errors. Moreover, a probabilistic framework can also be adopted to impose the said priors self-consistently. To have a reasonable validation of the IROS method we applied it to the data of the ISM simulations, which has been analysed before using the test-field method as well as the local and non-local variants of the SVD method. The vertical profiles of the dynamo coefficients recovered from the IROS and the SVD methods are found to be largely consistent with each other (as shown in Fig. 2 and Fig. A1).
We also applied this method to synthetic data with predefined vertical profiles of the dynamo coefficients, chosen such that only a few of the coefficients contributed substantially to the EMF, and with varying levels of additive noise. We demonstrated that even with a noise level as high as 200% of the EMF, using IROS we were still able to recover the dynamo coefficients with reasonable accuracy, and considerably better than with the SVD method. With this analysis we have shown that the IROS method could serve as a viable method to determine the dynamo coefficients. As a post-processing tool, it firstly has the advantage of being extremely computationally efficient compared to the test-field method, while also being more robust in handling additive noise than the SVD. A conceivable disadvantage in the standard IROS (and also in the SVD), however, is that the determined dynamo coefficients get correlated when the components of the mean field and current are themselves correlated. This is avoided in the test-field method since additional linearly independent test magnetic fields are also evolved along with the MHD simulations. In this respect, it is useful to have the possibility, as in mIROS, of imposing prior constraints on the dynamo coefficients to break the degeneracy between \(\alpha_{ij}\) and \(\eta_{ij}\). It would also be useful to extend the mIROS to invert the integral or kernel representation of the EMF and to impose prior constraints on the coefficients of the non-local kernel. ## Acknowledgements We thank Dipankar Bhattacharya for valuable insights and for a detailed tutorial on IROS. We also thank Neeraj Gupta, Maarit J. Korpi-Lagg, and Matthias Rheinhardt for their valuable insights and explanations. J.S. and A.B. acknowledge the support by the Swiss National Science Foundation under Grant No. 185863. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2310.00213
LSOR: Longitudinally-Consistent Self-Organized Representation Learning
Interpretability is a key issue when applying deep learning models to longitudinal brain MRIs. One way to address this issue is by visualizing the high-dimensional latent spaces generated by deep learning via self-organizing maps (SOM). SOM separates the latent space into clusters and then maps the cluster centers to a discrete (typically 2D) grid preserving the high-dimensional relationship between clusters. However, learning SOM in a high-dimensional latent space tends to be unstable, especially in a self-supervision setting. Furthermore, the learned SOM grid does not necessarily capture clinically interesting information, such as brain age. To resolve these issues, we propose the first self-supervised SOM approach that derives a high-dimensional, interpretable representation stratified by brain age solely based on longitudinal brain MRIs (i.e., without demographic or cognitive information). Called Longitudinally-consistent Self-Organized Representation learning (LSOR), the method is stable during training as it relies on soft clustering (vs. the hard cluster assignments used by existing SOM). Furthermore, our approach generates a latent space stratified according to brain age by aligning trajectories inferred from longitudinal MRIs to the reference vector associated with the corresponding SOM cluster. When applied to longitudinal MRIs of the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=632), LSOR generates an interpretable latent space and achieves comparable or higher accuracy than the state-of-the-art representations with respect to the downstream tasks of classification (static vs. progressive mild cognitive impairment) and regression (determining ADAS-Cog score of all subjects). The code is available at https://github.com/ouyangjiahong/longitudinal-som-single-modality.
Jiahong Ouyang, Qingyu Zhao, Ehsan Adeli, Wei Peng, Greg Zaharchuk, Kilian M. Pohl
2023-09-30T01:31:24Z
http://arxiv.org/abs/2310.00213v1
# LSOR: Longitudinally-Consistent Self-Organized Representation Learning ###### Abstract Interpretability is a key issue when applying deep learning models to longitudinal brain MRIs. One way to address this issue is by visualizing the high-dimensional latent spaces generated by deep learning via self-organizing maps (SOM). SOM separates the latent space into clusters and then maps the cluster centers to a discrete (typically 2D) grid preserving the high-dimensional relationship between clusters. However, learning SOM in a high-dimensional latent space tends to be unstable, especially in a self-supervision setting. Furthermore, the learned SOM grid does not necessarily capture clinically interesting information, such as brain age. To resolve these issues, we propose the first self-supervised SOM approach that derives a high-dimensional, interpretable representation stratified by brain age solely based on longitudinal brain MRIs (i.e., without demographic or cognitive information). Called Longitudinally-consistent Self-Organized Representation learning (LSOR), the method is stable during training as it relies on soft clustering (vs. the hard cluster assignments used by existing SOM). Furthermore, our approach generates a latent space stratified according to brain age by aligning trajectories inferred from longitudinal MRIs to the reference vector associated with the corresponding SOM cluster. When applied to longitudinal MRIs of the Alzheimer's Disease Neuroimaging Initiative (ADNI, \(N=632\)), LSOR generates an interpretable latent space and achieves comparable or higher accuracy than the state-of-the-art representations with respect to the downstream tasks of classification (static vs. progressive mild cognitive impairment) and regression (determining ADAS-Cog score of all subjects). The code is available at [https://github.com/ouyangjiahong/longitudinal-som-single-modality](https://github.com/ouyangjiahong/longitudinal-som-single-modality). ## 1 Introduction The interpretability of deep learning models is especially a concern for applications related to human health, such as analyzing longitudinal brain MRIs. To avoid relying on post-hoc analysis for interpretation [6, 14], some methods strive for an interpretable latent representation [9]. One example is self-organizing maps (SOM) [5], which cluster the latent space so that the SOM representations (i.e., the 'representatives' of the clusters) can be arranged in a discrete (typically 2D) grid while preserving high-dimensional relationships between clusters. Embedded in unsupervised deep learning models, SOMs have been used to generate interpretable representations of low-resolution natural images [3, 8]. Intriguing as it sounds, we found their application to (longitudinal) 3D brain MRIs to be unstable during training and to result in uninformative SOMs. These models get stuck in local minima so that only a few SOM representations are updated during backpropagation. The issue has been less severe in prior applications [3, 8] as their corresponding latent space is of much lower dimension than the one needed for the task at hand, which requires a high-dimensional latent space to accurately encode the fine-grained anatomical details in brain MRIs [17, 12]. To ensure all SOM representations can be updated during backpropagation, we propose a soft weighting scheme that not only updates the closest SOM representation for a given MRI but also updates all other SOM representations based on their distance to the closest SOM representation [3, 8].
Moreover, our model relies on a stop-gradient operator [16], which sets the gradient of the latent representation to zero so that it only focuses on updating the SOM representations. It is especially crucial at the beginning of the training when the (randomly initialized) SOM representations are not good representatives of their clusters. Finally, the latent representations of the MRIs are updated via a commitment loss, which encourages the latent representation of an MRI sample to be close to its nearest SOM representation. In practice, these three components ensure stability during the self-supervised training of the SOM on high-dimensional latent spaces. To generate SOMs informative to neuroscientists, we extend SOMs to the longitudinal setting such that the latent space and corresponding SOM grid encode brain aging. Inspired by [12], we encode pairs of MRIs from the same longitudinal sequence (i.e., same subject) as a trajectory and encourage the latent space to be a smooth trajectory (vector) field. We enforce smoothness by computing for each SOM cluster a reference trajectory, which represents the average aging of that cluster with respect to the training set. The reference trajectories are updated by the exponential moving average (EMA) such that, in each iteration, it aggregates the average trajectory of a cluster with respect to the corresponding training batch (i.e., batch-wise average trajectory). In doing so, the model ensures longitudinal consistency as the (subject-specific) trajectories of a cluster are maximally aligned with the reference trajectory of that cluster. Named Longitudinally-consistent Self-Organized Representation learning (LSOR), we evaluate our method on a longitudinal T1-weighted MRI dataset of 632 subjects from ADNI to encode the brain aging of Normal Controls (NC) and patients diagnosed with static Mild Cognitive Impairment (sMCI), progressive Mild Cognitive Impairment (pMCI), and Alzheimer's Disease (AD). LSOR clusters the latent representations of all MRIs into 32 SOM representations. The resulting 4-by-8 SOM grid is organized by both chronological age and cognitive measures that are indicators of brain age. Note, such an organization solely relies on longitudinal MRIs, i.e., without using any tabular data such as age, cognitive measure, or diagnosis. To visualize aging effects on the grid, we compute (post-hoc) a 2D similarity grid for each MRI that stores the similarity scores between the latent representation of that MRI and all SOM representations. As the SOM grid is an encoding of brain aging, the similarity grid indicates the likelihood of placing the MRI within the "spectrum" of aging. Given all MRIs of a longitudinal scan, the change across the corresponding similarity grids over time represents the brain aging process of that individual. Furthermore, we infer brain aging on a group-level by first computing the average similarity grid for an age group and then visualizing the difference of those average similarity grids across age groups. With respect to the downstream tasks of classification (sMCI vs. pMCI) and regression (i.e., estimating the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) on all subjects), our latent representations of the MRIs is associated with comparable or higher accuracy scores than representations learned by other state-of-the-art self-supervised methods. ## 2 Method As shown in Fig. 1, the longitudinal 3D MRIs of a subject are encoded as a series of trajectories (blue vectors) in the latent space. 
Following [12, 17], we consider a pair of longitudinal MRIs (that corresponds to a blue vector) as a training sample. Specifically, let \(\mathcal{S}\) denote the set of image pairs of the training cohort, where the MRIs \(x^{u}\) and \(x^{v}\) of a longitudinal pair \((x^{u},x^{v})\) are from the same subject and \(x^{v}\) was acquired \(\Delta t\) years after \(x^{u}\). For simplicity, \(\times\) refers to \(u\) or \(v\) when a function is separately applied to both time points. The MRIs are then mapped to the latent space by an encoder \(F\), i.e., \(z^{\times}:=F(x^{\times})\). On the latent space, the trajectory of the pair is denoted as \(\Delta z:=(z^{v}-z^{u})/\Delta t\), which represents morphological changes. Finally, decoder \(H\) reconstructs the input MRI \(x^{\times}\) from the latent representation \(z^{\times}\), i.e., \(\tilde{x}^{\times}:=H(z^{\times})\). Figure 1: Overview of the latent space derived from LSOR. All trajectories (\(\Delta z\)) form a trajectory field (blue box) modeling brain aging. SOM representations in \(\mathcal{G}\) (orange star) are organized as a 2D grid (orange grid). As shown in the black box, reference trajectories \(\Delta\mathcal{G}\) (collection of all \(\Delta g\), green arrow) are iteratively updated by EMA using the aggregated trajectory \(\Delta h\) (purple arrow) across all trajectories of the corresponding SOM cluster within a training batch. Next, we describe LSOR, which generates interpretable SOM representations, and the post-hoc analysis for deriving similarity grids. ### LSOR Following [3, 8], SOM representations are organized in a \(N_{r}\) by \(N_{c}\) grid (denoted as SOM grid) \(\mathcal{G}=\{g_{i,j}\}_{i=1,j=1}^{N_{r},N_{c}}\), where \(g_{i,j}\) denotes the SOM representation on the \(i\)-th row and \(j\)-th column. This easy-to-visualize grid preserves the high-dimensional relationships between the clusters, as shown by the orange lines in Fig. 1. Given the latent representation \(z^{\times}\), its closest SOM representation is denoted as \(g_{\epsilon^{\times}}\), where \(\epsilon^{\times}:=argmin_{(i,j)}\parallel z^{\times}-g_{i,j}\parallel_{2}\) is its 2D grid index in \(\mathcal{G}\) and \(\parallel\cdot\parallel_{2}\) is the Euclidean norm. This SOM representation is also used to reconstruct the input MRI by the decoder, i.e., \(\tilde{x}_{g}^{\times}=H(g_{\epsilon^{\times}})\). To do so, the reconstruction loss encourages both the latent representation \(z^{\times}\) and its closest SOM representation \(g_{\epsilon^{\times}}\) to be descriptive of the input MRI \(x^{\times}\), i.e., \[L_{recon}:=\mathbb{E}_{(x^{u},x^{v})\sim\mathcal{S}}\left(\sum_{\times\in\{u,v\}}\parallel x^{\times}-\tilde{x}^{\times}\parallel_{2}^{2}+\parallel x^{\times}-\tilde{x}_{g}^{\times}\parallel_{2}^{2}\right), \tag{1}\] where \(\mathbb{E}\) denotes the expected value. The remainder describes the three novel components of our SOM representation. **Explicitly regularizing closeness.** Though \(L_{recon}\) implicitly encourages close proximity between \(z^{\times}\) and \(g_{\epsilon^{\times}}\), it does not inherently optimize \(g_{\epsilon^{\times}}\) as \(z^{\times}\) is not differentiable with respect to \(g_{\epsilon^{\times}}\).
Therefore, we introduce an additional 'commitment' loss explicitly promoting closeness between them: \[L_{commit}:=\mathbb{E}_{(x^{u},x^{v})\sim\mathcal{S}}\left(\parallel z^{u}-g_{\epsilon^{u}}\parallel_{2}^{2}+\parallel z^{v}-g_{\epsilon^{v}}\parallel_{2}^{2}\right).\] **Soft Weighting Scheme.** In addition to updating \(z^{\times}\)'s closest SOM representation \(g_{\epsilon^{\times}}\), we also update all SOM representations \(g_{i,j}\) by introducing a soft weighting scheme as proposed in [10]. Specifically, we design a weight \(w_{i,j}^{\times}\) to regularize how much \(g_{i,j}\) should be updated with respect to \(z^{\times}\) based on its proximity to the grid location \(\epsilon^{\times}\) of \(g_{\epsilon^{\times}}\), i.e., \[w_{i,j}^{\times}:=\delta\left(e^{-\frac{\parallel\epsilon^{\times}-(i,j)\parallel_{2}^{2}}{2\tau}}\right), \tag{2}\] where \(\delta(w):=\frac{w}{\sum_{i,j}w_{i,j}}\) ensures that the scale of the weights is constant during training and \(\tau>0\) is a scaling hyperparameter. Now, we design the following loss \(L_{som}\) so that SOM representations close to \(\epsilon^{\times}\) on the grid are also close to \(z^{\times}\) in the latent space (measured by the Euclidean distance \(\parallel z^{\times}-g_{i,j}\parallel_{2}\)): \[L_{som}:=\mathbb{E}_{(x^{u},x^{v})\sim\mathcal{S}}\left(\sum_{g_{i,j}\sim\mathcal{G}}\left(w_{i,j}^{u}\cdot\parallel z^{u}-g_{i,j}\parallel_{2}^{2}+w_{i,j}^{v}\cdot\parallel z^{v}-g_{i,j}\parallel_{2}^{2}\right)\right). \tag{3}\] To improve robustness, we make two more changes to Eq. 3. First, we account for SOM representations transitioning from random initialization to becoming meaningful cluster centers that preserve the high-dimensional relationships within the 2D SOM grid. We do so by decreasing \(\tau\) in Eq. 2 with each iteration so that the weights gradually concentrate on SOM representations closer to \(g_{\epsilon^{\times}}\) as training proceeds: \(\tau(t):=N_{r}\cdot N_{c}\cdot\tau_{max}\left(\frac{\tau_{min}}{\tau_{max}}\right)^{t/T}\) with \(\tau_{min}\) being the minimum and \(\tau_{max}\) the maximum standard deviation in the Gaussian kernel, and \(t\) represents the current and \(T\) the maximum iteration. The second change to Eq. 3 is to apply the stop-gradient operator \(sg[\cdot]\)[16] to \(z^{\times}\), which sets the gradients of \(z^{\times}\) to \(0\) during the backward pass. The stop-gradient operator prevents the undesirable scenario where \(z^{\times}\) is pulled towards a naive solution, i.e., different MRI samples are mapped to the same weighted average of all image representations. This risk of deriving the naive solution is especially high in the early stages of the training when the SOM representations are randomly initialized and may not accurately represent the clusters. **Longitudinal Consistency Regularization.** We derive a SOM grid related to brain aging by generating an age-stratified latent space. Specifically, the latent space is defined by a smooth trajectory field (Fig. 1, blue box) characterizing the morphological changes associated with brain aging. The smoothness is based on the assumption that MRIs with similar appearances (close latent representations on the latent space) should have similar trajectories. It is enforced by modeling the similarity between each subject-specific trajectory \(\Delta z\) and a reference trajectory that represents the average trajectory of the cluster. Specifically, \(\Delta g_{i,j}\) is the reference trajectory (Fig.
1, green arrow) associated with \(g_{i,j}\), and the reference trajectories of all clusters \(\mathcal{G}_{\Delta}=\left\{\Delta g_{i,j}\right\}_{i=1,j=1}^{N_{r},N_{c}}\) then represent the average aging of the SOM clusters with respect to the training set. As all subject-specific trajectories are iteratively updated during the training, it is computationally infeasible to keep track of \(\mathcal{G}_{\Delta}\) on the whole training set. We instead propose to compute the exponential moving average (EMA) (Fig. 1, black box), which iteratively aggregates the average trajectory with respect to a training batch into \(\mathcal{G}_{\Delta}\): \[\Delta g_{i,j} \leftarrow\begin{cases}\Delta h_{i,j}&t=0\\ \Delta g_{i,j}&t>0\text{ and }|\Omega_{i,j}|=0\\ \alpha\cdot\Delta g_{i,j}+(1-\alpha)\cdot\Delta h_{i,j}&t>0\text{ and }|\Omega_{i,j}|>0 \end{cases}\] \[\text{with }\Delta h_{i,j} :=\frac{1}{|\Omega_{i,j}|}\sum_{k=1}^{N_{bs}}\mathbb{1}[\epsilon_{k}^{u}=(i,j)]\cdot\Delta z_{k}\text{ and }|\Omega_{i,j}|:=\sum_{k=1}^{N_{bs}}\mathbb{1}[\epsilon_{k}^{u}=(i,j)].\] \(\alpha\) is the EMA keep rate, \(k\) denotes the index of the sample pair, \(N_{bs}\) symbolizes the batch size, \(\mathbb{1}[\cdot]\) is the indicator function, and \(|\Omega_{i,j}|\) denotes the number of sample pairs with \(\epsilon^{u}=(i,j)\) within a batch. Then in each iteration, \(\Delta h_{i,j}\) (Fig. 1, purple arrow) represents the batch-wise average of the subject-specific trajectories for sample pairs with \(\epsilon^{u}=(i,j)\). By iteratively updating \(\mathcal{G}_{\Delta}\) in this way, \(\mathcal{G}_{\Delta}\) approximates the average trajectories derived from the entire training set. Lastly, inspired by [12, 11], the longitudinal consistency regularization is formulated as \[L_{dir}:=\mathbb{E}_{(x^{u},x^{v})\sim\mathcal{S}}\left(1-cos(\theta[\Delta z, sg[\Delta g_{\epsilon^{u}}]])\right),\] where \(\theta[\cdot,\cdot]\) denotes the angle between two vectors. Since \(\Delta g\) is optimized by EMA, the stop-gradient operator is again incorporated to only compute the gradient with respect to \(\Delta z\) in \(L_{dir}\). **Objective function.** The complete objective function is the weighted combination of the above losses with weighting parameters \(\lambda_{commit}\), \(\lambda_{som}\), and \(\lambda_{dir}\): \[L:=L_{recon}+\lambda_{commit}\cdot L_{commit}+\lambda_{som}\cdot L_{som}+\lambda_{dir}\cdot L_{dir}\] The objective function encourages a smooth trajectory field of aging on the latent space while maintaining interpretable SOM representations for analyzing brain age in a purely self-supervised fashion. ### SOM Similarity Grid During inference, a (2D) similarity grid \(\rho\) is computed from the closeness between the latent representation \(z\) of an MRI sample and the SOM representations: \[\rho:=softmax(-\parallel z-\mathcal{G}\parallel_{2}^{2}/\gamma)\text{ with }\gamma:=std(\parallel z-\mathcal{G}\parallel_{2}^{2})\] where \(std\) denotes the standard deviation of the distances between \(z\) and all SOM representations. As the SOM grid is learned to be associated with brain age (e.g., represents aging from left to right), the similarity grid essentially encodes a "likelihood function" of the brain age in \(z\). Given all MRIs of a longitudinal scan, the change across the corresponding similarity grids over time represents the brain aging process of that individual.
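For readers who prefer pseudocode, the following Python sketch spells out the soft weighting scheme of Eq. 2 (read as a Gaussian in grid-index distance), the annealed kernel width \(\tau(t)\), and the post-hoc similarity grid \(\rho\) defined above. The function names are ours and the snippet is only a minimal NumPy illustration of these formulas, not the released implementation.

```python
import numpy as np

def soft_weights(eps_idx, grid_shape, tau):
    """Soft weighting scheme of Eq. 2: a normalised Gaussian on the 2D SOM grid
    centred at the winner index eps_idx = (i*, j*)."""
    ii, jj = np.meshgrid(np.arange(grid_shape[0]), np.arange(grid_shape[1]),
                         indexing="ij")
    d2 = (ii - eps_idx[0]) ** 2 + (jj - eps_idx[1]) ** 2
    w = np.exp(-d2 / (2.0 * tau))
    return w / w.sum()                      # delta(.) normalisation

def tau_schedule(t, T, grid_shape, tau_min=0.1, tau_max=1.0):
    """Annealed kernel width tau(t) = Nr*Nc*tau_max*(tau_min/tau_max)**(t/T)."""
    return grid_shape[0] * grid_shape[1] * tau_max * (tau_min / tau_max) ** (t / T)

def similarity_grid(z, som_reprs):
    """Post-hoc similarity grid rho: softmax of negative squared distances between
    a latent code z and all SOM representations, scaled by their std (gamma)."""
    d2 = np.sum((som_reprs - z) ** 2, axis=-1)   # som_reprs: (Nr, Nc, dim)
    gamma = max(d2.std(), 1e-12)
    e = np.exp(-(d2 - d2.min()) / gamma)         # shift by min for numerical stability
    return e / e.sum()
```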
Furthermore, brain aging on the group-level is captured by first computing the average similarity grid for an age group and then visualizing the difference of those average similarity grids across age groups. ## 3 Experiments ### Experimental Setting **Dataset.** We evaluated the proposed method on all 632 longitudinal T1-weighted MRIs (at least two visits per subject, 2389 MRIs in total) from ADNI-1 [13]. The data set consists of 185 NC (age: 75.57 \(\pm\) 5.06 years), 193 subjects diagnosed with sMCI (age: 75.63 \(\pm\) 6.62 years), 135 subjects diagnosed with pMCI (age: 75.91 \(\pm\) 5.35 years), and 119 subjects with AD (age: 75.17 \(\pm\) 7.57 years). There was no significant age difference between the NC and AD cohorts (p=0.55, two-sample \(t\)-test) as well as the sMCI and pMCI cohorts (p=0.75). All MRI images were preprocessed by a pipeline including denoising, bias field correction, skull stripping, affine registration to a template, re-scaling to 64 \(\times\) 64 \(\times\) 64 volume, and transforming image intensities to z-scores. **Implementation Details.** Let C\({}_{k}\) denote a Convolution(kernel size of \(3\times 3\times 3\), Conv\({}_{k}\))-BatchNorm-LeakyReLU(slope of 0.2)-MaxPool(kernel size of 2) block with \(k\) filters, and CD\({}_{k}\) an Convolution-BatchNorm-LeakyReLU-Upsample block. The architecture was designed as C\({}_{16}\)-C\({}_{32}\)-C\({}_{64}\)-C\({}_{16}\)-Conv\({}_{16}\)-CD\({}_{64}\)-CD\({}_{32}\)-CD\({}_{16}\)-CD\({}_{16}\)-Conv\({}_{1}\), which results in a latent space of 1024 dimensions. The training of SOM is difficult in this high-dimensional space with random initialization in practice, thus we first pre-trained the model with only \(L_{recon}\) for 10 epochs and initialized the SOM representations by doing k-means of all training samples using this pre-trained model. Then, the network was further trained for 40 epochs with regularization weights set to \(\lambda_{recon}=1.0\), \(\lambda_{commit}=0.5\), \(\lambda_{som}=1.0\), \(\lambda_{dir}=0.2\). Adam optimizer with learning rate of \(5\times 10^{-4}\) and weight decay of \(10^{-5}\) were used. \(\tau_{min}\) and \(\tau_{max}\) in \(L_{som}\) were set as 0.1 and 1.0 respectively. An EMA keep rate of \(\alpha=0.99\) was used to update reference trajectories. A batch size \(N_{bs}=64\) and the SOM grid size \(N_{r}=4,N_{c}=8\) were applied. **Evaluation.** We performed five-fold cross-validation (folds split based on subjects) using 10% of the training subjects for validation. The training data was augmented by flipping brain hemispheres and random rotation and translation. To quantify the interpretability of the SOM grid, we correlated the coordinates of the SOM grid with quantitative measures related to brain age, e.g., chronological age, the percentage of subjects with severe cognitive decline, and Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog). We illustrated the interpretability with respect to brain aging by visualizing the changes in the SOM similarity maps over time. We further visualized the trajectory vector field along with SOM representations by projecting the 1024-dimensional representations to the first two principal components of SOM representations. Lastly, we quantitatively evaluated the quality of the representations by applying them to the downstream tasks of classifying sMCI vs. pMCI and ADAS-Cog prediction. 
We measured the classification accuracy via Balanced accuracy (BACC) and Area Under Curve (AUC) and the prediction accuracy via Coefficient of Determination (R2) and root-mean-square error (RMSE). The classifier and predictor were multi-layer perceptrons containing two fully connected layers of dimensions 1024 and 64 with a LeakyReLU activation. Figure 2: The color at each SOM representation encodes the average value of (a) chronological age, (b) % of AD and pMCI, and (c) ADAS-Cog score across the training samples of that cluster; (d) Confined to the last row of the grid, the average MRI of 20 latent representations closest to the corresponding SOM representation. We compared the accuracy metrics to models using the same architecture with encoders pre-trained by other representation learning methods, including unsupervised methods (AE, VAE [4]), a self-supervised method (SimCLR [1]), a longitudinal self-supervised method (LSSL [17]), and longitudinal neighborhood embedding (LNE [12]). All compared methods used the same experimental setup (e.g., encoder-decoder, learning rate, batch size, epochs, etc.), and the method-specific hyperparameters followed [12]. ### Results **Interpretability of SOM representations.** Fig. 2 shows the stratification of brain age over the SOM grid \(\mathcal{G}\). For each grid entry, we show the average value of chronological age (Fig. 2(a)), % of AD & pMCI (Fig. 2(b)), and ADAS-Cog score (Fig. 2(c)) over samples of that cluster. We observed a trend of older brain age (yellow) from the upper left towards the lower right, corresponding to older chronological age and worse cognitive status. The SOM grid index strongly correlated with these three factors (distance correlation of 0.92, 0.94, and 0.91 respectively). Fig. 2(d) shows the average brain over 20 input images with representations that are closest to each SOM representation of the last row of the grid (see Supplement Fig. S1 for all rows). From left to right the ventricles are enlarging and the brain is atrophying, which is a hallmark of brain aging. **Interpretability of similarity grid.** Visualizing the average similarity grid \(\rho\) of the NC and AD cohorts at each age range in Fig. 3, we observed that higher similarity (yellow) gradually shifts towards the right with age in both NC and AD (see Supplemental Fig. S2 for sMCI and pMCI cohorts). However, the shift is faster for AD, which aligns with AD literature reporting that AD is linked to accelerated brain aging [15]. Furthermore, the subject-level aging effects shown in Supplemental Fig. S3 reveal that the proposed visualization could capture subtle morphological changes caused by brain aging. **Interpretability of trajectory vector field.** Fig. 4 plots the PCA projections of the latent space in 2D, which show a smooth trajectory field (gray arrows) and reference trajectories \(\mathcal{G}_{\Delta}\) (blue arrows) representing brain aging. This projection also preserved the 2D grid structure (orange) of the SOM representations, suggesting that aging was the most important variation in the latent space. Figure 3: The average similarity grid \(\rho\) over subjects of a specific age and diagnosis (NC vs AD). Each grid encodes the likelihood of the average brain age of the corresponding sub-cohort. Cog denotes the average ADAS-Cog score. **Downstream Tasks.** To evaluate the quality of the learned representations, we froze encoders trained by each method without fine-tuning and utilized their representations for the downstream tasks (Table 1).
On the task of sMCI vs. pMCI classification (Table 1 (left)), the proposed method achieved a BACC of 69.8 and an AUC of 72.4, comparable accuracy (\(p>0.05\), DeLong's test) to LSSL [17] and LNE [12], two state-of-the-art self-supervised methods on this task. On the ADAS-Cog score regression task, the proposed method obtained the best accuracy with an R2 of 0.32 and an RMSE of 6.31. It is worth mentioning that an accurate prediction of the ADAS-Cog score is very challenging due to its large range (between 0 and 70) and its subjectiveness, which results in large variability across exams [2], so that even larger RMSEs have been reported for this task [7]. Furthermore, our representations were learned in an unsupervised manner, so that further fine-tuning of the encoder would improve the prediction accuracy. \begin{table} \begin{tabular}{c|c|c||c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c||}{sMCI/pMCI} & \multicolumn{2}{c}{ADAS-Cog} \\ \cline{2-5} & BACC & AUC & R2 & RMSE \\ \hline AE & 62.6 & 65.4 & 0.26 & 6.98 \\ VAE [4] & 61.3 & 64.8 & 0.23 & 7.17 \\ SimCLR [1] & 63.3 & 66.3 & 0.26 & 6.79 \\ LSSL [17] & 69.4 & 71.8 & 0.29 & 6.49 \\ LNE [12] & **70.6** & 72.1 & 0.30 & 6.46 \\ \hline LSOR & 69.8 & **72.4** & **0.32** & **6.31** \\ \hline \end{tabular} \end{table} Table 1: Supervised downstream tasks using the learned representations \(z\) (without fine-tuning the encoder). LSOR achieved comparable or higher accuracy scores than other state-of-the-art self- and un-supervised methods. Figure 4: 2D PCA of the LSOR’s latent space. Light gray arrows represent \(\Delta z\). The orange grid represents the relationships between SOM representations and associated reference trajectory \(\Delta\mathcal{G}\) (blue arrow). ## 4 Conclusion In this work, we proposed LSOR, the first SOM-based learning framework for longitudinal MRIs that is self-supervised and interpretable. By incorporating a soft SOM regularization, the training of the SOM was stable in the high-dimensional latent space of MRIs. By regularizing the latent space based on longitudinal consistency as defined by longitudinal MRIs, the latent space formed a smooth trajectory field capturing brain aging, as shown by the resulting SOM grid. The interpretability of the representations was confirmed by the correlation between the SOM grid and cognitive measures, and by the SOM similarity map. When evaluated on the downstream tasks of sMCI vs. pMCI classification and ADAS-Cog prediction, LSOR was comparable to or better than representations learned from other state-of-the-art self- and un-supervised methods. In conclusion, LSOR is able to generate a latent space with high interpretability regarding brain age purely based on MRIs, and valuable representations for downstream tasks. ## Acknowledgement This work was partly supported by funding from the National Institute of Health (MH113406, DA057567, AA017347, AA010723, AA005965, and AA028840), the DGIST R&D program of the Ministry of Science and ICT of KOREA (22-KUJoint-02), Stanford's Department of Psychiatry & Behavioral Sciences Faculty Development & Leadership Award, and by Stanford HAI Google Cloud Credit.
2309.14305
Future Challenges For Event Generators
In this talk I present a personal perspective on what the current and future challenges are for Monte Carlo event generators. I focus in particular on those aspects of Monte Carlo event generators that have not, historically, received the same scrutiny and level of advancements which will be mandatory in future, cleaner and more precise experimental set-ups than current day LHC.
Davide Napoletano
2023-09-25T17:23:33Z
http://arxiv.org/abs/2309.14305v1
# Future Challenges For Event Generators ###### Abstract In this talk I present a personal perspective on what the current and future challenges are for Monte Carlo event generators. I focus in particular on those aspects of Monte Carlo event generators that have not, historically, received the same scrutiny and level of advancements which will be mandatory in future, cleaner and more precise experimental set-ups than current day LHC. CC-BY-4.0 licence ## 1 Introduction The main goal of a Monte Carlo event generator (MCEG) [1, 2, 3] is to describe, as accurately as possible, physical events occurring in various experimental set-ups1. Even in the simplest of cases, this may look like an almost impossible task, as an event consists of many interleaved effects that are difficult to tackle all at once. Nevertheless, we have had great success in comparing theoretical predictions obtained through MCEGs to experimental data. The main drive for this success is the fact that while it is true that a series of effects all take place in a physical event (such as ISR, MPI, hadronisation, hadron decay) other than the hard scattering, these various effects happen at different energies regimes (times), and can thus be considered as independent from one another. The price one pays to make this approximation can be estimated by scaling arguments, and one expects that the correlation effects amongst these various aspects is suppressed by some power of their characteristic energy scales with respect to that of the hard scattering. Footnote 1: Please note that here and in the following MCEGs refers to general purpose generators. This means the references to providers of the underlying hard scattering process and matching are not listed here. The main strength of this approach is that it allows for a complete separation of ingredients, where each of the separate building blocks can be improved independently from one another, as long as they are then properly matched. It is thus only natural that the main community effort has been devoted to improving the accuracy of the hard scattering, which represents, from the Monte Carlo perspective, the initial conditions that needs to be dressed with all other ingredients. Among these ingredients there is the parton shower, which incidentally is an aspect of Monte Carlo event generation that has received a relatively large attention over the years. In the following, I quickly review the main focus of theoretical developments relative to Monte Carlo event generation, which in a sense represents the succeeded challenges the community has faced. Then I describe what in my opinion are aspects that need to be developed to a similar standard in order to succeed for the challenges ahead. Lastly I conclude with some remarks on challenges the Monte Carlo community specifically (and to some extent the broader high energy physics community) needs to address in order not to repeat some of the mistakes that, in my opinion, we have made. The main take home of this talk is that while it is true that we can rely on factorisation, none of the single pieces of a Monte Carlo work on their own when the aim is precision physics. ## 2 Hard Scattering The description of the hard scattering process represents the core of MCEGs. This is often referred to as fixed order, and, in general, momentum and color information coming from the hard matrix element is then used to feed the parton shower and hadronisation. 
As they represent the starting point of the perturbative expansion - in the sense that any other aspect of MCEG can be seen as attaching higher orders in either the coupling constants or in powers of \(\Lambda_{\rm QCD}/Q\) to it - it is only natural that this is the aspect that has received the most attention over the years, and has seen a lot of success. This success can essentially be split up into two ingredients: the calculation of higher order matrix elements and the development of subtractions. The former is relatively straightforward in its idea, while technically complicated, and requires the calculation of multi-loop diagrams. Indeed for one-loop calculations we now know that there exists a finite set of "base" scalar integrals, and the only complication of performing a one-loop calculation lies in finding the "coefficients" multiplying such base integrals. Extensions of this to two and higher loops are not available yet in full generality; nevertheless we now have two-loop calculations for \(2\to 2,3\) scattering and three-loop calculations for \(2\to 1\) processes. Subtraction is, on the other hand, only a technical complication due to the fact that the cancellation of infrared singularities (for both QCD and QED) does not happen trivially, but only after the integration over the emission phase-space, and computer programs cannot deal with this in an effective manner. There is the additional complication that ideally one wants subtraction terms that are both effective in subtracting the infrared singularities of the matrix elements, and, at the same time, easy to integrate analytically, such as to avoid numerical integration of a logarithmically enhanced (divergent) phase-space. These complications have, however, been overcome in a variety of approaches at both NLO and NNLO. Equipped with both higher order calculations and subtraction, we can try and compare to some experimental data. Take, for example, \(Z+j\) production in Drell-Yan, as depicted in Fig. 1. As can be seen, although the fixed order description captures well the behaviour at large transverse momentum of the \(Z\)-boson, at low \(p_{\perp}\) fixed order alone is not enough, and one would need an infinite amount of higher orders to accurately describe data. This is precisely what the parton shower does in this case, and indeed one can see that after matching we recover a good description of data across the entire spectrum. ## 3 Parton Shower, Matching, Merging Parton showering can be seen in a variety of different ways. Here I present it in the most pragmatic approach possible: it evolves - by _radiating_ - particles produced in the hard scattering to lower energies, energies closer to the hadronisation scale. Figure 1: The importance of including some sort of evolution to low scales, in this case achieved by the parton shower, even when higher orders are included [4]. Note that I am being purposefully vague here as describing this process in detail requires much more time and space than a proceedings contribution allows. The core idea behind parton showers dates back to the '80s [5], and has to a large extent remained unchanged since. It consists of dressing the hard matrix element with radiation, ordered in a suitably defined variable such as to reflect time ordering. The implementation details of this vary to a large extent, as one can envision angular ordering, energy ordering, invariant mass ordering, transverse momentum (in various definitions) ordering and so on.
On top of this, how exactly the kinematics of the splittings is implemented and at what scale the coupling attached to a given splitting is evaluated all constitute the "implementation details" of a given shower algorithm. For about 20 or so years, it was believed that no matter what one did with most of these choices, as long as you had an algorithm capable of describing coherence and you evaluated \(\alpha_{s}\) in the so-called CMW scheme, with leading order splitting functions, you would get at least a leading-logarithmic accurate description of all observables, which could even be next-to-leading logarithmic accurate for a special class of them. This belief was dismantled when it was shown that, due exactly to those unimportant implementation details, dipole showers break NLL accuracy and LL accuracy beyond leading colour. This has started a new series of more accurate shower algorithms, and the aim is to be able to achieve NNLL for most observables. Nevertheless, based on the discussion around Fig. 1 one would expect that the shower description would, by itself, give a fairly good description of data. However, when comparing for example to LEP data, as done in Fig. 2, one can see that to actually match data across the entire spectrum, one needs to include hadronisation and subsequent fragmentation effects, on top of shower emissions. Note that this goes beyond the accuracy of the shower, as the physics behind these two regimes is intrinsically different - one is perturbative and the other non-perturbative - thus the same argument can be equally applied to more accurate showers. Figure 2: The importance of including non-perturbative corrections, even when shower and fixed-order corrections are included. However there is a non-trivial subtlety in Fig. 2 and the following discussion. Hadronisation models, and more in general most models involving the non-perturbative side of event generation, undergo tuning, _i.e._ they match data because they were fitted so. ## 4 Non-Perturbative models The core idea behind hadronisation models [6, 7] and more broadly non-perturbative models used in MCEGs, is that they are phenomenological models based on a - typically large - number of parameters that control various aspects of non-perturbative physics, such as the mean and distribution of charged particle multiplicities, the yield of individual hadron species, the fragmentation of heavy quarks to heavy hadrons and so on. Different models, such as "cluster" or "strings", eventually differ in how partons, coming from the parton shower, are combined together to form hadrons and their subsequent decay products, but they share the main logic explained above. The hope, and the common belief - although no proof or disproof exists as of yet - is that, similarly to parton distribution functions (PDF) which are extracted from DIS data and used for any process requiring them, one can tune these parameters in a clean environment such as LEP, and then, provided they are universal to some extent, re-use them for other set-ups. However, one can see how this might fail. Indeed take the PDF example: they are essentially one-dimensional functions of a given parton's momentum fraction, \(x\), and that is the only non-perturbative part of these functions. Their scale dependence is completely determined perturbatively - a.k.a. DGLAP equations.
This means that the dependence on the point (factorisation scale) where one decides arbitrarily to divide the non-perturbative and perturbative regions is completely known and cancels between hard partonic matrix elements and PDF evolution at any fixed order. In contrast, non-perturbative models' parameters have no scale dependence - apart from that induced by tuning them at a given scale - and their evolution is not known theoretically; as such, it is easy to imagine that extracting them at a given scale would not give the same result as extracting them at another energy scale, or in another experimental set-up. Indeed, novel tunings based on H1 data and clean DIS observables are starting to show evidence of incompatibility with LEP data tunes. An additional issue is that, although higher order calculations are available, tunes that include such perturbative effects - so as to avoid, as much as possible, tuning away higher order effects - are still scarce. All in all, given that - as seen for example in Fig. 2 - non-perturbative effects can give extremely large corrections in some regions of phase-space, we need to be able to at least understand these models better to better assess uncertainties on theoretical calculations. ## 5 Miscellaneous and Conclusions As MCEGs cover all aspects of a collider event, and the majority of people doing research on these topics focus on either the hard matrix elements or the shower and their interplay, it is easy to imagine that the list of understudied topics can become quite long. Here I list a few of them, keeping in mind that this does not have the aim of being an exhaustive and complete list by any means. **Heavy Quark Mass effects.** One important aspect of Monte Carlo simulations that needs to be theoretically under control is the inclusion of heavy quark mass effects, specifically in parton showers. Historically the approach has been that of replacing massless with massive ingredients and hoping for the best, with the idea that mass effects are in any case beyond the current accuracy claimed by parton showers. However, there are two important aspects to note here. First, while this may be true in general, as we move towards more exclusive observables there is a non-trivial scale interplay between the mass of the heavy quark and the energy - in the broader sense - of a given observable, which may lead to large effects which have to be controlled. Second, as the accuracy of parton showers increases, mass effects have to be included to claim the full higher order accuracy. In addition to these two aspects, there is the subtle issue of how to estimate an uncertainty of a Monte Carlo simulation. If we trust that including mass effects or not is only a matter of factorisation scheme dependence - up to purely kinematical effects of course - then the difference between including them or not, or how these are included, should be treated as a pure theoretical uncertainty, which would lead to much larger uncertainties than the ones currently reported. **Electroweak Corrections.** Electroweak corrections have played, in recent years, an important role. Fully automated subtractions at NLO have been implemented in most MCEGs and one-loop matrix elements from loop providers are readily available. In addition, a variety of approximations of these corrections were also implemented and are now fully automated, including their matching to QCD higher orders and to the parton shower, including merging.
There are however two important aspects, in my opinion, that have not seen the same level of attention, and that I think need to be considered amongst the set of future challenges. The first is the availability of EW parton showers - theory papers on the topic and implementations exist, but it is not entirely clear in what shape these are. The second, which is tied to the first in some sense, is the need to revisit our current concept of what an EW final state really is. This is a potentially long point to elaborate in full detail, but in short, when talking about EW corrections we typically only refer to virtual corrections, as the argument is that we can, experimentally, distinguish between a \(Z\) boson or a photon or a \(W\) boson and a final state with or without an additional massive vector boson. This is certainly true at the LHC, but to what extent this remains true at higher energies - where virtual electroweak corrections would play a relatively bigger role - remains to be seen. The point is that the inclusion of real radiation at high energies scales almost exactly like the virtuals with opposite sign, leaving only a mass-suppressed mis-cancellation. At the same time, even if it remains true that only virtual EW corrections contribute, at higher energies we need to develop frameworks capable of handling the resummation of Sudakov logarithms. **Conclusions.** Monte Carlo event generators are a fundamental tool to compare theoretical predictions to experimental data, as well as a theoretical tool for phenomenological studies. Most of the successful challenges of the past have been dedicated to developing technologies to be able to include higher order calculations and to match those with parton showers. More recently, parton showers have received a lot of attention, which is thus leading towards more accurate parton showers. Technologies to match higher-accuracy parton showers with higher order calculations will then have to be developed, and in some sense this will likely be a crucial not-so-distant-future challenge. On top of this, a variety of non-perturbative and/or power-suppressed aspects of Monte Carlo event generators need to be put on more solid theoretical grounds. This is important both to provide more accurate results, and because developing these aspects allows us to estimate theoretical uncertainties in a fairer way. In order for the future of Monte Carlo Event Generators to remain as bright as it has been, on top of physics challenges, I personally think we have to tackle more pragmatic ones as well. In general, codes developed for wider use, such as MCEGs, are huge and highly complicated pieces of code which not only take a long time to write and maintain, but also to learn and master to a point where developments are possible. In addition, their broad use requires that developers invest a lot of their work time in maintaining the code and supporting users. Our current way of evaluating scientific success, based on pure metrics, does not help in this sense. New ideas take a lot of time to be thought of, and even more time to be transformed into practical algorithms, and as such are disfavored. Furthermore, neither maintenance nor user support leads to publications, which are the only way to get positions and grants.
To successfully tackle these future challenges, which will require even more new ideas on the one hand, and more advanced coding, maintenance and support on the other, we need to re-assess as a community how to measure the scientific value of people's work in a fairer way.
2309.04116
Aggregation of financial markets
We present a formal framework for the aggregation of financial markets mediated by arbitrage. Our main tool is to characterize markets via utility functions and to employ a one-to-one correspondence to limit order book states. Inspired by the theory of thermodynamics, we argue that the arbitrage-mediated aggregation mechanism gives rise to a market-dynamical entropy, which quantifies the loss of liquidity caused by aggregation. As a concrete guiding example, we illustrate our general approach with the Uniswap v2 automated market maker protocol used in decentralized cryptocurrency exchanges, which we characterize as a so-called ideal market. We derive its equivalent limit order book representation and explicitly compute the arbitrage-mediated aggregation of two liquidity pools of the same asset pair with different marginal prices. We also discuss future directions of research in this emerging theory of market dynamics.
Georg Menz, Moritz Voß
2023-09-08T04:29:51Z
http://arxiv.org/abs/2309.04116v2
# Aggregation of financial markets ###### Abstract. We present a formal framework for the aggregation of financial markets mediated by arbitrage. Our main tool is to characterize markets via utility functions and to employ a one-to-one correspondence to limit order book states. Inspired by the theory of thermodynamics, we argue that the arbitrage-mediated aggregation mechanism gives rise to a _market-dynamical entropy_, which quantifies the loss of liquidity caused by aggregation. We also discuss future directions of research in this emerging theory of market dynamics. 2020 Mathematics Subject Classification: 91G15, 91B50. _JEL Classification._ C02, D53. ###### Contents * 1 Introduction * 2 Axiomatic description of markets * 3 Temperature, iso-utils and equilibrium prices * 4 Limit order book markets * 4.1 Supply and demand measures and limit order books * 4.2 Adiabatic and iso-util clearing of a limit order book * 4.3 Equivalence of iso-utils and limit order books * 5 Aggregate markets * 6 The emerging theory of market dynamics, open problems and conclusion Acknowledgment ## 1. Introduction The theme of this manuscript is to study and formulate mathematical tools that are useful to describe stationary financial markets in equilibrium. There are various types of financial markets, e.g., limit order book exchanges, dark pools, alternative trading systems, decentralized exchanges, and automated market makers. Therefore, we strive for a fairly general definition of a market. When defining a market, there are two main perspectives: The first one is simplistic and reduces a market to a device or mechanism that allows one to exchange one good for another. Hence, in order to describe a market one would only have to characterize how much of one good could be exchanged on the market for another good. The second perspective understands a market as a place where traders meet and interact. Describing a market from this perspective is a lot more subtle, as one needs to describe and understand how traders interact, influence each other and come to an agreement. To meet both views, we use utility functions (see Section 2). The superlevel set of the utility function describes the set of all possible trades with that market, fully characterizing the market as an exchange mechanism. The behavior of rational traders can also be described via utility functions. A trader is only willing to trade if her utility increases (or at least does not decrease). The utility function of the market can then be interpreted as the aggregation of the utility functions of the individual traders present in the market. The main goal of this article is to describe a natural aggregation mechanism in financial markets. The described mechanism is quite universal. It applies whenever arbitrageurs are present in the market, assuming they are willing and able to close arising arbitrage opportunities. As real financial markets are highly fragmented (see, e.g., [1]), this is an important tool to describe the global market. Specifically, the mechanism allows aggregation of conventional markets like centralized limit order book exchanges with non-conventional markets like automated market makers. If done correctly, the aggregate market will reflect the preferences and opinions of all market participants, representing the true state of the global financial market. 
Moreover, the National Best Bid and Offer (NBBO) price execution requirement in equity markets practically forces brokers to act on the aggregated market on behalf of their clients (see Regulation National Market System [1]). Smart routing in order to minimize price impact also effectively trades on an aggregated market. Deriving the utility function of the aggregated system from the utility functions of the individual agents is a common theme in many areas of research, like in economics or social welfare theory. When aggregating financial markets one faces several challenges. The first challenge is that the theory must be general enough to bridge from microeconomics to macroeconomics. For example, for an individual trader money might be used as an absolute measure of value, which is not possible anymore on a macroeconomic level. Another difficulty is that the process of aggregation of utility functions is highly nontrivial. The aggregation process models how different agents interact, influence each other, and come to an agreement. As there are many different ways to come to an agreement, it is not surprising that there is a multitude of aggregation mechanisms. For instance, in social welfare theory there are utilitarian, Nash bargaining, Bergson-Samuelson, and many more aggregation mechanisms. Ultimately, there is no ideal canonical mechanism; this follows from Arrow's impossibility theorem [1]. The theorem states that it is impossible to construct a social welfare function that satisfies a small number of reasonable conditions like, e.g., unanimity, non-dictatorship, and independence of irrelevant alternatives. Our situation differs substantially from Arrow's setting. On the one hand, we face a more subtle aggregation problem as traders can choose between a continuum of choices and not between a finite number of alternatives. On the other hand, this complexity is compensated by the regularity of utility functions, i.e., continuity or monotonicity, which makes optimal aggregation easier to achieve. In the end, we pursue a different objective. Instead of trying to find and characterize the best theoretical aggregation mechanism, we just strive to identify and characterize a natural mechanism that reflects how financial markets are aggregated in real life, especially in limit order book markets. Inspired by smart routing, the presented aggregation mechanism is simple and straightforward (see Section 5). Aggregation takes place on the level of limit order books and relies on the following observations: * Utility functions are characterized up to a monotone transformation by their indifference curves. Therefore, it suffices to describe the present indifference curve of the aggregated market (see Section 3). * Every participant of a market, i.e., trader, is described via a utility function that encodes the preferences for different portfolios and characterizes the possible trades with that trader (see Section 2). More precisely, the indifference curves of the trader's utility function characterize all possible trades with her. We call indifference curves _iso-utils_ because the utility of the trader's portfolio is constant on these curves. * Iso-utils are in one-to-one correspondence to limit order book states (see Section 4). * On the level of limit order books aggregation is straightforward: the limit orders of the individual traders are combined into one joint limit order book (a minimal computational sketch of this combination step is given below). 
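To make the last observation concrete, the following minimal sketch (illustrative only and not part of the original text; the representation of a book as lists of `(price, quantity)` limit orders and all names are assumptions) combines the limit orders of several traders into one joint book and compares best bid and best ask prices.

```python
from collections import defaultdict

def combine_books(*books):
    """Combine the limit orders of several traders/venues into one joint
    (possibly unsettled) limit order book.

    Each book is a dict with keys 'bid' and 'ask', whose values are lists of
    (price, quantity) limit orders for the asset Y quoted in units of X.
    Orders at the same price are merged by adding their quantities.
    """
    joint = {"bid": defaultdict(float), "ask": defaultdict(float)}
    for book in books:
        for side in ("bid", "ask"):
            for price, qty in book[side]:
                joint[side][price] += qty
    # Sort bids from best (highest) to worst, asks from best (lowest) to worst.
    return {
        "bid": sorted(joint["bid"].items(), key=lambda pq: -pq[0]),
        "ask": sorted(joint["ask"].items(), key=lambda pq: pq[0]),
    }

# Two traders posting limit orders for the same asset pair (X, Y).
book_1 = {"bid": [(100, 12), (94, 10)], "ask": [(110, 12), (140, 20)]}
book_2 = {"bid": [(135, 20), (110, 19)], "ask": [(105, 14), (100, 12)]}

joint = combine_books(book_1, book_2)
best_bid, best_ask = joint["bid"][0][0], joint["ask"][0][0]
# best_bid (135) exceeds best_ask (100): the joint book is unsettled, leaving
# an arbitrage opportunity to be closed by the settlement step discussed next.
print(joint, best_bid, best_ask)
```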
After aggregating the limit order book one faces the following challenge: As traders have different opinions, the aggregated limit order book might be unsettled. This means that the buy and sell side of the joint book might overlap, and therefore cannot describe a convex iso-util as it is needed in order to retrieve a characterization of the market's utility function. The way out is to settle the limit order book which effectively results in a natural convexification procedure. This process is mediated by arbitrageurs as the overlap opens an arbitrage opportunity. They will counter-trade the overlapping limit buy and sell orders, settle the limit order book and make a risk-free profit. This settlement mechanism can also be observed in real financial markets via cross-exchange price arbitrage. To the settled limit order book it is then possible to associate a convex iso-util which describes the aggregated utility function of the market. In the aggregation mechanism described above there is the freedom to decide how traders will react to the counter-trading of the arbitrageur. For this, we distinguish between two types of traders: * The _adiabatic_ trader only trades if her utility is strictly increased. This type of trader is not willing to undo any trade. * The _iso-util_ trader is always willing to trade as soon as their utility is not decreased. This trader is willing to undo any trade which keeps her underlying utility invariant. This leads to two canonical settlement mechanisms: * In adiabatic settlement, only adiabatic traders are present. Hence, the overlapping limit orders vanish out of the limit order book after the clearing process. * In iso-util settlement, only iso-util traders are present. As a consequence, overlapping limit orders reappear on the other side of the limit order book after the clearing process. Aggregation in real markets is often comprised of a mixture of both settlement mechanisms, reflecting that both adiabatic and iso-util traders might be present in the market. Even if adiabatic aggregation seems to be more natural, iso-util aggregation plays an important role in financial markets. For example, in the Uniswap protocol (see, e.g., [1]), the iso-utils of the individual liquidity providers are in fact aggregated in an iso-util fashion to determine how the protocol provides liquidity for liquidity takers. As individual traders are never in the exact same equilibrium, we have the following central observation: _When markets aggregate some liquidity is lost to arbitrage._ Inspired by thermodynamics we call this observation the _fundamental law of market dynamics_. The lost liquidity is called _market-dynamical entropy_. This manuscript is envisaged to be the first of a sequence of articles with the goal to formulate a new theory of _market dynamics_ inspired by the theory of thermodynamics. For more details about the inspiration from thermodynamics and the program to develop the theory of market dynamics we refer to Section 5 and 6. The aggregation mechanism described in this article is not restricted to financial markets. It applies whenever an arbitrageur is present and able to close arising arbitrage opportunities. In Section 5 we describe how this type of market aggregation can be applied to consumer and producer markets, like the car market. In this setting, the car dealership is playing the role of the arbitrageur. Market dynamic entropy denotes how many cars are sold to customers and how much money got transferred to the producers and the car dealership. 
That opens a new role of market-dynamical entropy: It allows to measure economic activity. Therefore, arbitrage-mediated aggregation could in principle be used to describe how microscopic agents aggregate into one global aggregated market; and how economic activity results from this aggregation process. When applying this aggregation mechanism to real financial markets one faces the challenge of _hidden information_ and _hidden liquidity_ (see, e.g., Section 1.4 in [1]). As we mentioned before, financial markets are highly fragmented: in decentralized finance by design and in classical finance by evolution; cf. [15]. Specifically, the equities market in the US is comprised of several exchanges, broker-dealers, alternative trading systems and dark pools; and most of them are not directly observable. In this article, we make the assumption that market participants are transparent, in the sense that they communicate their iso-utils via limit orders to the public (see Definition 4.21 below). Under this assumption, the arbitrage-mediated aggregation is Pareto optimal. This means that after aggregation, no trader can improve her utility without decreasing another trader's utility. Transparency is a very strong assumption and obviously not satisfied in real financial markets. In reality, traders are non-transparent as there is no need to reveal their iso-util. There is even a game-theoretic incentive not to act transparent. If a trader wants to increase her utility more than it would be possible by transparent aggregation she needs to hide her information. As an analogy, if a player wants to win in a highly competitive card game the player also does not show her hand to her opponents. As a consequence, the financial market will never be transparent. Even if a trader has the intention to sell at a certain price level she might not make her intention visible to other market participants by posting a corresponding limit order. Real world examples are iceberg orders that are not visible and fill-or-kill or immediate-or-cancel orders which try to avoid revealing any intentions; and of course trading in dark pools. The challenge is even bigger: The utility function of an individual exists but might not even be known to the individual1. Many retail traders rely on their intuition, sentiments, etc. The individual utility function is also heavily influenced by the utility function of others, as they look at the overall market in order to derive the value of their assets. To account for this, one approach is to split up the utility function in a visible and a hidden part. Aggregating over the utility functions would then yield two markets: Footnote 1: This might explain the immense value of data gathering via smartphones, social media, web browsing, etc. Actions reveal preferences which might even be unknown to the individual herself. Through those actions, hidden utility functions can be learned and then monetized. * An _aggregated hidden market_ which is comprised of the aggregation of the complete utility functions. This market is not observable and unsettled. * An _aggregated visible market_ which is comprised of the aggregation of the visible part of the utility functions. This market is observable and settles via arbitrage. As a consequence, the hidden aggregated market contains _hidden arbitrage opportunities_. Estimating the hidden aggregated market is a subtle but rewarding problem as it might uncover those opportunities. 
This might explain the long-term success of certain players on the financial market: They are just better card players. Sentiment analysis on social media, which was used very successfully to generate trading signals, can be interpreted as an attempt to estimate the hidden part of the utility function of a typical retail trader. This perspective also sheds a new light on price discovery as the hidden utility function becomes visible and settles around the newly discovered price. Hidden arbitrage also challenges the Efficient Market Hypothesis, which argues that all available information is already incorporated into the asset price. Therefore, fluctuations would only result from new information entering the market. After aggregation, only visible but no hidden information is incorporated in the present price and the true price coming from all information is hidden to the market participants. The overlap of the hidden limit order book yields an additional source of price fluctuations as it means that traders do not agree on the correct price. Only if _all_ market participants are able to learn the hidden information faster than it changes, then this source of fluctuation vanishes. Otherwise, one will observe a market with large persistent price fluctuations that will be taken advantage of by arbitrageurs. In this article we model the financial market as a static, transparent, pure exchange economy. This is obviously too simplistic, as even in the static framework it does not include production and consumption processes as for example issuance and buy-backs of stocks. Surprisingly, the model still allows for the complexity of real world examples. Protocols of automated market makers are the most prominent example as they provide liquidity according to an aggregated iso-util. The assumption of static utility functions also applies to high-frequency situations. The framework of utility functions has enough complexity to grasp the subtle and nonlinear relationships between different assets. We only make very weak assumptions on the regularity of utility functions to make the framework as general as possible. We just assume that a utility function is continuous but not necessarily differentiable everywhere. Differentiable utility functions would make the mathematics a lot easier. However, they would rule out main examples like limit order book exchanges. Moreover, important aspects would get lost as one would not be able to distinguish between best bid and best ask prices anymore. Without smoothness we have to use generalized inverses and sub-gradients. To keep the presentation as simple as possible we will concentrate on markets where only a pair of assets is traded. Extending the aggregation mechanism to multiple assets is straightforward if one assumes the existence of a numeraire, i.e., an asset in which prices are quoted. Considering multiple assets is crucial as for example the joint market of the assets dollar, copper, tin, and gold can describe the interrelations between those assets a lot better than three separate markets of the asset pairs dollar/copper, dollar/tin, and dollar/gold. We see two main directions to further develop the theory. The first direction is to develop a game theory of financial markets. The goal would be to precisely describe the financial market as a mathematical game of traders. Then, one could ask what is the optimal strategy? Are there gains revealing information? Can this be used to influence other traders? 
Can this be used to an advantage by misleading competitors? Order spoofing might be one of those strategies. The other direction would be to develop a theory of market dynamics and we refer to Section 6 for more details. ## 2. Axiomatic description of markets In this section we review the mathematical framework to describe markets. Our approach is straightforward. We describe the financial market as a pure exchange economy of agents trying to improve the utility of their endowed portfolio through trading. Let us look at one individual participant of a market subsequently called trader \(l\). Trader \(l\) owns a portfolio of assets \((X_{1},\ldots,X_{n})\) given by a vector \(x=(x_{1},\ldots,x_{n})\in[0,\infty)^{n}\). Obviously, trader \(l\) prefers certain portfolios over others. Those individual preferences can be encoded with the help of a utility function \(U_{l}:[0,\infty)^{n}\to\mathbb{R}\) which assigns to any portfolio \(x\) a utility \(U_{l}(x)\). For a precise definition of utility functions we refer to Definition 2.2 below. As a rational trader, \(l\) will only trade if it increases (or at least not decreases) its present utility. More precisely, trader \(l\) is willing to trade the portfolio \(x\) for the portfolio \(\tilde{x}\) if \(U_{l}(x)\leq U_{l}(\tilde{x})\). As a consequence the utility function \(U_{l}\) does not only encode the preferences of the trader \(l\) but also encodes the set of possible trades \(\Delta x=\tilde{x}-x\) with trader \(l\) via the superlevel set \[\left\{\tilde{x}\in[0,\infty)^{n}\ |\ U_{l}(\tilde{x})\geq U_{l}(x)\right\}.\] Let us now turn to markets. What is a market? Oversimplifying, a market is just a device or mechanism that allows one to exchange one good for another. From this perspective, even a single trader \(l\) can be regarded as a market. All possible trades can be derived from her utility function \(U_{l}\), and the trader's present portfolio \(x\) describes the maximal quantities one can purchase from this atomic market. In common language, a market denotes a place where traders meet and trade goods. Therefore, let us now assume that a couple of individual traders enumerated by \(1\) to \(k\) meet at a market place. It is a central question of economics how the traders, each equipped with individual preferences, come to an overall agreement. Mathematically, this amounts to the question of how the individual utility functions \((U_{1},\ldots,U_{k})\) are aggregated into one joint utility function \(U=U_{1}\bigtriangleup\cdots\bigtriangleup U_{k}\) of the market. In Section 5 below we propose a simple arbitrage-mediated aggregation mechanism. In this section, let us assume that the traders reached agreement. Then the market comprised of the traders \(1,\ldots,k\) can again be described by a utility function \(U\) and a portfolio \(x\). The portfolio \(x\) describes the maximal quantities that are supplied on the market and will be called _supply level_ further on. The set of possible trades with this market is again characterized via the superlevel set of the utility function \[\left\{\tilde{x}\in[0,\infty)^{n}\ |\ U(\tilde{x})\geq U(x)\right\}.\] Let us iterate the meaning of utility \(U(x)\) one more time: It represents the value and preference the market puts on the portfolio \(x\) of assets. **Assumption 2.1**.: _In this article we only consider transparent stationary markets. 
This means that the utility functions of traders and markets do not change over time and are observable._ Let us now turn to the precise definition of a market. **Definition 2.2** (Markets, supply levels and utility functions).: _A market \(\mathcal{M}\) of the assets \((X_{1},\ldots,X_{n})\) is defined via supply levels \((x_{1},\ldots,x_{n})\in[0,\infty)^{n}\) and a utility function \(U:[0,\infty)^{n}\to\mathbb{R}\cup\{-\infty\}\). The utility function assigns to every supply level \((x_{1},\ldots,x_{n})\) a utility \(U(x_{1},\ldots,x_{n})\). It has to satisfy the following conditions:_ 1. **(Continuity)** _The function_ \(U\) _is continuous._ 2. **(Quasi concavity)** _The function_ \(U\) _is quasi-concave, i.e., for every_ \(T>0\) _the sets_ \[\left\{(x_{1},\ldots,x_{n})\in[0,\infty)^{n}\ |\ U(x_{1},\ldots,x_{n})\geq\log T\right\}\] _are convex._ 3. _(_**Strict monotonicity)** _For any asset_ \(i\in\{1,\ldots,n\}\) _and supply levels_ \((x_{1},\ldots,x_{n})\) _the function_ \(x_{i}\mapsto U(x_{1},\ldots,x_{i},\ldots,x_{n})\) _is strict monotone increasing._ 4. **(Unbounded from above)** _For any asset_ \(i\in\{1,\ldots,n\}\) _it holds_ \(\lim_{x_{i}\to\infty}U(x_{1},\ldots,x_{n})=+\infty\) _._ The utility function determines many aspects of a market: * The set of possible trades is described via superlevel sets of the utility function; * The existence of arbitrage opportunities (see Section 4); * The equilibrium price is a function of the utility landscape (see Section 3); * The price impact is a function of the utility landscape (see Section 3). **Remark 2.3**.: _To keep the presentation simple and accessible we restrict ourselves to markets with only two assets \((X,Y)\). The supply levels are denoted by \((x,y)\)._ In this manuscript we focus on financial markets, though Definition 2.2 also describes non-financial markets. To have a concrete example of an asset pair in mind one could think of the asset pair \((X,Y)\) as (US dollar, gold) where the unit of gold is measured in ounces. **Definition 2.4** (Ideal market).: _A market associated to the utility function \(U(x,y)=\log x+\log y\) is called ideal market (see Figure 1 and Figure 2). For the origin of this terminology see Section 6._ **Example 2.5**.: _The decentralized cryptocurrency exchange Uniswap uses the ideal market for its automated market maker protocol; see [2]._ **Example 2.6**.: _Another example of a utility function is the Cobb-Douglas production function \(U(x,y)=Ax^{\beta}y^{\alpha}\) for some \(A>0\) and \(0<\alpha,\beta<1\)._ **Remark 2.7** (Discrete supply levels).: _Often supply levels must be discrete as it is not possible to buy fractions of certain assets. Definition 2.2 can be extended to discrete supply levels by introducing the set of admissible supply states as a discrete subset \(\mathcal{P}\subset[0,\infty)\times[0,\infty)\). In this scenario, the utility function \(U\) would still be defined on a continuum but supply levels must take values in \(\mathcal{P}\)._ Figure 1. Contour plot of the utility function \(U(x,y)=\log x+\log y\). **Remark 2.8** (Utility functions of real markets).: _In real markets utility functions are often not observable. To address this complexity, we propose the following classification of markets:_ * _In_ implicit _markets one cannot directly observe the utility function. However, as we explain later, it is possible to deduce some information about the utility function from observing trades, prices, volume and activity of the market. 
An example would be Over-the-Counter (OTC) markets._ * _In_ semi-explicit _markets only certain features of the utility function can be observed. Examples are centralized Limit Order Book (LOB) exchanges like stock exchanges or foreign exchange markets. It is possible to read off the present so-called iso-utility from the current limit order book state (see Section_ 4_)._ * _In_ explicit _markets the utility function of the market is explicitly given. Examples are Automated Market Makers (AMMs). For instance, the original Uniswap protocol uses the utility function_ \(U(x,y)=\log(x)+\log(y)\)_. For more information and further examples we refer to_ _[_1_]__._ **Remark 2.9** (Interpretation of the supply level \((x,y)\)).: _Tied to a market \(\mathcal{M}\) is the notion of supply levels \((x,y)\) of the asset pair \((X,Y)\) which has the following interpretation:_ * _The number_ \(y\) _may denote the maximal number of units of the asset_ \(Y\) _that can be traded for the asset_ \(X\)_. The number_ \(x\) _may denote the maximal number of units of the asset_ \(X\) _that can be traded for the asset_ \(Y\) _(see Section_ 4_)._ * _The level_ \((x,y)\) _may represent the equilibrium point of the supply and demand curve of the asset pair_ \((X,Y)\)_. For more details we refer to Remark_ 2.10_._ **Remark 2.10** (Determination of the current supply level).: _The current supply \((x,y)\) describes the current equilibrium of the market \(\mathcal{M}\). How this equilibrium is determined is traditionally addressed in microeconomics through the analysis of supply and demand curves (Figure 2 shows a 3d-plot of the utility function \(U(x,y)=\log x+\log y\)). For a typical illustration of a demand and supply curve we refer to Figure 3. The demand curve reflects the consumer's perspective (or buyer's perspective), illustrating how the quantity demanded diminishes as prices escalate and vice versa. In contrast, the supply curve represents the producer's (or seller's) viewpoint, revealing how the quantity supplied expands as prices climb and contracts when prices decline. The intersection of these two curves, known as the equilibrium point, signifies the optimal price and quantity levels harmonizing supply and demand forces within the market. This equilibrium point determines the current supply level \(y\) and price \(P_{\frac{x}{y}}\). The supply level of \(x\) is calculated via the formula \(x=P_{\frac{x}{y}}\cdot y\). The equilibrium point exhibits stability, as the market will gravitate back to it if any imbalance occurs. For instance, if the price surpasses the equilibrium point, an excess of goods supplied over those demanded will emerge, causing the price to drop until equilibrium is reestablished._ **Remark 2.11** (Money as an absolute versus relative measure of value).: _In traditional economic theory, the role of money is paramount in establishing the interplay of supply and demand. Often one employs money as an absolute measure of value in the sense that one dollar always has the exact same value. This works well in microeconomics due to the relatively small quantities involved. The use of money as an absolute measure of value becomes problematic when attempting to bridge the gap between micro- and macroeconomics. 
When developing a theory of markets that can seamlessly transition from micro- to macroeconomic markets via aggregation it becomes necessary to employ money as a relative measure of value, i.e., the value of one US dollar is relative and depends on many factors; e.g., on the overall money supply or on the overall supply of consumer goods. For that reason, our framework assigns fiat money (US dollar), usually denoted as \(X\), the same role as any other asset or good, usually denoted as \(Y\). The value of money is derived from its use, namely that it can be easily exchanged for any other asset or good on a market \((X,Y_{1},\ldots,Y_{n})\)._ Figure 3. Example of a supply and demand curve where the price is plotted on the vertical axis. **Remark 2.12** (Equilibrium description of markets).: _Definition 2.2 allows one to analyze markets from an equilibrium perspective. It assumes that the market is in equilibrium at current supply level \((x,y)\). In this article, the exact value of the utility \(U(x,y)\) at supply level \((x,y)\) plays only a minor role. More important is to compare the utilities \(U(x,y)\) and \(U(\tilde{x},\tilde{y})\) of different supply levels \((x,y)\) and \((\tilde{x},\tilde{y})\), i.e., to identify which supply levels have higher utility._ We note that the conditions in Definition 2.2 have some important structural consequences on the utility function \(U\). **Lemma 2.13** (Regularity of the utility function \(U\)).: _Consider a market \(\mathcal{M}\) with utility function \(U\). Then the utility function \(U\) is almost everywhere differentiable with_ \[\infty>\partial_{x}U>0\qquad\text{and}\qquad\infty>\partial_{y}U>0\qquad\text {for a.e. }(x,y). \tag{1}\] Proof.: It is a well known fact that quasi-convex functions are differentiable almost everywhere (see, e.g., comment after Theorem 5.3 in [2]). This implies the existence of the partial derivatives \(\partial_{x}U\) and \(\partial_{y}U\). The lower bound in (1) follows directly from the fact that a utility function \(U\) is strict monotone increasing by Definition 2.2. **Remark 2.14** (Origins of the Definition 2.2).: _In [1] a similar approach was used to describe automated market makers via utility functions. In this work, we use utility functions to describe general markets. Our assumptions on the utility function \(U\) given in Definition 2.2 are less restrictive compared to [1]. Specifically, we excluded condition_ \[\text{\bf(Unbounded from below)}\qquad\lim_{x\to 0}U(x,y)=\lim_{y\to 0}U(x,y)=-\infty.\] _Otherwise, Definition 2.2 would not include a very important example, namely limit order book markets (see Section 4). Condition_ **(Unbounded from below)** _forces level sets - called indifference curves or iso-utils in our manuscript - to always have an unbounded domain \((0,\infty)\). However, when considering an iso-util coming from a (finite) limit order book, the domain of the iso-util is bounded. For more details we refer to Section 3, Proposition 3.6, and Section 4._ **Remark 2.15** (Conditions in Definition 2.2).: _The condition_ **(Quasi concavity)** _plays an important role. It means that the traders who constitute a market reached consensus, and hence rules out price arbitrage opportunities (cf. Remark 4.51 below). If_ **(Quasi concavity)** _is not satisfied we say that the market is not settled or equilibrated. Under relatively weak regularity assumptions, quasi concavity can be strengthened to concavity, after a smooth, strict monotone reordering of preferences (see Theorem 3 in [1]). 
The condition_ **(Strict monotonicity)** _implies that more supply always yields more utility._ **(Unboundedness from above)** _corresponds to unsaturability. The combination of_ **(Quasi concavity)** _and_ **(Strict monotonicity)** _entails that on a market with higher utility more assets can be exchanged with less price impact. If one would use_ **(Strict concavity)** _instead of_ **(Quasi concavity)** _then distributed portfolios would have higher utility than concentrated portfolios._ **Remark 2.16** (Additional conditions on the utility function \(U\)).: _There are several additional conditions that can be added to the definition of a utility function, as for example Inada, Inada+ or single-crossing conditions. Those conditions serve several purposes; for instance, to ensure existence of solutions to optimization problems or path independence of a trading loop. We refer to [1] for more details._ ## 3. Temperature, iso-utils and equilibrium prices In this section we review how (marginal) prices are calculated in markets. **Definition 3.1** (Temperature and mean activity).: _We consider a market \(\mathcal{M}\) with utility function \(U\) at supply level \((x,y)\). Then we call_ \[T:=e^{U(x,y)}\] _the temperature of the market \(\mathcal{M}\) at supply level \((x,y)\). We call \(A:=\sqrt{T}\) the mean activity of the market \(\mathcal{M}\) at supply level \((x,y)\)._ **Remark 3.2**.: _We call \(T\) temperature because of an analogy to thermodynamics. Moreover, we call \(\sqrt{T}\) mean activity because it coincides with the mean arrival rate of buyers and sellers in a canonical microscopic model of a market, modeling the arrivals of buyers and sellers via Poisson processes. The details of this model and the motivation from meta principles of thermodynamics will be covered in a forthcoming sequel._ The definition of temperature in Definition 3.1 motivates the definition of iso-utils, which identify supply levels \((x,y)\) with the same temperature. In microeconomics, iso-utils are known under the name _indifference curves_ (see, e.g., [1]). We prefer to use the name iso-util to point out similarities to iso-thermals in thermodynamics. **Definition 3.3** (Iso-utils).: _We consider a market \(\mathcal{M}\) with utility function \(U\). The iso-utils of \(\mathcal{M}\) are defined as the level sets of the utility function \(U\), i.e., the iso-util \(I\) of \(U\) associated to the temperature \(T\in\mathbb{R}_{+}\) is defined by_ \[I:=\left\{(x,y)\ |\ U(x,y)=\log T\right\}.\] **Example 3.4** (Graphs of Iso-utils).: _In Figure 4, we plot the iso-utils of an ideal market. In Figure 11, we illustrate the iso-util of a limit order book. In Section 5 we discuss the iso-utils of a single consumer (see Figure 16), the iso-utils of a single producer (see Figure 17), and the iso-utils of an aggregate consumer-producer market that can either be unsettled (see Figure 19) or settled (see Figure 20)._ **Remark 3.5** (Role of iso-utils).: _Iso-utils play a central role in our theory of market dynamics as they characterize utility functions up to strict monotone transformations. Together with the current supply level iso-utils determine the best possible trades in a market, and therefore (marginal) prices; see Definition 3.13 below._ Iso-utils can always be described as a graph of a function. **Proposition 3.6** (Function representation of an iso-util).: _Consider a market \(\mathcal{M}\) with utility function \(U\). 
For arbitrary \(T\in(0,\infty)\) we consider a non-empty iso-util_ \[I=\left\{(x,y)\ |\ U(x,y)=\log T\right\}\neq\emptyset.\] _Then there exist a number \(d_{f}\in\mathbb{R}_{+}\cup\left\{\infty\right\}\) and a function \(f:D_{f}\to[0,\infty)\) defined on \(D_{f}=(0,d_{f})\) such that:_ 1. _The function_ \(f\) _is convex._ 2. _It holds_ \(\lim_{x\to d_{f}}f(x)=0\)_._ 3. _The iso-util_ \(I\) _is the graph of_ \(f\)_, i.e.,_ \[I=\left\{(x,f(x))\ |\ x\in D_{f}\right\}.\] _The function \(f\) is called function representation of the iso-util \(I=\left\{U=\log T\right\}\). Moreover, the left and right derivatives \(f^{\prime}_{-}\) and \(f^{\prime}_{+}\) of the function \(f\) exist everywhere on \(D_{f}\) and \(f\) is differentiable almost everywhere (a.e.) on \(D_{f}\). It also holds that_ \[f^{\prime}(x)=-\frac{\partial_{x}U(x,f(x))}{\partial_{y}U(x,f(x))}\qquad\text{ a.e.} \tag{2}\] _The functions \(f^{\prime}_{-}\), \(f^{\prime}\), and \(f^{\prime}_{+}\) are non-decreasing and satisfy_ \[-\infty<f^{\prime}_{-}\leq f^{\prime}_{+}<0 \tag{3}\] _everywhere and_ \[-\infty<f^{\prime}_{-}\leq f^{\prime}\leq f^{\prime}_{+}<0\qquad\text{a.e.} \tag{4}\] _If the utility function \(U\) is unbounded from below then the domain of the function \(f\) is given by \(D_{f}=(0,\infty)\)._ Figure 4. Iso-utils of an ideal market, i.e., with respect to the utility function \(U(x,y)=\log x+\log y\). Proof of Proposition 3.6.: The aim is to define a function \(f:(0,d_{f})\to[0,\infty)\) which satisfies the desired conditions. We start by observing that by definition of quasi concavity the iso-util \(\left\{U=\log T\right\}\) is the boundary of the convex set \(\left\{U\geq\log T\right\}\). Let us consider the set \(D_{f}\) as given by the projection of \(I\) onto the \(x\)-axis. Because the set \(I\) is convex it follows that \(D_{f}=(0,d_{f})\) for some \(d_{f}\in\mathbb{R}_{+}\cup\left\{\infty\right\}\). Both observations imply that the normalized tangent vector \(\vec{v}(x,y)\) of the iso-util at point \((x,y)\) exists almost everywhere and is orthogonal to the gradient \(\nabla U(x,y)\). We thus have \[\vec{v}(x,y)=\frac{1}{|\nabla U(x,y)|}\left(\partial_{y}U(x,y),-\partial_{x}U(x,y) \right)^{\top}. \tag{5}\] By (1) it holds \(\partial_{y}U>0\) and hence \(\vec{v}\) can be written as \[\vec{v}(x,y)=\frac{1}{\left|\left(1,-\frac{\partial_{x}U}{\partial_{y}U} \right)^{\top}\right|}\left(1,-\frac{\partial_{x}U}{\partial_{y}U}\right)^{ \top}.\] Moreover, we also obtain by (1) that \[-\infty<-\frac{\partial_{x}U}{\partial_{y}U}<0, \tag{6}\] which implies that the tangent vector \(\vec{v}(x,y)\) always points to the lower right quadrant. Therefore, for any \(x_{0}\in D_{f}\) the iso-util \(I\) contains exactly one element \((x_{0},y)\in I\) with \(x\)-coordinate given by \(x_{0}\). This is enough to show that the iso-util \(I\) can be written as the graph of a function \(f\). Let us now verify the claimed properties of the function \(f\). First, it must hold that \(\lim_{x\to\infty}f(x)=0\), otherwise we would have a contradiction to the fact that the utility function \(U\) is unbounded from above. Next, we observe that the epigraph of the function \(f\) is given by the superlevel set \(\{(x,y)\ |\ U(x,y)\geq\log T\}\), which is convex by the quasi concavity of the utility function \(U\). Therefore, the function \(f\) is convex; and this implies existence and monotonicity of \(f^{\prime}_{-}\), \(f^{\prime}_{+}\) everywhere; as well as the existence and monotonicity of \(f^{\prime}\) a.e. 
The desired identity (2) follows from (5) and the desired inequality (3) follows from a combination of (2) and (6); as well as from the convexity of the function \(f\). In addition, the outer estimates in (4) follow from (3) and the observation that the function \(f\) is differentiable a.e; the inner inequality in (4) follows again directly from the convexity of the function \(f\). Finally, it remains to show that \(D_{f}=(0,\infty)\) if the utility function is unbounded from below. To achieve this it suffices to show that for any \(x>0\) it holds \(x\in D_{f}\). Since the utility function \(U\) is unbounded from below, i.e., \(\lim_{y\to 0}U(x,y)=-\infty\), unbounded from above, i.e., \(\lim_{y\to\infty}U(x,y)=\infty\), and continuous it follows that there exists a \(y>0\) such that \(U(x,y)=e^{T}\). But this implies \(x\in D_{f}\). **Remark 3.7**.: _The same statement as in Proposition 3.6 is true with the roles of \(x\) and \(y\) interchanged. Specifically, the iso-util \(I\) can also be written as the graph of a function \(g\) via_ \[I=\left\{(g(y),y)\ |\ y\in D_{g}\right\}.\] **Remark 3.8**.: _Let \(d_{f}\in\mathbb{R}_{+}\cup\{\infty\}\). We observe that as soon as a function \(f:(0,d_{f})\to[0,\infty]\) is convex and satisfies \(\lim_{x\to d_{f}}f(x)=0\), then the graph \(I:=\left\{(x,f(x))\ |\ x\in(0,d_{f})\right\}\) defines an iso-util of some utility function \(U\)._ We distinguish between two parts of the iso-util, the ask part and the bid part. The reason for this becomes apparent in Section 4 below. **Definition 3.9**.: _Let us consider a non-empty iso-util_ \[I=\{(x,y)\ |\ U(x,y)=\log T\}\neq\emptyset.\] _Assume that the present supply level of the market is given by \((x_{0},y_{0})\). Then the set_ \[I_{a}:=\{(x,y)\in I\ \ |\ x\leq x_{0}\}\] _is called the ask part of the iso-util \(I\), and the set_ \[I_{b}:=\{(x,y)\in I\ \ |\ x\geq x_{0}\}\] _is called the bid part of the iso-util \(I\)._ For an illustration we refer to Figure 11. Therein, the bid part of the iso-util is colored in blue and the ask part is colored in red. **Example 3.10** (Iso-utils of an explicit market).: _As it is described in [1], Uniswap protocol iso-utils are described by the equations \(x\cdot y=T\) for all \(T\in\mathbb{R}_{+}\). They are illustrated in Figure 4._ **Example 3.11** (Iso-util of a limit order book market).: _A limit order book can be understood as defining an iso-util of the underlying utility function and vice versa. For more details, we refer to Section 4._ **Remark 3.12** (Iso-utils and automated market makers).: _As mentioned above, an example of an explicit market are automated market maker liquidity pools. These are protocols that facilitate trading on blockchains. Simplifying, they work in the following way. The protocol provides two urns: The first one is filled with \(x_{0}\) units of the asset \(X\) and the second with \(y_{0}\) units of the asset \(Y\). Hence, the current supply level of the market is given by \((x_{0},y_{0})\). When a trader wants to exchange \(x\) units of \(X\) into the asset \(Y\) via the protocol, the protocol will take the \(x\) units of the trader and add them to the first urn. Then, it determines via a formula how many units \(y\) of \(Y\) the trader receives out of the second urn. After the exchange (or asset swap) there are \(x_{0}+x\) many units of \(X\) in the first urn, and \(y_{0}-y\) many units of \(Y\) in the second urn. Therefore, the new supply level of this market is given by \((x_{0}+x,y_{0}-y)\). 
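As a concrete numerical illustration of this urn mechanism, here is a minimal sketch (illustrative code, not part of the original text; it assumes the ideal market of Definition 2.4, whose iso-utils are the constant-product curves \(x\cdot y=\mathrm{const}\), and all names and numbers are made up for the example). The general rule behind it is stated next.

```python
def ideal_market_swap(x0, y0, dx):
    """Swap dx units of X into Y on an ideal market (U(x,y) = log x + log y),
    i.e. keep the product x * y constant along the current iso-util.
    Returns the amount dy of Y paid out and the new supply level.

    Illustrative sketch; fees and discreteness are ignored.
    """
    assert x0 > 0 and y0 > 0 and dx > 0
    k = x0 * y0                      # temperature T = exp(U) = x * y stays fixed
    y_new = k / (x0 + dx)            # solve (x0 + dx) * y_new = k
    dy = y0 - y_new
    return dy, (x0 + dx, y_new)

x0, y0 = 1_000.0, 10.0               # urns of the pool: 1000 units of X, 10 units of Y
print("marginal price of Y:", x0 / y0)                 # x0 / y0 = 100 units of X per Y
dy, level = ideal_market_swap(x0, y0, dx=100.0)
print("received", dy, "Y at realized price", 100.0 / dy, "X per Y")  # > 100: price impact
```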
How many units \(y\) the trader receives from the protocol is usually calculated via the help of an iso-util, taking care that after the trade the overall utility does not decrease. As described in [1], this is achieved via the formula_ \[y=\max\left\{y\ |\ U(x_{0}+x,y_{0}-y)\geq U(x_{0},y_{0})\right\}.\] _If the utility function is sufficiently regular, the maximum \(y\) is obtained on an iso-util, i.e., \(U(x_{0}+x,y_{0}-y)=U(x_{0},y_{0})\)._ Let us now explain how iso-utils determine prices in a market. **Definition 3.13** (The realized price, marginal price, and price impact of the asset \(Y\)).: _Assume that in a market \(\Delta x>0\) units of \(X\) were exchanged for \(\Delta y>0\) units of \(Y\). Then the realized price \(P_{\frac{x}{y}}\) of one unit of \(Y\) in terms of units of \(X\) in this trade is given by_ \[P_{\frac{x}{y}}(\Delta x)=\frac{\Delta x}{\Delta y}.\] _The realized price \(P_{\frac{y}{x}}\) in terms of units of \(Y\) in this trade is given by_ \[P_{\frac{y}{x}}(\Delta x)=P_{\frac{x}{y}}(\Delta x)^{-1}=\frac{\Delta y}{ \Delta x}.\] _If this trade caused an iso-util supply change, i.e., the supply level \((x,y)\) before the trade and the supply level \((x+\Delta x,y-\Delta y)\) after the trade are on the same iso-util \(I=\{U=\log T\}\), then \(P_{\frac{y}{x}}(\Delta x)\) is given by_ \[P_{\frac{y}{x}}(\Delta x)=\frac{f(x)-f(x+\Delta x)}{\Delta x}\] _where \(f(x)\) denotes the function representation of the iso-util \(I\) (see Proposition 3.6). Sending the trading amount \(\Delta x\downarrow 0\) yields the marginal price. More precisely, we call_ \[P_{\frac{y}{x}}:=\lim_{\Delta x\downarrow 0}P_{\frac{y}{x}}(\Delta x)=-f^{ \prime}_{+}(x)\] _and_ \[P_{\frac{x}{y}}:=-\frac{1}{f^{\prime}_{+}(x)}\] _the marginal prices of \(Y\) at supply level \((x,y)\). Here, \(f^{\prime}_{+}(x)\) denotes the right derivative of the function representation \(f(x)\), which exists everywhere. The difference between the realized price and the marginal price is called price impact._ In economics, marginal prices are known under the name marginal rate of substitution. **Remark 3.14** (Interpretation of the marginal prices \(P_{\frac{x}{y}}\) and \(P_{\frac{y}{x}}\)).: _Consider a market at supply level \((x,y)\). Then:_ * \(P_{\frac{x}{y}}\) _expresses how many units of_ \(X\) _one has to pay for one unit of_ \(Y\) _if the trade were infinitesimally small._ * \(P_{\frac{y}{x}}\) _expresses how many units of_ \(Y\) _one gets for one unit of_ \(X\) _if the trade were infinitesimally small._ **Remark 3.15** (The realized price, marginal price, and price impact of the asset \(X\)).: _Similar to Definition 3.13 one can define the realized price, marginal price, and price impact of the asset \(X\). Then, the marginal price of the asset \(X\) is given by_ \[\hat{P}_{\frac{y}{x}}=-f^{\prime}_{-}(x)\quad\text{and}\quad\hat{P}_{\frac{x}{ y}}=-\frac{1}{f^{\prime}_{-}(x)}.\] _Here, \(f^{\prime}_{-}\) denotes the left derivative of \(f(x)\). The marginal price of \(X\) has the following interpretation:_ * \(\hat{P}_{\frac{x}{y}}\) _expresses how many units of_ \(X\) _one receives for selling one unit of_ \(Y\) _if the trade were infinitesimally small._ * \(\hat{P}_{\frac{y}{x}}\) _expresses how many units of_ \(Y\) _one needs to sell in order to receive one unit of_ \(X\) _if the trade were infinitesimally small._ **Remark 3.16** (Choice of numeraire).: _When quoting prices one needs to choose a numeraire, i.e., the standard by which value is measured. 
In the US stock market the numeraire is the US dollar, as prices of stocks are expressed as an equivalent amount of US dollars. Note that the difference between \(P_{\frac{x}{y}}\) and \(P_{\frac{y}{x}}\) is the choice of the numeraire. For \(P_{\frac{x}{y}}\), the numeraire is the asset \(X\) (which we usually think of as representing the US dollar), and for \(P_{\frac{y}{x}}\) the numeraire is the asset \(Y\)._ **Example 3.17**.: _Let us consider Figure 11 below. It shows an iso-util of a limit order book market. We observe that the iso-util is piecewise linear and has kinks. At the supply levels corresponding to the kinks, the marginal prices of \(X\) and \(Y\) do not coincide. More precisely, at the current supply level \((x,y)\) the best ask price of the limit order book corresponds to \(P_{\frac{x}{y}}\) which is equal to the inverse of the absolute value of the right derivative of \(f(x)\) at the current supply level. The best bid price of the limit order book corresponds to \(\hat{P}_{\frac{x}{y}}\) and is equal to the inverse of the absolute value of the left derivative of \(f(x)\). The difference \(|P_{\frac{x}{y}}-\hat{P}_{\frac{x}{y}}|\) of both prices describes the current bid-ask spread. If the iso-util is smooth, marginal prices of \(X\) and \(Y\) are given by the inverse of the absolute value of the slope of the tangent line through the current supply level \((x,y)\). For more details we refer to Section 4._ The next proposition shows how marginal prices can be directly computed via the utility function \(U\). **Proposition 3.18**.: _We consider a market \(\mathcal{M}\) with utility function \(U\). Let_ \[\nabla_{\min}U=(\partial_{x,\min}U,\partial_{y,\min}U)^{\top}\quad\text{and} \quad\nabla_{\max}U=(\partial_{x,\max}U,\partial_{y,\max}U)^{\top}\] _denote the minimal and maximal element of the subgradient \(\partial U\). Then the marginal price of the asset \(X\) at supply level \((x,y)\) is given by_ \[\hat{P}_{\frac{x}{y}}=\frac{\partial_{y,\min}U(x,y)}{\partial_{x,\min}U(x,y)} \tag{7}\] _and the marginal price of the asset \(Y\) at supply level \((x,y)\) is given by_ \[P_{\frac{x}{y}}=\frac{\partial_{y,\max}U(x,y)}{\partial_{x,\max}U(x,y)}. \tag{8}\] _If \(U\) is differentiable at \((x,y)\) then the marginal prices of \(Y\) and \(X\) coincide and are given by_ \[P_{\frac{x}{y}}=\hat{P}_{\frac{x}{y}}=\frac{\partial_{y}U(x,y)}{\partial_{x}U(x,y)}. \tag{9}\] **Remark 3.19** (Interpretation of formula (9)).: _We observe that by (9) a small \(\partial_{x}U\) and a large \(\partial_{y}U\) result in a high price. The reason is that more supply of \(X\) only yields a small gain in utility, whereas more supply of \(Y\) yields a large gain of utility. So traders exchange excess \(X\) directly into \(Y\), resulting in a higher price for \(Y\)._ Proof of Proposition 3.18.: Let us focus on deducing the formula in (8). The desired formula in (7) follows from a similar argument. Let the function \(f\) parameterise the iso-util of the market \(\mathcal{M}\) with supply level \((x,y)\). Then by Definition 3.13 the price \(P_{\frac{y}{x}}\) is given by \[P_{\frac{y}{x}}=-f^{\prime}_{+}(x),\] where \(f^{\prime}_{+}\) denotes the right derivative of \(f\). Because the function \(f\) is convex it holds that \[f^{\prime}_{+}=-\frac{\partial_{x,\max}U}{\partial_{y,\max}U}.\] Using the formula \(P_{\frac{x}{y}}=\left(P_{\frac{y}{x}}\right)^{-1}\) yields the desired result. Finally, suppose that \(U\) is differentiable. 
Then the subgradient consists of exactly one element, namely the gradient, and the desired formula in (9) readily follows. **Corollary 3.20** (Monotonicity of prices).: _We consider a market \(\mathcal{M}\) with utility function \(U\). If the utility function \(U\) is concave, then the marginal price \(P_{\frac{x}{y}}\) is increasing in \(x\) and decreasing in \(y\). If the the utility function \(U\) is strictly concave then the marginal price \(P_{\frac{x}{y}}\) is strictly increasing in \(x\) and strictly decreasing in \(y\)._ Proof of Corollary 3.20.: We observe that because the utility function \(U\) is concave it follows that for fixed \(y\) the function \(x\mapsto\partial_{x}U(x,y)\) is decreasing as a function of \(x\), and similarly that \(y\mapsto\partial_{y}U(x,y)\) is a decreasing as a function of \(y\). Hence, the formula (9) yields the desired statement. **Remark 3.21**.: _The statement of Corollary 3.20 does not hold if the assumptions are relaxed to quasi concave utility functions \(U\). However, this is not very restrictive as in many situations quasi concavity can be strengthened to concavity (see also Remark 2.15)._ **Remark 3.22** (Information contained in utility functions that is not contained in the iso-utils).: _The additional information that is contained in the utility function is how much utility is gained or lost when moving from one iso-util to another. However, as in the field of economics, often the actual numerical values assigned by a utility function are not inherently meaningful on their own. That is, the fact that a utility function assigns a utility of 10 to one outcome and 20 to another does not necessarily mean that the second outcome is twice as good as the first. What matters is the ranking of different outcomes and how changes in the outcomes affect this ranking. Therefore, we concentrate on iso-utils when describing a market and not on its utility function. This is also the reason why utility functions in Definition 2.2 are defined as quasi concave and not as concave functions._ ## 4. Limit order book markets As discussed in Remark 2.8, limit order book markets are semi-explicit. The utility function is not observable but the current iso-util can be derived from a snapshot of the limit order book. It turns out that this correspondence is one-to-one. This section is partly inspired by [20], where the relation between automated market makers and limit order book markets is studied for smooth iso-utils. As limit order books are naturally discrete objects and give rise to piecewise linear iso-utils, we extend the analysis to non-smooth iso-utils. This is important because the jumps of the slopes in non-smooth iso-utils contain important information, e.g., the current bid-ask spread. For that purpose, we define the limit order book via supply and demand measures covering the smooth and non-smooth case simultaneously. ### Supply and demand measures and limit order books Supply and demand measures model the supply and demand of a certain asset for a given price range. We will use them to define limit order books. The advantage of this approach is that it can distinguish between settled and unsettled limit order books. In unsettled limit order books the matching of overlapping buy and sell limit orders is not yet executed. Unsettled limit order books arise often. For example, the opening and closing auction at a stock exchange results in an unsettled limit order book. 
When aggregating markets (see Section 5 below) unsettled limit order books also appear naturally. **Remark 4.1**.: _In this section, we fix a market \(\mathcal{M}\) of the asset pair \((X,Y)\). We think of the asset \(X\) as the numeraire, e.g., US dollar (see Remark 3.16)._ **Definition 4.2** (Supply and demand measures \(\mu_{s}\) and \(\mu_{d}\) and the limit order book \(\mathscr{L}\).).: _A positive Borel measure \(\mu_{s}\) on the space \((0,\infty)\) is called supply measure if_ \[\lim_{p\to 0}\mu_{s}((0,p])=0. \tag{10}\] _A positive Borel measure \(\mu_{d}\) on the space \((0,\infty)\) is called demand measure if_ \[\lim_{p\to\infty}\mu_{d}((p,\infty))=0. \tag{11}\] _The associated unsettled limit order book \(\mathscr{L}\) is given by the ordered pair \(\mathscr{L}=(\mu_{d},\mu_{s})\). We say that the limit order book is settled if there is a number \(m\in[0,\infty)\), called mid-price, such that \(\operatorname{supp}(\mu_{d})\subset(0,m)\) and \(\operatorname{supp}(\mu_{s})\subset(m,\infty)\). Here \(\operatorname{supp}(\mu)\) denotes the support of the measure \(\mu\). We also define the best bid price as \(p_{b}:=\sup\operatorname{supp}\mu_{d}\) and the best ask price as \(p_{a}:=\inf\operatorname{supp}\mu_{s}\). In particular, the limit order book is settled if and only if \(p_{b}\leq p_{a}\)._ The measures \(\mu_{s}\) and \(\mu_{d}\) are also sometimes referred to as volume profile or depth profile in the literature. **Remark 4.3** (Interpretation of supply and demand measures \(\mu_{s}\) and \(\mu_{d}\)).: _For a Borel-measurable subset \(A\subset(0,\infty)\) the number \(\mu_{d}(A)\geq 0\) represents the total buy-side volume of asset \(Y\) at prices in the set \(A\). Similarly, the number \(\mu_{s}(A)\geq 0\) represents the total sell-side volume of asset \(Y\) at prices in the set \(A\). The interpretation of (10) is that as the price decreases to zero the supply of the asset \(Y\) vanishes. The interpretation of (11) is that as the price increases to infinity the demand for the asset \(Y\) vanishes._ **Example 4.4** (Limit order book).: _We give a hypothetical example of a settled discrete limit order book in Table 1 and of an unsettled limit order book in Table 2. The unsettled values in Table 2 are chosen to be extreme on purpose in order to get better illustrations. Let us focus on Table 1. Each limit order consists of three parts: First, the sign or direction of the order (i.e., buy or sell order); second, the quantity or volume of the asset; and third, the limit price. Limit buy orders appear on the bid side and limit sell orders appear on the ask side. \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{2}{c}{Bid} & \multicolumn{2}{c}{Ask} \\ \hline Price & Quantity & Price & Quantity \\ \hline \$100 & 12 & \$110 & 12 \\ \$94 & 10 & \$140 & 20 \\ \$80 & 20 & \$170 & 30 \\ \$40 & 30 & \$250 & 50 \\ \$10 & 50 & \$500 & 50 \\ \hline \hline \end{tabular} \end{table} Table 1. Example of a settled limit order book which results from the adiabatic clearing introduced in Proposition 4.13 below of the unsettled limit order book in Table 2. The demand measure \(\mu_{d}\) of the limit order book in Table 1 is given by_ \[\mu_{d}=50\delta_{10}+30\delta_{40}+20\delta_{80}+10\delta_{94}+12\delta_{100},\] _where \(\delta_{x}\) denotes the Dirac measure at the point \(x\). 
The supply measure \(\mu_{s}\) is given by_ \[\mu_{s}=12\delta_{110}+20\delta_{140}+30\delta_{170}+50\delta_{250}+50\delta_{ 500}.\] _The limit order book is settled as the supports of the demand and supply measures satisfy_ \[\operatorname{supp}(\mu_{d})\subset(0,100]\qquad\text{and}\qquad\operatorname {supp}(\mu_{s})\subset[110,\infty).\] _Therefore, the mid-price \(m\) can be chosen as any number in the bid-ask spread \(m\in(100,110)\) with best bid price at 100 and best ask price at 110; i.e., the mid-price \(m\) is not unique. Often, the convention is to use the midpoint of the bid-ask spread, which in our case would correspond to choosing \(m=105\)._ The conditions (10) and (11) allow an alternative characterization of supply and demand measures via the _Remaining Demand Function_ (RDF) and the _Remaining Supply Function_ (RSF). **Definition 4.5** (Remaining demand and remaining supply functions).: _A function \(F_{d}:(0,\infty)\to(0,\infty)\) is called remaining demand function (RDF) if it satisfies the following conditions:_ * _The function_ \(F_{d}\) _is non-increasing._ * _The function_ \(F_{d}\) _is right-continuous._ * _The function_ \(F_{d}\) _vanishes at infinity, i.e.,_ \(\lim_{p\to\infty}F_{d}(p)=0\)_._ _A function \(F_{s}:(0,\infty)\to(0,\infty)\) is called remaining supply function (RSF) if it satisfies the following conditions:_ * _The function_ \(F_{s}\) _is non-decreasing._ * _The function_ \(F_{s}\) _is right-continuous._ * _The function_ \(F_{s}\) _vanishes at_ \(0\)_, i.e.,_ \(\lim_{p\to 0}F_{s}(p)=0\)_._ _We also use the notation \(RDF(p):=F_{d}(p)\) and \(RSF(p):=F_{s}(p)\). \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{2}{c}{Bid} & \multicolumn{2}{c}{Ask} \\ \hline Price & Quantity & Price & Quantity \\ \hline \$300 & 10 & \$50 & 10 \\ \$135 & 20 & \$100 & 12 \\ \$110 & 19 & \$105 & 14 \\ \$100 & 12 & \$110 & 25 \\ \$94 & 10 & \$140 & 20 \\ \$80 & 20 & \$170 & 30 \\ \$40 & 30 & \$250 & 50 \\ \$10 & 50 & \$500 & 50 \\ \hline \hline \end{tabular} \end{table} Table 2. Example of an unsettled limit order book. The RSF and RDF play a similar role as the cumulative distribution function and the complementary cumulative distribution function (or tail distribution function) which are known to characterize Borel probability measures on the real line. **Proposition 4.6** (Characterization of supply measures via remaining supply functions).: _Consider a Borel measure \(\mu_{s}\) on the space \((0,\infty)\). Then \(\mu_{s}\) is a supply measure if and only if the function \(F_{s}:(0,\infty)\to(0,\infty)\) given by \(F_{s}(p):=\mu_{s}((0,p])\) is a remaining supply function in the sense of Definition 4.5._ Proof of Proposition 4.6.: The proof is straightforward, namely one reproduces the arguments that probability measures on the real line are characterized by cumulative distribution functions. The main difference is that one needs to use condition (10) instead of the fact that probability measures have finite overall mass. We leave the details as an exercise. Similarly, we also have the following statement. **Proposition 4.7** (Characterization of demand measures via remaining demand functions).: _Consider a Borel measure \(\mu_{d}\) on the space \((0,\infty)\). Then \(\mu_{d}\) is a demand measure if and only if the function \(F_{d}:(0,\infty)\to(0,\infty)\) given by \(F_{d}(p):=\mu_{d}((p,\infty))\) is a remaining demand function in the sense of Definition 4.5._ The proof is very similar to the proof of Proposition 4.6 and left as an exercise. 
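To make this correspondence concrete, the following minimal sketch (illustrative code, not part of the original text; function names are assumptions) builds the remaining demand and supply functions of the settled book of Table 1 directly from its discrete supply and demand measures.

```python
def make_rdf(demand_orders):
    """Remaining demand function F_d(p): total buy volume at prices strictly above p.

    demand_orders: list of (price, quantity) limit buy orders, i.e. a discrete
    demand measure mu_d = sum of quantity * delta_price.
    """
    return lambda p: sum(q for price, q in demand_orders if price > p)

def make_rsf(supply_orders):
    """Remaining supply function F_s(p): total sell volume at prices less than or equal to p."""
    return lambda p: sum(q for price, q in supply_orders if price <= p)

# Discrete measures of the settled book in Table 1.
bids = [(100, 12), (94, 10), (80, 20), (40, 30), (10, 50)]
asks = [(110, 12), (140, 20), (170, 30), (250, 50), (500, 50)]
rdf, rsf = make_rdf(bids), make_rsf(asks)

m = 105.0                              # any price in the bid-ask spread (100, 110)
print(rdf(m), rsf(m))                  # both 0: the book is settled (Corollary 4.9)
print(rdf(90), rsf(150))               # 22 units demanded above 90; 32 units supplied up to 150
```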
**Remark 4.8** (Interpretation of RDF and RSF).: _Note that the remaining demand function \(F_{d}(p)\) represents the total volume of limit buy orders with a price strictly greater than \(p\). The remaining supply function \(F_{s}(p)\) represents the total volume of limit sell orders with a price less than or equal to \(p\)._

Using Proposition 4.6 and Proposition 4.7, we get the following immediate characterization of settled limit order books.

**Corollary 4.9**.: _A limit order book \(\mathscr{L}\) is settled if and only if there is a number \(m\in(0,\infty)\) such that \(F_{d}(m)=F_{s}(m)=0\), where \(F_{d}\) denotes the remaining demand function and \(F_{s}\) denotes the remaining supply function of the limit order book \(\mathscr{L}\). In this case the number \(m\) is a mid-price of the settled limit order book. Moreover, due to monotonicity the remaining demand function \(F_{d}\) and remaining supply function \(F_{s}\) have disjoint support._

**Example 4.10** (Unsettled and settled limit order book).: _In Figures 5 and 6, we give an example of an unsettled limit order book, which follows from the fact that the remaining demand function and remaining supply function overlap. In Figures 7 and 8, we provide their settled counterparts after the adiabatic clearing introduced in Proposition 4.13 below._

**Corollary 4.11**.: _A limit order book \(\mathscr{L}\) is settled with mid-price \(m\in(0,\infty)\) if and only if the function \(G:=F_{d}+F_{s}\) satisfies the following conditions:_

* _the function_ \(G\) _is non-increasing on_ \((0,m)\)_;_
* _it holds_ \(G(m)=0\)_;_
* _the function_ \(G\) _is non-decreasing on_ \((m,\infty)\)_._

### Adiabatic and iso-util clearing of a limit order book

Clearing describes the process to transform an unsettled limit order book \(\mathscr{L}\) into a settled limit order book \(\mathscr{L}_{\sigma}\). When the limit order book is unsettled there are two canonical ways for clearing it. We call the first mechanism **adiabatic clearing** and the second mechanism **iso-util clearing**. Let us first consider the adiabatic clearing process.

Figure 5. Example of an unsettled limit order book.

Figure 6. Illustration of the piecewise linear \(RDF\) (blue) and \(RSF\) (red) of the unsettled limit order book in Table 2.

Figure 7. Example of a settled limit order book which results from the adiabatic clearing from Proposition 4.13 of the unsettled limit order book in Figure 5.

Figure 8. Illustration of the piecewise linear \(RDF_{a}\) (blue) and \(RSF_{a}\) (red) of the settled limit order book in Table 1 after the adiabatic clearing from Proposition 4.13 of the unsettled limit order book in Table 2 and Figure 6.

In this mechanism overlapping buy and sell limit orders are matched and then removed from the limit order book. We need some preparation. Let \(p>0\) and recall that the left-limit \(\lim_{\hat{p}\uparrow p}RDF(\hat{p})\) denotes the number of units of \(Y\) that can be sold to limit buy orders with a price higher than or equal to \(p\) (note that \(RDF\) is defined to be right-continuous). Similarly, \(RSF(p)\) denotes the number of units that can be bought from limit sell orders with a price lower than or equal to \(p\). We define the quantity

\[Z(p):=\min\left\{\lim_{\hat{p}\uparrow p}RDF(\hat{p}),RSF(p)\right\}.\]

In a settled limit order book it holds \(Z(p)=0\) for all \(p>0\). If the market is unsettled, some traders misprice the asset \(Y\) in the sense that some are willing to buy at higher prices than those at which others are willing to sell, and we have \(Z(p)>0\) for some \(p>0\). This means that \(Z(p)\) denotes the total volume that can be offset by an arbitrageur for a risk-less profit via a simple buy low and sell high strategy. 
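For a purely atomic book the left-limit \(\lim_{\hat{p}\uparrow p}RDF(\hat{p})\) is simply the bid volume quoted at prices greater than or equal to \(p\), so \(Z(p)\) can be evaluated directly. The following Python sketch (the names and the brute-force search over quoted prices are ours) does this for the unsettled book of Table 2 and recovers the clearing volume \(Z=49\) obtained in Example 4.15 below.

```python
# Minimal sketch: the offsettable volume Z(p) for the unsettled book of Table 2.
bids = {300: 10, 135: 20, 110: 19, 100: 12, 94: 10, 80: 20, 40: 30, 10: 50}
asks = {50: 10, 100: 12, 105: 14, 110: 25, 140: 20, 170: 30, 250: 50, 500: 50}

def Z(p):
    # left-limit of RDF at p: bid volume posted at prices >= p
    demand = sum(q for price, q in bids.items() if price >= p)
    # RSF(p): ask volume posted at prices <= p
    supply = sum(q for price, q in asks.items() if price <= p)
    return min(demand, supply)

prices = sorted(set(bids) | set(asks))
p_star = max(prices, key=Z)
print(p_star, Z(p_star))  # 110 49: maximal offsettable (clearing) volume
```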
In an unsettled market the arbitrageur will seek to maximize her profit. This corresponds to finding \[p_{e}:=\arg\max_{p}Z(p), \tag{12}\] and then to trade all limit sell orders with a lower price than \(p_{e}\) against all limit buy orders with a price higher than \(p_{e}\). This is straightforward if the maximizer \(p_{e}\) is unique. The next lemma studies the case where the maximizer \(p_{e}\) is not unique. **Lemma 4.12**.: _We consider an unsettled limit order book \(\mathscr{L}\) with an unsettled remaining demand function \(RDF_{u}\) and an unsettled remaining supply function \(RSF_{u}\). Let_ \[p_{d}:=\inf\left\{p>0\ |\ RDF_{u}(p)\leq RSF_{u}(p)\right\}\] _denote the smallest possible price such that supply overcomes demand and_ \[p_{s}:=\sup\left\{p>0\ |\ RDF_{u}(p)\geq RSF_{u}(p)\right\}\] _the highest possible price such that demand overcomes supply._ _Then \(p_{d}\leq p_{s}\) and \(p_{d}<p_{s}\) implies that_ \[RDF_{u}(p)=RSF_{u}(p)\quad\text{for all}\quad p_{d}<p<p_{s}. \tag{13}\] _It also holds that_ \[Z(p_{s})=Z(p_{d})=Z(p)\quad\text{for all}\quad p_{d}<p<p_{s} \tag{14}\] _and_ \[Z(p_{s})=\max_{p}Z(p). \tag{15}\] _We call \(Z:=Z(p_{s})\) the clearing volume of the asset \(Y\)._ Proof of Lemma 4.12.: The statements \(p_{d}\leq p_{s}\) and (13) follow directly from the observation that the function \(RDF\) is non-increasing, the function \(RSF\) is non-decreasing, and the definitions of \(p_{d}\) and \(p_{s}\) are equivalent to \[p_{d}=\sup\left\{p>0\ |\ RDF_{u}(p)>RSF_{u}(p)\right\}\] and \[p_{s}=\inf\left\{p>0\ |\ RDF_{u}(p)<RSF_{u}(p)\right\}.\] Let us now turn to the identity in (14). If \(p_{d}=p_{s}\) there is nothing to show. Hence, we assume \(p_{d}<p_{s}\). The desired statements in (14) and (15) then follow from a straightforward argument using right-continuity and monotonicity of \(RDF_{u}\) and \(RSF_{u}\), as well as the identity in (13). **Proposition 4.13** (Adiabatic clearing a limit order book).: _We consider the situation of Lemma 4.12 with an unsettled limit order book \(\mathscr{L}\). Then_ \[RDF_{a}(p):=\begin{cases}\max(RDF_{u}(p)-Z,0)&\text{for }p<p_{d},\\ 0&\text{for }p\geq p_{d},\end{cases} \tag{16}\] _and_ \[RSF_{a}(p):=\begin{cases}0&\text{for }p<p_{s},\\ \max(RSF_{u}(p)-Z,0)&\text{for }p\geq p_{s},\end{cases} \tag{17}\] _define a remaining demand function and a remaining supply function of the settled limit order book \(\mathscr{L}_{\sigma}\)._ The best way to understand the formulas (16) and (17) is to observe that the quantity \(Z\) denotes the total volume of \(Y\) which was matched and offset by the arbitrageur's trading. As a consequence, this amount has to be subtracted from the limit order book. The proof of Proposition 4.13 is straightforward and left as an exercise. **Definition 4.14**.: _The procedure described in Proposition 4.13 is called adiabatic settlement or clearing, and we say that \(\mathscr{L}_{\sigma}\) is the adiabatic settled limit order book of the unsettled limit order book \(\mathscr{L}\)._ **Example 4.15**.: _The settled limit order book in Table 1 is the adiabatic clearing of the unsettled limit order book in Table 2. The corresponding unsettled and adiabatic settled remaining demand and supply functions are depicted in Figures 6 and 8, respectively. The adiabatic clearing procedure is also described in Table 3. 
Therein, the clearing volume is \(Z=49\) and all crossed-out limit buy and sell orders are offset by the arbitrageur._

\begin{table}
\begin{tabular}{l c c c c} \hline \hline Price & buy orders & \(RDF_{u}\) & \(RSF_{u}\) & sell orders \\ \hline \$500 & 0 & 0 & 211 & 50 \\ \$300 & 10 0 & 0 & 161 & 0 \\ \$250 & 0 & 10 & 161 & 50 \\ \$170 & 0 & 10 & 111 & 30 \\ \$140 & 0 & 10 & 81 & 20 \\ \$135 & 20 0 & 10 & 61 & 0 \\ \$110 & 19 0 & 30 & 61 & 25 12 \\ \$105 & 0 & 49 & 36 & 14 0 \\ \$100 & 12 & 49 & 22 & 12 0 \\ \$94 & 10 & 61 & 10 & 0 \\ \$80 & 20 & 71 & 10 & 0 \\ \$50 & 0 & 91 & 10 & 10 0 \\ \$40 & 30 & 91 & 0 & 0 \\ \$10 & 50 & 121 & 0 & 0 \\ \$0 & 0 & 171 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Adiabatic clearing procedure of the unsettled limit order book in Table 2. Crossed-out orders are matched by the arbitrageur; for partially matched price levels the original quantity is followed by the remaining quantity.

**Remark 4.16** (Meaning of adiabatic clearing).: _As the limit order book is unsettled the asset is mispriced in the sense that the highest price among buy orders exceeds the lowest price among sell orders. Arbitrageurs will realize this and make a riskless profit by offsetting those overlapping limit orders, effectively eliminating them from the limit order book._

**Remark 4.17** (Comparison of adiabatic clearing to opening and closing auctions).: _The adiabatic clearing mechanism is very similar to the procedure of an opening and closing auction on a stock exchange where an algorithm computes the auction price \(p_{e}\) in (12) in order to maximize the total clearing volume when overlapping buy and sell orders are matched. In contrast to the adiabatic settlement, all overlapping buy and sell orders are then executed at the same auction price \(p_{e}\)._

The iso-util clearing mechanism adds one more step to the adiabatic clearing mechanism. After adiabatic clearing, the matched limit orders reappear on the opposite side of the limit order book.

**Proposition 4.18** (Iso-util clearing a limit order book).: _We consider an unsettled limit order book \(\mathscr{L}\) with an unsettled remaining demand function \(RDF_{u}\) and an unsettled remaining supply function \(RSF_{u}\) and its adiabatic settled counterparts \(RDF_{a}\) and \(RSF_{a}\). Then_

\[\text{RDF}_{i}(p)=\begin{cases}\text{RDF}_{a}(p)+Z-\min\left\{\text{RSF}_{u}(p),Z\right\}&\text{if }p<p_{d},\\ 0&\text{if }p\geq p_{d}\end{cases}\]

_and_

\[\text{RSF}_{i}(p)=\begin{cases}0&\text{if }p<p_{s},\\ \text{RSF}_{a}(p)+Z-\min\left\{\text{RDF}_{u}(p),Z\right\}&\text{if }p\geq p_{s}\end{cases}\]

_define a remaining demand function and a remaining supply function of a limit order book \(\mathscr{L}_{\sigma}\). Moreover, \(p_{d}\) and \(p_{s}\) denote the best bid and best ask price, respectively._

The proof is straightforward and left as an exercise.

**Definition 4.19**.: _The procedure described in Proposition 4.18 is called iso-util settlement or clearing, and we say that \(\mathscr{L}_{\sigma}\) is the iso-util settled limit order book of the unsettled limit order book \(\mathscr{L}\)._

**Example 4.20**.: _The limit order book in Table 4 is the iso-util clearing of the unsettled limit order book in Table 2. Observe that all crossed-out limit buy and sell orders from the adiabatic clearing in Table 3 reappear on the opposite side of the book. The corresponding remaining demand and supply functions are plotted in Figure 9. We want to point out that the iso-util clearing need not result in a settled limit order book. 
Indeed, as it is the case in our example, there might be bid and ask limit orders at the same price. Figure 10 illustrates the iso-util clearing of the unsettled limit order book in Figure 5._ Let us now explain the background and motivation for these two clearing procedures. For this we introduce the following notion. **Definition 4.21** (Transparent trader).: _We call a market participant a transparent trader if the trader communicates her current iso-util by posting the associated limit orders in the limit order book._ **Definition 4.22** (Iso-util trader).: _We call a market participant an iso-util trader if the trader is willing to do any trade that does not decrease her utility._ First, let us consider what happens if a (buy or sell) limit order of a transparent iso-util trader gets filled. We want to remind the reader that in this manuscript only the stationary case is studied. This means that the iso-util of the trader is unaffected by the fact that her limit order got filled. As the trader acts iso-util, she would be willing to undo the last trade as it would leave her utility invariant. As the trader is transparent, she would communicate this intention by issuing a limit order of the same price and same volume of the opposite type (i.e., sell or buy). \begin{table} \begin{tabular}{l r r r} \hline \hline & Bid & & Ask \\ \hline Price & Quantity & Price & Quantity \\ \hline \$110 & 13 & \$110 & 12 + 19 = 31 \\ \$105 & 14 & \$135 & 20 \\ \$100 & 12 + 12 = 24 & \$140 & 20 \\ \$94 & 10 & \$170 & 30 \\ \$80 & 20 & \$250 & 50 \\ \$50 & 10 & \$300 & 10 \\ \$40 & 30 & \$500 & 50 \\ \$10 & 50 & & \\ \hline \hline \end{tabular} \end{table} Table 4. This limit order book is the iso-util clearing of Table 2. The colores indicate limit orders that got flipped from one side of the limit order book to the other. Figure 9. Illustration of the piecewise linear \(RDF_{i}\) (blue) and \(RSF_{i}\) (red) of the limit order book in Table 4 after the iso-util clearing of the unsettled limit order book in Table 2. Let us now consider a market comprised of transparent iso-util traders. If the associated limit order book is not settled the asset is mispriced. As in the adiabatic case, an arbitrageur will take advantage and match overlapping orders for a risk-less profit. The main difference to the adiabatic settlement is that the trades of the arbitrageur fill limit orders of transparent iso-util traders. Hence, as explained above those filled limit orders will reappear on the opposite side of the limit order book, explaining the formula of the \(RDF_{i}\) and \(RSF_{i}\) in Proposition 4.18. Because the arbitrageur acted the same in the adiabatic and the iso-util setting, the profit of the arbitrageur is the same for both clearing mechanisms. Let us now shed more light onto the adiabatic settlement process. **Definition 4.23** (Adiabatic trader).: _We call a trader \(\varepsilon\)-adiabatic if the trader is only willing to trade if it increases her utility by at least \(\varepsilon>0\). We call a trader adiabatic if the trader is not willing to undo a trade._ **Remark 4.24** (Inspiration from thermodynamics).: _The names adiabatic and iso-util traders are inspired from thermodynamics. There, the term iso-thermal is used to describe expansion and compression that does not change the temperature of the system. The term adiabatic is used to described expansion and compression that changes the temperature. 
So, we use the term iso-util and adiabatic trader to distinguish between traders that are conserving or increasing their utility, respectively._ We describe how a transparent \(\varepsilon\)-adiabatic trader acts. Such a trader will only communicate limit orders which increase her utility. Even if small, the trader increases with each trade her utility. Undoing a trade would correspond to a decrease in utility which a rational trader would not allow. Therefore, if a limit order of Figure 10. Iso-util clearing of the unsettled limit order book of Figure 5. The iso-util cleared RDF consists of the adiabatic cleared RDF plus the flipped overlap of the RSF. Similar, the iso-util cleared RSF consists of the adiabatic cleared RSF plus the flipped overlap of the RDF. an \(\varepsilon\)-adiabatic trader gets filled it will disappear from the limit order book of the exchange. We want to note that if the gain of utility \(\varepsilon\) vanishes, the transparent adiabatic trader will communicate her iso-util to the exchange. However, as every trade still represents an infinitesimal increase of utility the filled limit orders still disappear in the limit \(\varepsilon\to 0\). This means that the adiabatic clearing process given by Proposition 4.13 applies to an unsettled market of adiabatic traders. As every trader strives to increase her utility, the adiabatic clearing process is usually observed in reality. This brings us to the question whether iso-util traders exist at all? We tend to say the answer is no. Certainly, traders might act iso-util between good colleagues to get social credit, which actually is just a different form of utility increase. However, as trading involves effort there must be some incentive to overcome inertia, and therefore a rational trader would have to act adiabatic. This poses the question that if traders act adiabatic, how can iso-util be a tool to study financial markets? Obviously, iso-util is very useful when considering the case of vanishing utility gain. The trading curve of an \(\varepsilon\)-adiabatic trader approximates her iso-util as \(\varepsilon\to 0\). A practical example are liquidity providers to an automated market maker liquidity pool. They effectively act as transparent iso-util traders, acting on the iso-util prescribed in the protocol, with exception of collected fees, which represent the marginal gain of utility for the liquidity provider. We claim that even in the case of non-small \(\varepsilon\), iso-utils are a great tool to study the behavior of adiabatic traders as it may be possible to interpret the trading curve of an adiabatic trader as an iso-util of a new utility function, just that trades are not reversible on this iso-util. The next statement determines the profit from arbitrage when unsettled markets clear. **Proposition 4.25** (Arbitrage profit in an unsettled market).: _Let us consider an unsettled limit order book given by the remaining supply and demand functions \(\text{RSF}(p)\) and \(\text{RDF}(p)\). We define the functions_ \[A_{d}(p):=\min\{\text{RDF}(p),Z\}\quad\text{and}\quad A_{s}(p):=\min\{\text{ RSF}(p),Z\}.\] _When clearing the market, both in an adiabatic or iso-util way, the profit from arbitrage is given by_ \[P=\int_{[p_{d},\infty)}p\ dA_{d}(p)-\int_{[0,p_{s}]}p\ dA_{s}(p). \tag{18}\] _Here, \(dA_{s}(p)\) and \(dA_{d}(p)\) denote the Lebesgue-Stieltjes integral with respect to \(A_{s}(p)\) and \(A_{d}(p)\)._ Proof of Proposition 4.25.: Arbitrage arises from matching mispriced limit orders. 
This is independent from the fact that those limit orders vanish out of the limit order book or re-appear on the other side of the book. Hence, the profit from arbitrage is independent of adiabatic or iso-util settlement. The second integral in equation (18) denotes the amount of money needed to buy the supply from the under-priced limit sell orders. The first integral denotes the amount of money made from selling this supply to the over-priced limit buy orders. **Example 4.26**.: _The arbitrage profit from clearing the unsettled limit order book in Table 2 is given by \((10\cdot 300+20\cdot 135+19\cdot 110)-(10\cdot 50+12\cdot 100+14\cdot 105+13\cdot 110)=7790-4600 =3190\); compare also with Table 3._ We conclude this section by introducing the terms _adiabatic_ and _iso-util entropy_. **Definition 4.27** (Adiabatic and iso-util entropy).: _We consider an unsettled limit order book \(\mathscr{L}\). The iso-util entropy \(S_{i}\) is given by the vector_ \[S_{i}:=(P,0),\] _where \(P\) denotes the arbitrage profit given in (18). The adiabatic entropy \(S_{a}\) is given by the vector_ \[S_{a}:=\left(\int_{[p_{d},\infty)}p\ dA_{d}(p),Z\right)\] _where \(Z\) denotes the clearing volume from Lemma 4.12._ The adiabatic and iso-util entropy \(S_{a}\) and \(S_{i}\) denote the amount of liquidity that is lost in an unsettled limit order book market by adiabatic or iso-util clearing. We observe that \(S_{i}\leq S_{a}\). The reason is that in adiabatic settlement, all matched limit orders vanish out of the limit order book. The number \(Z\) denotes how many units of \(Y\) were sold in the clearing process, and the number \(\int_{p_{d}}^{\infty}p\ dA_{d}(p)\) denotes how many dollars were payed for it. **Example 4.28**.: _For the unsettled limit order book market in Table 2 we have \(S_{i}=(3190,0)\) and \(S_{a}=(7790,49)\)._ **Proposition 4.29**.: _Consider an unsettled limit order book market \(\mathscr{L}\) with remaining supply and demand functions RSF and RDF. Let \((x_{u},y_{u})\) denote the supply levels of the assets \((X,Y)\) before clearing, i.e.,_ \[x_{u}:=\int_{[0,\infty)}p\ dR\text{DF}(p)\quad\text{and}\quad y_{u}:=\lim_{p \to\infty}\text{RSF}(p).\] _After clearing the market the supply levels \((x_{s},y_{s})\) of the settled market are given by_ \[(x_{s},y_{s})=(x_{u},y_{u})-S_{a}\] _in the case of adiabatic clearing and by_ \[(x_{s},y_{s})=(x_{u},y_{u})-S_{i}\] _in the case of iso-util clearing._ The proof consists of a straightforward calculation and is omitted. Using the term _entropy_ to denote the lost of liquidity by clearing is inspired by the second law of thermodynamics. For more details on this we refer to Section 5 below. **Example 4.30**.: _The supply levels of the asset pair \((X,Y)\) in the unsettled limit order book market in Table 2 are given by \(x_{u}=300\cdot 10+135\cdot 20+110\cdot 19+100\cdot 12+94\cdot 10+80\cdot 20+40 \cdot 30+10\cdot 50=13230\) and \(y_{u}=10+12+14+25+20+30+50+50=211\). Consequently, it follows that \((x_{s},y_{s})=(13230,211)-(7790,49)=(5440,162)\) for the adiabatic clearing and \((x_{s},y_{s})=(13230,211)-(3190,0)=(10040,211)\) for the iso-util clearing; compare also with the corresponding limit order books in Tables 1 and 4, respectively._ ### Equivalence of iso-utils and limit order books In this section we will describe how to associate to an iso-util a limit order book and vice versa. For educational purposes we start with explaining the procedure first for a simple example of a finite and discrete limit order book. 
Later, we give general formulas extending the ones in [13]. **Example 4.31** (Associating a limit order book to a piecewise linear iso-util).: _We consider the simplest situation, namely the case of a piecewise linear iso-util. The procedure is quite elementary. Therefore, instead of giving general formulas we prefer to look at a specific example. The iso-util given in Figure 11 is the piecewise linear interpolation of the supply levels given in Table 5. The current supply level of the market is given by the 6th data point \((5440,162)\). The bid part of the iso-util is marked in blue and the ask part is marked in red (see Definition 3.9). Observe that with an iso-util trade one could move the supply level from the 6th data point to the 7th data point, implying that the supply level of \(X\) would increase by \(6760-5440=1320\) dollars and the supply level of \(Y\) would decrease by \(162-150=12\) units of \(Y\). In other words, one could buy \(12\) units of \(Y\) for \(1320\) dollars. Consequently, the first limit sell order in the limit order book would be \(12\) units of \(Y\) for \(1320/12=110\) dollars each. Similarly, moving from the 7th data point to the 8th data point would correspond to a limit sell order of \(150-130=20\) units of \(Y\) at the price of \(\frac{9560-6760}{20}=140\) dollars for each unit. Continuing with the remaining data points \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Point & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline \(x\) & 0 & 500 & 1700 & 3300 & 4240 & 5440 & 6760 & 9560 & 14660 & 27160 & 52160 \\ \hline \(y\) & 284 & 234 & 204 & 184 & 174 & 162 & 150 & 130 & 100 & 50 & 0 \\ \hline \end{tabular} \end{table} Table 5. Supply levels of the limit order book given in Table 1 Figure 11. Iso-util generated from the limit order book given in Table 1. It is the adiabatic clearing of the iso-util given in Figure 14. on the ask part of the iso-util would recover the ask side of the limit order book given in Table 1. Recovering the bid side of the limit order book from the bid part of the iso-util is very similar. An iso-util trade moving the supply levels from the 6th data point to the 5th data point would imply that \(174-162=12\) units of \(Y\) could be sold for \(5440-4240=1200\) dollars in total; corresponding to a limit buy order of \(12\) units of \(Y\) for the price of \(100\) dollars per unit. Continuing in this way, one would recover the bid side of the limit order book given in Table 1._ **Example 4.32** (Associating an iso-util and supply levels to a discrete limit order book).: _Again, we explain the procedure using a specific example, namely the limit order book given in Table 1. First, we need to determine the current supply level \((x,y)\). The supply \(y\) is given by the total number of shares one can buy in the limit order book, which corresponds to \(y=12+20+30+50+50=162\) shares. The supply \(x\) is given by the total amount of dollars one can receive from selling as much of the asset \(Y\) as possible. In our example, this amounts to \(y=100\cdot 12+94\cdot 10+80\cdot 20+40\cdot 30+10\cdot 50=5440\) dollars. Thus, one can observe that the current supply level \((5440,162)\) corresponds to the current liquidity of the asset pair \((X,Y)\) in the limit order book. We will first determine the pivotal supply levels on the iso-util (see Table 5). The corresponding iso-util \(I\) is then given by linear interpolation of these points. 
For example, with the first limit sell order, one can buy 12 units of \(Y\) for \(110\cdot 12=1320\) dollars total. Hence, the point \(x=5440+1320=6760\) and \(y=162-12=150\) is also one the iso-util, recovering the 7th data point. Similarly, one can recover all data points of Table 5, and therefore recover the iso-util given in Figure 11._ **Remark 4.33**.: _We summarize how a discrete limit order book is associated to a piecewise linear iso-util:_ * _Each line segment of the iso-util corresponds to a limit order._ * _If the line segment is in the ask part (i.e., to the right of the current supply level), then the limit order also appears on the ask side of the book._ * _If the line segment is in the bid part (i.e., to the left of the current supply level), then the limit order also appears on the bid side of the book._ * _The slope of the line segment corresponds to_ \(-P_{\frac{y}{x}}\)_, i.e., the negative of the inverse of the price of_ \(Y\) _of the limit order._ * _The height loss of a segment corresponds to the volume of the limit order._ * _The current supply level_ \(y\) _corresponds to the sum of the volumes of all limit sell orders, and the current supply level_ \(x\) _corresponds to the sum of the dollar volumes of all limit buy orders._ **Remark 4.34**.: _Notice that the slope of the line segment on the left hand side of the marked supply level \((x,y)\) in Figure 11 is not the same as the slope of the line segment on the right hand side of \((x,y)\). The difference of the slopes is in one-to-one relation to the bid-ask spread in the associated limit order book._ After understanding the simple case of a bounded and discrete limit order book let us now turn to the general case. We need some preparations and start with introducing the generalized inverse of a function. **Definition 4.35** (Generalized inverse).: _For a non-decreasing function \(f:(0,\infty)\to\mathbb{R}\) the generalized inverse \(f^{-1}:\mathbb{R}\to[0,\infty)\) is defined by_ \[f^{-1}(y):=\sup\left\{x\in(0,\infty)\ |\ f(x)\leq y\right\},\] _where we use the convention \(\sup\emptyset=0\). For a non-increasing function \(f:(0,\infty)\to\mathbb{R}\) the generalized inverse \(f^{-1}:\mathbb{R}\to(0,\infty)\) is defined by_ \[f^{-1}(y):=\sup\left\{x\in(0,\infty)\ |\ f(x)>y\right\},\] _where we again use the convention \(\sup\emptyset=0\)._ **Remark 4.36**.: _In contrast to the usual inverse, the generalized inverse of a function always exists. If the function is invertible then the generalized inverse coincides with the usual inverse. There exist many different definitions of a generalized inverse, which are always tailored to the specific application. We choose this specific definition because it implies that the generalized inverse is always right-continuous. For an illustration of a generalized inverse we refer to Figure 12. For more information about generalized inverses we refer to [1]._ **Lemma 4.37**.: _The generalized inverse defined in Definition 4.35 satisfies the following properties:_ 1. _The generalized inverse_ \(f^{-1}\) _of a non-decreasing (non-increasing) function is non-decreasing (non-increasing)._ 2. _The generalized inverse_ \(f^{-1}\) _is right-continuous._ 3. _If the function_ \(f\) _is strictly increasing or strictly decreasing, then the generalized inverse coincides with the proper inverse._ The proof of the last lemma is standard (see for example Proposition 1 in [1] for a similar statement for a slightly different definition of the generalized inverse). 
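Before giving the proof, the following small numerical sketch may help to build intuition for Definition 4.35; the brute-force grid search, the chosen bounds, and all names are ours and serve only as an illustration.

```python
# Minimal numerical sketch of the generalized inverse of a non-decreasing f.
import numpy as np

def generalized_inverse(f, y, x_max=10.0, n=10**6):
    """Approximate sup{ x in (0, x_max) : f(x) <= y }, with sup(empty set) = 0."""
    xs = np.linspace(1e-9, x_max, n)
    admissible = xs[f(xs) <= y]
    return float(admissible.max()) if admissible.size else 0.0

# A step function: f jumps from 1 to 3 at x = 2, so it has no proper inverse,
# but its generalized inverse is defined everywhere.
f = lambda x: np.where(x < 2.0, 1.0, 3.0)
print(generalized_inverse(f, 0.5))  # 0.0: no x satisfies f(x) <= 0.5
print(generalized_inverse(f, 1.0))  # ~2.0: all x < 2 satisfy f(x) <= 1
print(generalized_inverse(f, 3.0))  # 10.0: every grid point qualifies
```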
Proof of Lemma 4.37.: We only consider the case where the function \(f\) is non-decreasing. The case \(f\) non-increasing follows either from similar arguments or can be reduced to the first case by observing that \(-f\) is non-decreasing.

Figure 12. Illustration of the generalized inverse. The generalized inverse \(f^{-1}\) is obtained by reflecting the graph of the function \(f\) at the diagonal \(y=x\). There is one degree of freedom, namely for choosing the value of \(f^{-1}\) at jump points. We use the convention to choose the value which makes \(f^{-1}\) right-continuous.

Argument for (1): We observe that for \(y_{1}<y_{2}\) it follows that \(\left\{x\mid f(x)\leq y_{1}\right\}\subset\left\{x\mid f(x)\leq y_{2}\right\}\) and therefore

\[f^{-1}(y_{1})=\sup\left\{x\mid f(x)\leq y_{1}\right\}\leq\sup\left\{x\mid f(x)\leq y_{2}\right\}=f^{-1}(y_{2}).\]

Argument for (2): To show right-continuity we need to show that for any monotonically decreasing sequence \((y_{n})_{n\in\mathbb{N}}\) such that \(y_{n}\downarrow y\) for \(n\to\infty\), it holds that \(x_{n}:=f^{-1}(y_{n})\to x:=f^{-1}(y)\) for \(n\to\infty\). Because of property (1), it follows that the sequence \((x_{n})_{n\in\mathbb{N}}\) is monotone decreasing. Because the sequence \(x_{n}\geq x\) is also bounded from below, it converges \(x_{n}\to\tilde{x}\geq x\) to some limit \(\tilde{x}\). It is left to show that \(\tilde{x}=x\). We proceed by contradiction and assume that \(x<\tilde{x}\) and set \(\varepsilon=\frac{\tilde{x}-x}{2}\). We observe that by definition

\[x+\varepsilon=\tilde{x}-\varepsilon\leq x_{n}-\varepsilon.\]

We therefore get by the definition of the generalized inverse and the monotonicity of \(f\) that

\[f(x+\varepsilon)\leq f(x_{n}-\varepsilon)\leq y_{n}.\]

Passing to the limit \(n\to\infty\) in the last inequality yields

\[f(x+\varepsilon)\leq\lim_{n\to\infty}y_{n}=y.\]

By using again the definition of the generalized inverse we estimate the left hand side of the last inequality as

\[y<f(x+\varepsilon)\leq\lim_{n\to\infty}y_{n}=y,\]

which is a contradiction.

Argument for (3): It directly follows from the definition that \(f^{-1}(f(x))\geq x\). Let us additionally assume that \(f\) is strictly increasing, which means that for \(z>x\) it holds \(f(z)>f(x)\). This implies that \(f^{-1}(f(x))\leq x\), and therefore \(f^{-1}(f(x))=x\), which completes the argument.

**Proposition 4.38** (Associating a limit order book to an iso-util).: _Consider an iso-util \(I\) with current supply level \((x_{0},y_{0})\). Let \(f\) denote the function representation of \(I\) (see Proposition 3.6), and let \(f^{\prime-1}\) denote the generalized inverse of its derivative. Then for \(p\in(0,\infty)\) it holds that_

\[F_{s}(p):=\max\left\{y_{0}-f(f^{\prime-1}(-p^{-1})),0\right\}\]

_defines the remaining supply function and_

\[F_{d}(p):=\max\left\{f(f^{\prime-1}(-p^{-1}))-y_{0},0\right\}\]

_defines the remaining demand function of a settled limit order book._

Proof of Proposition 4.38.: To show that \(F_{d}\) is a remaining demand function we need to verify the conditions of Definition 4.5, namely that \(F_{d}\) is non-increasing, right-continuous and satisfies \(\lim_{p\to\infty}F_{d}(p)=0\). We start with showing that the remaining demand function \(F_{d}\) is non-increasing and right-continuous. We observe that the function \(i:p\mapsto-p^{-1}\) is strictly increasing. Because the function \(f\) is convex, the derivative \(f^{\prime}\) is non-decreasing. Therefore, by Lemma 4.37 the generalized inverse \(f^{\prime-1}\) is also non-decreasing. 
We also observe that the function \(f\) is non-increasing. Hence the combination \(f\circ f^{\prime-1}\circ i\) is also non-increasing. This yields that the remaining demand function \(F_{d}(p)\) is non-increasing. Next, we verify that the remaining demand function \(F_{d}\) is right-continuous. We observe that the function \(i:p\mapsto-p^{-1}\) is strictly increasing. Also, by Lemma 4.37 the generalized inverse is right-continuous. Additionally, as we have seen above, the generalized inverse \(f^{\prime-1}\) is non-decreasing. Because the function \(f\) is continuous and non-increasing, this implies by verifying the definition that the combination \(f\circ f^{\prime-1}\) is right-continuous, which yields that the remaining demand function \(F_{d}(p)\) is also right-continuous. Let us now show that \(\lim_{p\to\infty}F_{d}(p)=0\). We start with observing that the function \(F_{d}(p)\) is non-increasing and non-negative, i.e., \(F_{d}(p)\geq 0\). Hence, it suffices to show that \(F_{d}(p)=0\) for some \(p\). We recall that \(f^{\prime}_{-}\) denotes the left-hand derivative of \(f\). Let us choose \(p_{d}=-\frac{1}{f^{\prime}_{-}(x_{0})}\). Then, it holds by definition of the generalized inverse that \(f^{\prime-1}(-p_{d}^{-1})\geq x_{0}\). Hence, because the function \(f\) is decreasing it follows that \[f(f^{\prime-1}(-p_{d}^{-1}))\leq f(x_{0})=y_{0},\] which implies the desired identity \(F_{d}(p_{d})=0\). The argument that the remaining supply function \(F_{s}\) is non-decreasing and right-continuous is similar and left out. To show that \(\lim_{p\to 0}F_{s}(p)=0\), we observe that \(F_{s}(p)\) is non-decreasing and non-negative. Hence, it suffices to find a value \(p_{s}\) such that \(F_{s}(p_{s})=0\). We recall that \(f^{\prime}_{+}\) denotes the right-hand derivative of \(f\). Choosing \(p_{s}=-\frac{1}{f^{\prime}_{+}(x_{0})}\) we observe that \(f^{\prime-1}(-p_{s}^{-1})\leq x_{0}\). Now, a similar argument as for \(F_{d}\) implies the desired identity \(F_{s}(p_{s})=0\). The last step is to show that the limit order book given by \(F_{d}\) and \(F_{s}\) is settled. By (4) we have \(p_{d}\leq p_{s}\), which implies the desired conclusion by observing that \(F_{d}(p_{d})=F_{s}(p_{s})=0\) Figure 13. Remaining supply and demand function for the iso-util \(x\cdot y=1\) with current supply level \((1,1)\). **Example 4.39**.: _Let us consider an ideal market which is given by the utility function \(U(x,y)=\log x+\log y\). We consider the iso-util \(x\cdot y=T=A^{2}\) at temperature \(T\) or, equivalently, with mean activity \(A^{2}\). Let \((x_{0},y_{0})\) denote the current supply level. Then the marginal price is given by \(\hat{p}=\frac{x_{0}}{y_{0}}\) and straightforward computations show that the RSF is given by_ \[F_{s}(p)=\begin{cases}0,&\text{for }p<\hat{p},\\ A\left(\frac{1}{\sqrt{\hat{p}}}-\frac{1}{\sqrt{\hat{p}}},\right)&\text{for }p \geq\hat{p}.\end{cases}\] _This implies that the supply measure \(\mu_{s}\) is given by the Lebesgue density_ \[f_{s}(p)=\begin{cases}0&\text{for }p<\hat{p},\\ \frac{A}{2p^{\frac{3}{2}}}&\text{for }p\geq\hat{p}.\end{cases}\] _The RDF is given by_ \[F_{d}(p)=\begin{cases}A\left(\frac{1}{\sqrt{\hat{p}}}-\frac{1}{\sqrt{\hat{p}}} \right),&\text{for }p<\hat{p},\\ 0,&\text{for }p\geq\hat{p}.\end{cases}\] _Therefore, the demand measure \(\mu_{d}\) is given by the Lebesgue density_ \[f_{d}(p)=\begin{cases}0,&\text{for }p<\hat{p},\\ \frac{A}{2p^{\frac{3}{2}}},&\text{for }p\geq\hat{p}.\end{cases}\] _We refer to Figure 13 for an illustration. 
From this calculation we see that in an ideal market temperature modulates the available liquidity; e.g., in a market four times as hot, there is twice as much liquidity._ Let us now turn to associating an iso-util to a limit order book. We will need some preparation. **Definition 4.40** (Pricing function).: _Consider a limit order book \(\mathscr{L}\) given by the remaining demand function \(F_{d}\) and the remaining supply function \(F_{s}\). Then the bid pricing function \(p_{b}\) is defined as \(p_{b}:=F_{d}^{-1}\) and the ask pricing function \(p_{a}\) is defined as \(p_{a}:=F_{s}^{-1}\)._ **Remark 4.41**.: _The pricing functions \(p_{b}\) and \(p_{a}\) have a simple interpretation: If one wants to buy or sell \(y\) many units of \(Y\), then the price of the \(y\)-th unit is given by \(p_{a}(y)=F_{s}^{-1}(y)\) and \(p_{b}(y)=F_{d}^{-1}(y)\), respectively._ The following statement is an immediate consequence of involved definitions. **Lemma 4.42**.: _Assume that the remaining demand function \(F_{d}\) and the remaining supply function \(F_{s}\) are not the zero functions. Then the pricing functions \(p_{b}\) and \(p_{a}\) are strictly positive for \(y>0\), i.e., they satisfy \(p_{b}(y)>0\) and \(p_{a}(y)>0\) for all \(y>0\)._ **Definition 4.43** (Depth of a limit order book).: _The depth of the bid side of the limit order book is defined as_ \[d_{b}(y):=\int_{0}^{y}p_{b}(t)\ dt.\] _The depth of the ask side of the limit order book is defined as_ \[d_{a}(y):=\int_{0}^{y}p_{a}(t)\ dt.\] **Remark 4.44** (Interpretation of depth).: _The depth \(d_{a}\) of the ask side is the amount of money needed to buy \(y\) many units, and the depth \(d_{b}\) of the bid side is the amount of money received from selling \(y\) many units._ We also have the elementary observation which follows from Lemma 4.42. **Lemma 4.45**.: _The bid depth \(d_{b}\) is strictly increasing on the interval \([0,\lim_{p\to 0}F_{d}(p))\) and \(d_{b}(y)=\infty\) for any \(y>\lim_{p\to 0}F_{d}(p)\). Similarly, the ask depth \(d_{a}\) is strictly increasing on the interval \([0,\lim_{p\to\infty}F_{s}(p))\) and \(d_{a}(y)=\infty\) for any \(y>\lim_{p\to\infty}F_{s}(p)\)._ When translating a limit order book into an iso-util we need to determine the current supply levels \((x_{0},y_{0})\), which will only be well-defined if the limit order book is bounded. **Definition 4.46** (Bounded limit order book).: _We say that a limit order book \(\mathscr{L}\) given by the remaining supply function \(F_{s}\) and remaining demand function \(F_{d}\) is bounded if the following limits exists:_ \[x_{0}:=\int_{0}^{\infty}p\ dF_{d}(p)<\infty \tag{19}\] _and_ \[y_{0}:=\lim_{p\to\infty}F_{s}(p)<\infty. \tag{20}\] _Here, the integral denotes the Lebesgue-Stieltjes integral._ **Remark 4.47** (Meaning of a bounded limit order book).: _The interpretation of a bounded limit order book is again straightforward. The condition in (19) means that by selling the asset \(Y\) one can only obtain a finite amount of the asset \(X\), even if you sell as much as possible. This is plausible because under normal circumstances the circulating supply of \(X\) is finite. The condition in (20) means that the total supply of \(Y\) available for purchase is finite._ **Proposition 4.48** (Associating an iso-util to a limit order book).: _We consider a bounded limit order book given by a remaining demand function \(F_{d}\) and a remaining supply function \(F_{s}\). 
We define the function \(y_{b}:(0,x_{0}]\to[0,\infty)\) by_ \[y_{b}(x):=y_{0}+d_{b}^{-1}(x_{0}-x)\] _and the function \(y_{a}:[x_{0},\infty)\to[0,\infty)\) by_ \[y_{a}(x):=\max\left\{y_{0}-d_{a}^{-1}(x-x_{0}),0\right\}.\] _Then \(y_{b}\) defines the bid part of an iso-util \(I_{b}\) and \(y_{a}\) defines the ask part of an iso-util \(I_{a}\). If the limit order book is settled, then the function \(y:(0,\infty)\to[0,\infty)\) given by_ \[y(x):=\begin{cases}y_{b}(x)&\text{if }x\leq x_{0}\\ y_{a}(x)&\text{if }x>x_{0}\end{cases}\] _is convex and its graph \(I=I_{b}\cup I_{a}\) defines an iso-util with current supply level \((x_{0},y_{0})\), bid part \(I_{b}\) and ask part \(I_{a}\)._ Proof of Proposition 4.48.: We need to show that the functions \(y_{b}\) and \(y_{a}\) are convex, that \(\lim_{x\to\infty}y_{a}(x)=0\), and that in the settled case the function \(y(x)\) is convex (cf. Remark 3.8). Let us first show that the function \(y_{a}\) is convex. It suffices to show that its derivative \(y_{a}^{\prime}\) is non-decreasing. Straightforward differentiation yields for \(x>x_{0}\) \[y_{a}^{\prime}(x)=-\frac{1}{d_{a}^{\prime}(d_{a}(x-x_{0}))}=-\frac{1}{p_{a}(d_{a }(x-x_{0}))}=-\frac{1}{F_{s}^{-1}(d_{a}(x-x_{0}))}.\] We observe that \(F_{s}\) is non-decreasing and therefore also \(F_{s}^{-1}\) (see Lemma 4.37). Because \(d_{a}\) is increasing (see Lemma 4.42) it follows that \(p_{a}(d_{a}(x-x_{0}))\) is non-decreasing, which in turn implies that \(y_{a}^{\prime}(x)\) is non-decreasing. The argument to show that the function \(y_{b}\) is convex is similar and left out. Also, the property \(\lim_{x\to\infty}y_{a}(x)=0\) follows directly from the definitions. It is left to show that if the limit order book is settled then the function \(y\) is convex. For this let us first note that the function \(y\) is continuous, which follows from the observation \(y_{a}(x_{0})=y_{b}(x_{0})\). To show that \(y\) is convex, it suffices to show that \[y_{b}{}^{\prime}(x_{0})\leq y_{a}{}^{\prime}(x_{0}). \tag{21}\] The last inequality means that the left-hand derivative of \(y_{b}\) at \(x_{0}\) is less than or equal to the right hand derivative of \(y_{a}\) at \(x_{0}\). Let \(y_{m}\) denote a mid price of the limit order book. Because the limit order book is settled it follows from the definitions that for any \(h>0\) \[d_{b}^{-1}(h)\leq y_{m}\leq d_{a}^{-1}(h).\] Straightforward manipulation of the last inequality using the definition yields \[\frac{y_{0}-y_{b}(x_{0}-h)}{h}\leq\frac{y_{a}(x+h)-y_{0}}{h},\] which implies the desired estimate (21) by sending \(h\to 0\). **Definition 4.49** (Iso-utils of a limit order book).: _Let us consider the union \(I=I_{b}\cup I_{a}\) of the bid part \(I_{b}\) and ask part \(I_{a}\) of a limit order book (see Proposition 4.48). By a slight misuse of terminology we also call \(I\) an iso-util even if it may be non-convex and thus cannot be an iso-util of a utility function._ In contrast to iso-utils of a utility function \(U\) iso-utils of a limit order book might be non-convex, which at the same time identifies an arbitrage opportunity. **Proposition 4.50** (Arbitrage and non-convex iso-utils).: _The iso-util of a limit order book is non-convex if and only if the limit order book is unsettled. 
If that is the case then there is an arbitrage opportunity in the market._ \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Point & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \(x\) & 0 & 500 & 1700 & 3300 & 4240 & 5440 & 7530 & 10230 & 13230 \\ \hline \(y\) & 382 & 332 & 302 & 282 & 272 & 260 & 241 & 221 & 211 \\ \hline \hline Point & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 \\ \hline \(x\) & 13230 & 13730 & 14930 & 16400 & 19150 & 21950 & 27050 & 39550 & 64550 \\ \hline \(y\) & 211 & 201 & 189 & 175 & 150 & 130 & 100 & 50 & 0 \\ \hline \end{tabular} \end{table} Table 6. Supply levels for the unsettled limit order book given in Table 2. Proof of Proposition 4.50.: If the limit order book is unsettled, there is a bid limit order (buy order) with a higher price than an ask limit order (sell order). Therefore, the left hand derivative of the iso-util at the current supply level is strictly smaller than its right hand derivative, which contradicts convexity. The arbitrage opportunity is straightforward, buying the asset from the ask order and selling it for a better price to the bid order. Figure 14. Non-convex iso-util generated from the unsettled limit order book given in Table 2. The convex iso-util associated to adiabatic clearing is given in Figure 11. Figure 15. Central part of iso-util of Figure 14. **Remark 4.51** (Absence of arbitrage in Definition 2.2 of a market).: _Let us consider a utility function \(U\) of a market \(\mathcal{M}\). One of the defining conditions of a market is that the utility function \(U\) is quasi-concave, which means that the iso-utils \(I\) must be convex. Hence, quasi-concavity implies that there is no price arbitrage opportunity in the market._ **Example 4.52** (Non-convex iso-util of an unsettled limit order book).: _As we know from Proposition 4.50, if the limit order book is unsettled then the resulting graph of \(I_{b}\cup I_{a}\) is non-convex. Only after settling matching orders, i.e., after the clearing procedure outlined in Proposition 4.13 or Proposition 4.18, one obtains a proper convex iso-util. Table 2 shows an example of an unsettled limit order book. The ask part \(I_{a}\) and bid part \(I_{b}\) of the associated iso-util are illustrated in Figure 14. They are generated from the linear interpolations of the supply levels given in Table 6. The graph of \(I_{b}\cup I_{a}\) is not convex (see also Figure 15 which zooms in at the current supply level \((x_{0},y_{0})\)). Recall that the adiabatic clearing of the limit order book is given in Table 1 and the associated iso-util is illustrated in Figure 11 (cf. Example 4.15 from above). Hence, the clearing procedure gives rise to a new method of convexifying a graph._ ## 5. Aggregate markets In this section, we describe arbitrage-mediated aggregation of markets. We assume that all traders are transparent in the sense of Definition 4.21 and only consider markets of the same asset pair \((X,Y)\). Additionally, we assume that arbitrageurs are always present, able, and willing to trade on arising arbitrage opportunities. As mentioned in the introduction, we consider two different aggregation mechanisms: adiabatic and iso-util aggregation. In Definition 5.2 below we outline the details of the aggregation mechanisms. In Example 5.3 and 5.4 we calculate the adiabatic aggregation of two ideal markets. The iso-util aggregation of two ideal markets is left as an exercise. After this, we illustrate adiabatic market aggregation with a couple of hands-on examples in the context of economics. 
We define the iso-utils of single consumers and producers, look at their joint aggregated unsettled market, discuss possible obstacles for settling, and finally consider their adiabatic-settled aggregated market (see Example 5.6, 5.7, 5.8). When aggregating markets with different marginal prices there will be an overlap of buy and sell limit orders in the joint limit order book. Hence the aggregated market will be unsettled and will have a non-convex iso-util. After canceling the overlapping buy and sell orders out of the limit order book one gets a _settled_ aggregated market with a convex iso-util. For details on the settling procedure we refer to Section 4 (see, e.g., Proposition 4.13). When transitioning from the unsettled to the settled market the limit order book gets cleared. Both clearing mechanisms, adiabatic and iso-util, result in a negative supply change as liquidity is leaving the market and lost to arbitrage. Using adiabatic clearing the lost liquidity is given by the adiabatic entropy \(S_{a}\); using iso-util clearing it is given by the iso-util entropy \(S_{i}\). This observation motivates following general conjecture. **Conjecture 5.1** (Fundamental law of market dynamics).: When markets equilibrate some liquidity is lost to arbitrage. This law would share similarities with the Second Law of Thermodynamics which states that in a heat engine, i.e., a machine that transforms heat into work, not all heat is transformed to work but some is transformed into entropy. The energy transformed into entropy is _lost_ in the sense that it cannot be used to generate work anymore. This is the reason why we choose the terminology adiabatic and iso-util entropy to describe the lost liquidity in market aggregation. As we see from the examples of this section, subtle economic behavior like consumption, production, and trade, can be explained as a consequence of market aggregation and arbitrage (see, e.g., Example 5.6, 5.7, 5.8). Could it be that economic activity is nothing else than market-dynamical entropy? Let us now formalize the aggregation process. **Definition 5.2** (Adiabatic and iso-util market aggregation).: _We consider an iso-util \(I_{1}\) of a market \(\mathcal{M}_{1}\) with supply level \((x_{1},y_{1})\) and an iso-util \(I_{2}\) of a second market \(\mathcal{M}_{2}\) with supply level \((x_{2},y_{2})\). Let \(F_{s,1}\) and \(F_{d,1}\) denote the remaining supply and demand function associated to the iso-util \(I_{1}\); and \(F_{s,2}\) and \(F_{d,2}\) the remaining supply and demand function associated to the iso-util \(I_{2}\). 
Then the unsettled aggregated limit order book is given by the remaining supply function \(\hat{F}_{s,a}=F_{s,1}+F_{s,2}\) and remaining demand function \(\hat{F}_{d,a}=F_{d,1}+F_{d,2}\)._ _The unsettled aggregated iso-util \(\hat{I}\) is given by the associated iso-util to the limit order book given by \(\hat{F}_{s,a}\) and \(\hat{F}_{s,d}\) (see Proposition 4.48) and has current supply level \((x_{1}+x_{2},y_{1}+y_{2})\)._ _In the case of adiabatic aggregation, the settled aggregated limit order book \((F_{s,a},F_{d,a})\) is given by the adiabatic clearing of the limit order book \((\hat{F}_{s,a},\hat{F}_{d,a})\) (see Proposition 4.43); and the settled adiabatic aggregated iso-util \(I\) is defined as the iso-util associated to the limit order book \((F_{s,a},F_{d,a})\)._ _In the case of iso-util aggregation, the settled aggregated limit order book \((F_{s,a},F_{d,a})\) is given by the iso-util clearing of the limit order book \((\hat{F}_{s,a},\hat{F}_{d,a})\) (see Proposition 4.48)._ _As notation, \(\bigcirc\) indicates the operation of unsettled aggregation and \(\triangle\) indicates the operation of settled aggregation of markets. More precisely, we denote with_ \[\mathcal{M}_{1}\bigcirc\mathcal{M}_{2},\qquad U_{1}\bigcirc U_{2},\qquad I_{1} \bigcirc I_{2}\] _the unsettled aggregation of the markets, utility functions and iso-utils of the markets \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\); and with_ \[\mathcal{M}_{1}\triangle\mathcal{M}_{2},\qquad U_{1}\triangle U_{2},\qquad I_ {1}\triangle I_{2}\] _we denote the settled aggregated counterparts._ **Example 5.3** (Adiabatic aggregation of two ideal markets with same marginal price).: _Let us consider the ideal market \(\mathcal{M}\) with utility function \(U(x,y)=\log x+\log y\). For the first market, let us consider an iso-util \(x\cdot y=T_{1}=A_{1}^{2}\) with temperature \(T_{1}\), mean activity \(A_{1}\) and current supply level \((x_{1},y_{1})\). For the second market, _let us consider an iso-util \(x\cdot y=T_{2}=A_{2}^{2}\) with temperature \(T_{2}\), mean activity \(A_{2}\) and current supply level \((x_{2},y_{2})\). We assume that the marginal prices of both markets coincide, which implies by Proposition 3.18 the condition \(\hat{p}=\frac{x_{1}}{y_{1}}=\frac{x_{2}}{y_{2}}\). The aggregated market has supply level \((x_{1}+x_{2},y_{1}+y_{2})\). Adding up the remaining supply functions and remaining demand functions of the single markets determines the limit order book of the aggregated market. Using Example 4.39 yields that the remaining supply function of the aggregated market is given by_ \[F_{s}(p)=\begin{cases}0,&\text{for }p<\hat{p},\\ (A_{1}+A_{2})\left(\frac{1}{\sqrt{p}}-\frac{1}{\sqrt{p}}\right),&\text{for }p \geq\hat{p}.\end{cases}\] _The remaining demand function is given by_ \[F_{d}(p)=\begin{cases}(A_{1}+A_{2})\left(\frac{1}{\sqrt{p}}-\frac{1}{\sqrt{p} }\right),&\text{for }p<\hat{p},\\ 0,&\text{for }p\geq\hat{p}.\end{cases}\] _Because the remaining demand function and remaining supply function characterize iso-utils we can conclude that the aggregated market is an ideal market with utility function \(U(x,y)=\log x+\log y\) at temperature \(\left(\sqrt{T_{1}}+\sqrt{T_{2}}\right)^{2}\) and mean activity \(A_{1}+A_{2}\). Hence, when aggregating ideal markets, the mean activity is additive and the temperature is super-additive._ **Example 5.4** (Adiabatic aggregation of two ideal markets with different marginal prices.).: _Let us consider the same setup as in Example 5.4. 
This time we assume that the marginal prices are different, i.e., \(p_{1}=\frac{x_{1}}{y_{1}}<p_{2}=\frac{x_{2}}{y_{2}}\). The remaining supply function of the unsettled limit order book is given by_ \[F_{s}(p)=\begin{cases}0,&\text{for }p<p_{1},\\ A_{1}\left(\frac{1}{\sqrt{p_{1}}}-\frac{1}{\sqrt{p}}\right),&\text{for }p_{1} \leq p<p_{2}\\ A_{1}\left(\frac{1}{\sqrt{p_{1}}}-\frac{1}{\sqrt{p_{2}}}\right)+A_{2}\left( \frac{1}{\sqrt{p_{2}}}-\frac{1}{\sqrt{p}}\right),&\text{for }p_{2}\leq p.\end{cases}\] _The remaining demand function is given by_ \[F_{d}(p)=\begin{cases}A_{1}\left(\frac{1}{\sqrt{p}}-\frac{1}{\sqrt{p_{1}}} \right)+A_{2}\left(\frac{1}{\sqrt{p}}-\frac{1}{\sqrt{p_{2}}}\right),&\text{ for }p<p_{1},\\ A_{2}\left(\frac{1}{\sqrt{p_{1}}}-\frac{1}{\sqrt{p_{2}}}\right),&\text{for }p_{1} \leq p<p_{2},\\ 0,&\text{for }p\geq p_{2}.\end{cases}\] _From those formulas it is obvious that the unsettled and settled aggregate market is not an ideal market anymore if marginal prices are different in the original markets._ Let us now explain an application of adiabatic aggregation to non-financial markets. **Definition 5.5** (Consumer and producer market).: _A state of a market is called consumer market if the current supply level \((x,y)\) satisfies \(y=0\). As a consequence the associated iso-util only has a bid part. Similarly, a state of a market is called producer market if the current supply level \((x,y)\) satisfies \(x=0\). As a consequence the associated iso-util only has an ask part._ **Example 5.6** (Consumer market).: _Figure 16 gives an example of a consumer market. We consider the asset pair \((X,Y)=(\text{dollar},\text{cars})\). The figure illustrates an hypothetical iso-util of one individual consumer who is willing to buy at most one car (no need or space for a second) for the price of at most $20000 (not more money available). Hence, the current supply level is given by \((20000,0)\). In this example the supply levels \(y\) are discrete, i.e., the set of admissible supply levels is given by \(\mathcal{P}=\{(x,y)\ |\ x>0\text{ and }y\in\{0,1\}\}\) (see also Remark 2.7)._ **Example 5.7** (Producer market).: _Figure 17 gives an example of a producer market. Again, the asset pair \((X,Y)=(\,\text{dollar}\,,\text{cars})\) is considered. The figure illustrates an hypothetical iso-util of one individual producer. The producer is able to produce one car at the price of $40000. By using economies of scale the producer is able to produce at least four cars at a price of $15000 each. The current supply level is given by \((0,5)\). The supply levels \(y\) are discrete, i.e., the set of admissible supply levels is given by \(\mathcal{P}=\{(x,y)\ |\ x>0\text{ and }y\in\{0,1,5\}\}\) (see also Remark 2.7)._ **Example 5.8** (Unsettled aggregated consumer producer market).: _We aggregate the consumer market of Example 5.6 and the producer market of Example 5.7 and obtain an unsettled iso-util given by Figure 18. Because the producer can only produce at least four cars for a competitive price and the consumer is not able to pay more for one car and cannot purchase four cars, the admissible supply levels of the bid part are not compatible with the admissible supply levels of the ask part. Therefore, the market cannot settle. If one finds three more consumers with the same iso-util as in Example 5.6, then the admissible supply levels are compatible and the market could settle. 
In practice, this means that a car dealer (trader) would take advantage of the situation, buy the four cars from the producer and sell them to individual consumers. Hence, a trade can be understood as a consequence of market aggregation in combination with the first law of market dynamics. We refer to Figure 19 and Figure 20 for more details. Therein, we aggregated the market with four instead of three more consumers to get a better visualization._ Figure 16. Iso-util of a single consumer with no car that is willing to buy at most one car (no use for a second one) for at most the price of $20,000 (not more money). Squares mark admissible supply levels, and the current supply level is given by \((20000,0)\). Figure 17. Iso-util of a single car producer that is able to produce a single car at a price of $40,000 but needs to produce at least 4 cars at a price of $15000. Triangles mark admissible supply levels, and the current supply level is given by \((0,5)\) Figure 18. Iso-util of the aggregated market of the consumer of Figure 16 and the car producer of Figure 17. We observe that the iso-util is non-convex and therefore the market is not settled. The market cannot settle because the consumer is not willing to pay $40000 for a car and the producer cannot produce one car for less than $20000. This means that the admissible supply levels (given by the squares and triangles) are not compatible. The current supply level in this figure is given by \((20000,5)\). Figure 19. We took the market of Figure 18 and aggregated it with four more customers with iso-util given by Figure 16. Now, the market can settle as the admissible supply levels of the ask and bid part, indicated by the squares and triangles, are compatible. Settling in this context means that the producer will produce four cars, sell them to a car dealer (trader) who resells them to the four consumers. We refer to Figure 20 for the associated settled iso-util which will be convex. Figure 20. Settled iso-util of the unsettled iso-util of Figure 19. **Remark 5.9** (Aggregating markets with different assets).: _We only consider the aggregation of markets of the same underlying asset pair \((X,Y)\). It would be an interesting problem to investigate the aggregation of markets that only share one asset. For example, how to join a market of the asset pair \((X_{1},X_{2})\) with the market of the asset pair \((X_{1},X_{3})\). The main idea would be to use the pathway via the limit order book and settling._ From the discussion of the Figures 16, 17, 18, 19 and 20 it becomes obvious that the framework of market dynamics is capable to model complex economic actions from a micro economic to a macro economic level. Using the multi-asset definition of markets, i.e., allowing a finite number of assets \((X_{1},X_{2},\ldots,X_{n})\), one could imagine to construct from ground up the world market of all assets. ## 6. The emerging theory of market dynamics, open problems and conclusion Let us we briefly describe the main inspiration and goals of the emerging theory of market dynamics. Starting point is the observation that a thermodynamic system, e.g., a piston, can be reduced to its functionality: It is a mechanism to exchange volume for pressure and vice versa. This shares a lot with the functionalist approach to markets which sees a market as a mechanism to exchange one asset into another. 
Hence, it might not be surprising that meta-principles and ideas from thermodynamics are very useful when studying the structure of markets and the interaction between the markets. One of the main goals of thermodynamics is to describe how and why energy transfers occur if thermodynamic systems are brought into contact. We propose that the main goal of the theory of market dynamics should be to describe how markets and traders interact if brought into contact. In this article, we made the first step toward achieving this goal: We describe how markets aggregate in the simplistic framework of a static, transparent market of two assets. In this manuscript, we introduced new - and renamed some existing - notions to point out the connection to thermodynamics. For example, to make precise the similarities to isotherms in thermodynamics, we call indifference curves _iso-utils_. Isotherms denote curves in the state space that conserve the temperature/energy of the system. In market dynamics, iso-utils denote curves that preserve the utility of the portfolio/market. In thermodynamics, the term adiabatic refers to state changes that do not preserve the temperature. Therefore, we call a trader _adiabatic_ if her utility increased after a trade. Another example is the terminology of an _ideal market_ to describe a market with utility function \(U(x,y)=\log x+\log y\). The reason is that the associated iso-utils have the same form as the isotherms of an _ideal gas_. This association was also made in [10] where the relation is called _ideal crypto law_. The _law of market dynamics_ and the term _(market-dynamical) entropy_ (see Section 5) are also inspired by thermodynamics. The second law of thermodynamics states that when energy flows from the hotter to the colder system, some heat energy is transformed into entropy. This yields that this energy is lost in the sense that it cannot be transformed into work anymore. Our fundamental law states that in market aggregation, some liquidity is lost to arbitrage. We call (market-dynamical) entropy the amount of liquidity that got lost through arbitrage. The liquidity is lost because the arbitrageur does not have the intention to re-inject her arbitrage profit back into the market. However, entropy plays a more cheerful role in market dynamics as in thermodynamics, where it is seen as the cause of the heat death of the universe. As we have seen in Section 5, entropy essentially measures the size of the economic activity that resulted from bringing economic agents into contact with each other. That opens an interesting perspective to policy makers: Maximizing the market-dynamical entropy corresponds to maximizing economic activity. How can this be achieved? By making economic agents interact as much as possible. As entropy is maximized in transparent markets, the policy maker should set rules which reduce obstacles to interaction as much as possible, and promote and reward transparency such that economic agents reveal their iso-utils. However, just concentrating on achieving transparency is not necessarily the best, as transparency obviously stands in competition with other policy goals like privacy protection. From the present status, there are many directions for future research. For example, the authors work on a manuscript studying in more detail the ideal market and the role of temperature. Another project is about _(market-dynamical) work_ and its role in _financial engines_. 
We believe that work might indicate how much value is transferred from one asset to another. Financial engines are inspired by heat engines and seem to be a very useful tool to describe financial _bubbles_, especially in the context of trading meme stocks, which are driven by bullish sentiments of retail investors; cf., e.g., [12]. Additionally, the authors plan to study the role of market-dynamical entropy as another source of _price fluctuations_. In this project, the main idea is to split price volatility into two parts: the first part reflects the iso-util activity of traders on the visible market, and the second part reflects adiabatic activity caused by the entropy of the hidden market. There is still a lot of work ahead. Are there more concepts in market dynamics that correspond to thermodynamic objects (e.g., energy, free energy, internal energy, temperature, and enthalpy)? If so, what relations and main principles do they satisfy? To give an example from thermodynamics, the formula \(F=U-TS\) connects the Helmholtz free energy \(F\) to the internal energy \(U\), the temperature \(T\), and the entropy \(S\). Once the theory of market dynamics is sufficiently developed and its main principles are identified, one could turn to developing a theory of market mechanics. As in statistical mechanics, the main goal of market mechanics would be to derive the main principles of market dynamics from the mechanics that govern the microscopic interaction of traders. We want to point out that our main result is not that Pareto-optimal aggregation is possible in a static and transparent market. As our model of a financial market can be interpreted as a pure exchange economy, this would directly follow from the _First Fundamental Theorem of Welfare Economics_ [1]. The main contribution of our work is to identify a realistic arbitrage-mediated aggregation mechanism and to study its implications, as we outlined in the notion of entropy and the law of market dynamics. There are many ways to refine our model. For instance, it would be natural to study the case of multiple assets \((X_{1},\ldots,X_{n})\) and not just one asset pair \((X,Y)\). It is also an intriguing question to identify compatibility conditions under which two markets \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) with asset pairs \((X_{1},X_{2})\) and \((X_{2},X_{3})\) combine into a single joint market \(\mathcal{M}_{3}\) of the assets \((X_{1},X_{2},X_{3})\), or to study how markets of different assets are aggregated. Another direction would be to include production and consumption processes, as was done for example in the Arrow-Debreu model. Yet another direction would be to study aggregation in non-transparent markets. This would be the area of _hidden market models_, which could be inspired by hidden Markov models. The main question would be: Is it possible to learn the hidden part of the utility function of the market from data?

Let us summarize the contributions of this manuscript.

* The main topic is the aggregation of financial markets.
* In Section 2.2 we clarified how markets, from an atomistic trader to the global market, can be modeled with the same tool, namely utility functions.
* We discussed the role of iso-utils and their relation to utility functions (see Sections 3 and 4).
* In Section 5, we presented a simple, yet efficient aggregation mechanism based on arbitrage.
* We discussed the emerging theory of market dynamics and introduced fundamental notions like iso-util, adiabatic entropy and the law of market dynamics (see Sections 5 and 6).
* The presentation is kept as simple as possible and restricted to the case of a transparent static market of one asset pair \((X,Y)\).
* We discussed the difference between visible and hidden markets and their implications (see Section 1).
* In Section 6, we gave a list of open problems and our vision of how to further develop the theory of market dynamics.

We believe that there are still a lot of interesting research questions to address before the aggregation of financial markets is sufficiently understood. Then, the next natural but challenging step would be to study how the utility functions of individual traders influence each other once they are brought into contact via the aggregated market.

## Acknowledgment

The authors want to extend their gratitude to Zachary Feinstein. His excellent presentation at the UCLA research seminar in Mathematical Finance and the follow-up discussions inspired the authors to pursue this research project.
2302.14711
A guided light system for agile individual addressing of Ba$^+$ qubits with $10^{-4}$ level intensity crosstalk
Trapped ions are one of the leading platforms for quantum information processing, exhibiting the highest gate and measurement fidelities of all contending hardware. In order to realize a universal quantum computer with trapped ions, independent and parallel control over the state of each qubit is necessary. The manipulation of individual qubit states in an ion chain via stimulated Raman transitions generally requires light focused on individual ions. In this manuscript, we present a novel, guided-light individual addressing system for hyperfine Ba$^+$ qubits. The system takes advantage of laser-written waveguide technology, enabled by the atomic structure of Ba$^+$, allowing the use of visible light to drive Raman transitions. Such waveguides define the spatial mode of light, suppressing aberrations that would have otherwise accumulated in a free-space optics set up. As a result, we demonstrate a nearest neighbour relative intensity crosstalk on the order of 10$^{-4}$, without any active aberration compensation. This is comparable to or better than other previous demonstrations of individual addressing. At the same time, our modular approach provides independent and agile control over the amplitude, frequency, and phase of each channel; combining the strengths of previous implementations.
Ali Binai-Motlagh, Matthew Day, Nikolay Videnov, Noah Greenberg, Crystal Senko, Rajibul Islam
2023-02-28T16:20:18Z
http://arxiv.org/abs/2302.14711v1
A guided light system for agile individual addressing of Ba\({}^{+}\) qubits with \(10^{-4}\) level intensity crosstalk ###### Abstract Trapped ions are one of the leading platforms for quantum information processing, exhibiting the highest gate and measurement fidelities of all contending hardware. In order to realize a universal quantum computer with trapped ions, independent and parallel control over the state of each qubit is necessary. The manipulation of individual qubit states in an ion chain via stimulated Raman transitions generally requires light focused on individual ions. In this manuscript, we present a novel, guided-light individual addressing system for hyperfine Ba\({}^{+}\) qubits. The system takes advantage of laser-written waveguide technology, enabled by the atomic structure of Ba\({}^{+}\), allowing the use of visible light to drive Raman transitions. Such waveguides define the spatial mode of light, suppressing aberrations that would have otherwise accumulated in a free-space optics set up. As a result, we demonstrate a nearest neighbour relative intensity crosstalk on the order of \(10^{-4}\), without any active aberration compensation. This is comparable to or better than other previous demonstrations of individual addressing. At the same time, our modular approach provides independent and agile control over the amplitude, frequency, and phase of each channel; combining the strengths of previous implementations. ## I Introduction Trapped ions have the longest coherence times [1; 2], highest fidelity single and two qubit gate operations [3; 4] as well as state-preparation and measurement (SPAM) fidelities [5; 6; 7] of any experimental quantum information processing (QIP) platform. In recent years, barium has emerged as a popular candidate for trapped ion QIP due to the availability of a spin-1/2 isotope, long lived meta-stable states and visible wavelength atomic transitions. This favourable atomic structure has allowed for the highest experimentally demonstrated SPAM fidelities of any experimental qubit [8]. For applications in QIP, the ability to manipulate the quantum state of individual ions is of utmost importance. Independent, coherent control of one or more qubits encoded in the hyperfine structure of an ion can be accomplished via stimulated Raman transitions. This experimental challenge is realized by tightly focusing individual laser beams at chosen ion sites across a chain [9; 10]. The focus of this manuscript is on the necessary optical infrastructure needed for the individual addressing of long chains (\(N>10\)) of Ba\({}^{+}\) ions with low crosstalk and independent control. The visible wavelength Raman transition (532 nm) in Ba\({}^{+}\) ions enables the use of laser-written waveguides as well as fiber coupled AOMs. The operation of the former has not been demonstrated in the UV, while the latter is not commercially available for such short wavelengths. The ion of choice for many previous demonstrations of quantum computation has been \({}^{171}\)Yb\({}^{+}\), with 355 nm Raman transitions. \({}^{133}\)Ba\({}^{+}\) has all the same desirable properties of \({}^{171}\)Yb\({}^{+}\), with the added benefit of visible wavelength transitions, that enable the use of new optical technologies for robust quantum control of the ion. The ability to split and modulate light in waveguides, afforded by the use of such technologies, is central to the benefits of our implementation. 
For complete quantum control over a chain of ions, an individual addressing system must provide agile (with a time scale faster than the single qubit gate time) and independent control over the temporal characteristics (intensity, frequency, phase) of each beam. These are key requirements for enabling arbitrary single qubit, and high fidelity multi-qubit entangling gates as well as several quantum simulation protocols [11; 12]. Another major concern for the design of such a system is the intensity crosstalk between ion sites due to overlap of neighbouring laser beams. This presents a demanding optical engineering challenge as for most ions, the diffraction limited spot of the individual addressing beam is on order of a micron while the ion spacing is at most several microns in a typical chain. Thus, one must consider the error introduced to neighbouring sites when a given ion is addressed. Since this error is unitary, algorithmic techniques can be used to reduce the effects of crosstalk at the cost of greater circuit depth [13]. However, to get the most out of NISQ devices it is more favourable to reduce this error through optical engineering. A good target for the crosstalk intensity, relative to the intensity of the target ion, is \(10^{-4}\). Under a suitable optical scheme, this translates to a gate error on the order of \(10^{-4}\) which is below the threshold for many error correcting codes for fault-tolerant quantum computation [14]. Previous demonstrations of individual addressing have been enabled by three predominant technologies: micro-mirrors, multi-channel acousto optic modulators (MAOMs) and acousto-optic deflectors (AODs). Table 1 provides a comparison of each of these technologies. Beam steering with a single micro-mirror device allows the deflection of a high quality beam between array sites, leading to low crosstalk between sites at the expense of serial addressing [15]. Holographic beam shaping has further extended the micro-mirror device concept by using micro-mirror arrays, with each mirror much smaller than the beam size such that arbitrary beam profiles can be generated at ion sites [16]. This allows for parallel, selective addressing with low crosstalk at the sacrifice of independent control of the frequency of each beam. While independent control over the intensity of each channel is possible, the slow switching rate of the micro-mirrors inhibits pulse shaping for the purpose of optimal control. AODs allow for agile intensity control, however, they also cannot provide independent frequency control at specific ion sites. On the other hand, independent and agile control of frequency, intensity and phase can be accomplished through the use of MAOMs, which can generate an array of beams using a single diffractive optical element. However, since the MAOM is formed from a single crystal with individual acoustic transducers utilized to generate each beam, there is significant crosstalk between neighboring channels [9; 17]. To combine full independent control of each beam with low cross talk, we propose a guided-light individual addressing system (GLIAS) that makes use of a laser-written waveguide splitter, manufactured using femtosecond laser direct-write (FLDW), and fiber coupled AOMs for the individual addressing of trapped ions. The use of waveguides suppresses aberrations that would otherwise be introduced in a free-space optical system. 
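A rough back-of-the-envelope sketch of how relative intensity crosstalk propagates to a Rabi-rate error on a neighbouring ion (illustrative only; the square-root and linear scalings are the ones quoted in the Conclusion for a global second beam and for two individually addressed beams, respectively):

```python
# Rough sketch (illustration only): fractional Rabi-rate error on a neighbouring
# ion for a given relative intensity crosstalk I_ct. With one global Raman beam
# the leakage enters at the field level (~ sqrt(I_ct)); with both beams
# individually addressed and of equal intensity it enters linearly (~ I_ct).
import math

def rabi_error(intensity_crosstalk: float, both_beams_addressed: bool) -> float:
    if both_beams_addressed:
        return intensity_crosstalk
    return math.sqrt(intensity_crosstalk)

for i_ct in (1e-3, 1e-4):
    print(f"I_ct = {i_ct:.0e}: global + individual -> {rabi_error(i_ct, False):.2%}, "
          f"individual + individual -> {rabi_error(i_ct, True):.3%}")
# At I_ct = 1e-4 the global-beam configuration gives ~1% Rabi-rate error
# (a pi-pulse on the target rotates the neighbour by roughly pi/100).
```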
Fiber coupled AOMs allow for Agile and independent control of each channel's temporal characteristics without distortion of the beam's spatial profile. An overview of this optical system and the orientation of the beams relative to the ions is shown in Fig. 1. The source is a 532 nm, mode-locked laser (NKT Photonics aeroPULSE PS) that gets divided into two beams. In the most common configuration, the profile in one arm is for global addressing and is shaped such that it illuminates the entire chain. The other arm is for individual addressing and is sent through a series of optics that splits the single beam into multiple channels and focuses each to a size significantly smaller than the ion spacing. The amplitude of each channel can be modulated with fiber AOMs to set the Rabi rate (the characteristic oscillation frequency of each qubit, that sets the single qubit gate time) for single qubit gates or to determine which ions are involved in a multi-qubit gate [19]. The normal modes of the ions are used as a vibrational data bus to create entanglement between two ions and hence enable the implementation of multi-qubit gates. Control over the applied frequency provided by each AOM determines which normal modes are used to mediate entanglement. This gives independent, agile control of each beam at the ion plane. We discuss each element of our proposed system, before presenting the results on the characterization of this individual addressing scheme. ## II System design The GLIAS is composed of four principal sections shown in Fig. 1 (a): splitting, modulation, path-length matching and mode-matching. Each will be discussed in detail in the following sections. ### Splitting and modulation The splitting of a single beam from the laser source is implemented via a custom laser-written waveguide array (manufactured by OptoFab, Access Macquarie Ltd), which is written inside a monolithic block of alumino-borosilicate glass shown in Fig. 1 (b). The chip takes a single beam and sequentially splits the light through concatenated 50/50 evanescent directional couplers. The initial input beam is split into 16 channels for the individual addressing of ions. The amount of coupled light between two waveguides is determined by the proximity as well as length of the waveguides and typical spacing is on the order of microns. The waveguides are created by using a high power, tightly focused femtosecond laser that locally heats up the piece of glass onto which the waveguide is being written. This locally changes the bonding structure of the underlying material at the focus, which increases the local refractive index by a small amount \(\approx 10^{-3}\) relative to the surrounding material [20]. Power coupling from free space into the waveguide is done with a 3-axis translation stage and an objective (20 mm EFL, 15 mm WD, 0.25 NA). The input waveguide is recessed into the glass by 200 \(\mu\)m to increase the amount of power that can be incident on the input facet without damage. The input waveguide has a mode field diameter of \(\sim 4.5\)\(\mu\)m and 2 W of average power can be coupled without damage to the input facet. The waveguide array is coupled into a fiber array, where the 16 individual addressing outputs of the waveguide are each coupled to a single-mode polarization maintaining fiber (PM460). A power tap from the input waveguide is coupled into a multi-mode fiber. The waveguide and fiber array are bonded together using UV curable epoxy. 
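A toy model of how small fabrication variations compound in such a cascaded splitter (illustrative only; the four-level tree of nominally 50/50 couplers matches the 16-channel layout described above, while the per-coupler imbalance used below is an arbitrary assumption, not a measured property of the chip):

```python
# Toy model (illustration only): a 4-level binary tree of nominally 50/50
# directional couplers giving 16 outputs. Each coupler's splitting ratio is
# perturbed by a random error to show how fabrication variations compound into
# channel-to-channel power imbalance; the error magnitude is an arbitrary guess.
import random

def split_tree(power: float, levels: int = 4, ratio_sigma: float = 0.08) -> list:
    outputs = [power]
    for _ in range(levels):
        nxt = []
        for p in outputs:
            r = min(max(random.gauss(0.5, ratio_sigma), 0.0), 1.0)  # perturbed 50/50
            nxt += [p * r, p * (1.0 - r)]
        outputs = nxt
    return outputs

random.seed(0)
channels = split_tree(1.0)
print(f"strongest / weakest channel: {max(channels) / min(channels):.1f}")
```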
Care was taken to ensure that epoxy does not enter the gap between the fiber and waveguide as the epoxy can be damaged from the high optical power, thereby reducing the transmittance of the device. The entire device is placed inside an aluminum case that serves to protect and strain relieve the fragile waveguide-fiber bond. Each of the 16 channels is sent to a fiber coupled AOM (manufactured by Gooch & Housego PLC). The AOMs act as fast optical switches that can be used to precisely time when light is incident on a given ion in a chain, an essential requirement for individual quantum control. These AOMs also allow independent modulation of the amplitude, frequency and phase of each channel, which are key for complete control over the quantum state of the entire chain. ### Path length matching To drive stimulated Raman transitions, the counter propagating beams must have spatial and temporal overlap. The spatial overlap can be obtained by suitable imaging techniques or monitoring ion signals that rely on IA Raman beams (for example measuring the AC stark shift). Temporal overlap is made difficult as the fiber cables length tolerances are on the order of several millimeters. As the pulses from the 532 nm mode-locked laser are 10 picosecond in duration, corresponding to an approximately \(L=3\) mm pulse length, the paths must be matched to a length \(\Delta L\ll L\). To temporally align all pulses, miniature fiber delay stages with 4 mm of travel and a resolution of 210 \(\mu\)m (manufactured by OZ Optics Ltd.) are used for each individual addressing channel. For channels that have a length mismatch beyond what can be accommodated by the 4 mm travel range, the corresponding fibers can be cleaved and spliced. \begin{table} \begin{tabular}{|l|l|l|l|l|} \cline{2-5} \multicolumn{1}{c|}{} & Intensity Crosstalk & Intensity Control & Frequency Control & Phase Control \\ \hline AOD [18] & 5e-3 & \(\checkmark\) & & \(\checkmark\) \\ \hline DMD [16] & 1e-4 & \(\checkmark\)* & & \(\checkmark\)* \\ \hline MCAOM [17] & 1e-3 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline GLIAS & 1e-4 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of the characteristics of several Individual addressing systems. Check mark indicates that independent control (i.e. the ability to control a characteristic of one channel/beam without affecting that of the other channels) is possible. Asterisks in the DMD row indicate that independent control is possible, however it is slow relative to typical single qubit gate times. Figure 1: (a) Schematic of the GLIAS, starting with the splitting done by the laser-written waveguide, modulation of the frequency and intensity of the light with fiber AOMs, temporal overlapping of the individual beams with delay stages, and spacial mode-matching of the ion chain with a micro-lens array before re-imaging onto the ion plane. (b) Diagram of the laser-written waveguide, the key enabling technology for the proposed GLIAS for use with Ba\({}^{+}\) ions. The laser-written waveguide is used to split light into 16 addressing channels, which can then each be fiber coupled, allowing independent modulation with fiber based AOMs. (c) Typical beam orientation for Raman transitions used to drive phase-stable single and two qubit gates in trapped ion chains [19]. The overlap of two beams is necessary to modify the quantum state of the ions. 
In this case one of the beams illuminates the entire chain while the other is tightly focused on each ions. It is also possible to replace this global beam with a second set of individual addressing beams. This can help reduce gate errors at the cost of additional engineering complexity. ### Mode-matching The guided light system terminates at the output of a 16 channel fiber array with a 3.3 \(\mu\)m mode diameter and a 250 \(\mu\)m mode-spacing. This must be mapped onto the 4 \(\mu\)m spacing between the ions in the chain. To satisfy the pitch change, a telecentric imaging system with a demagnification of 62.5x was built, with a design shown in Fig. 2 (a). The pitch between the ions is fixed at 4 \(\mu\)m (chosen as it enables us to attain our desired crosstalk figure while still allowing for relatively high normal modes frequencies) and the maximum available NA is fixed at 0.37 by the geometry of the trap and the vacuum re-entrants. Given these two constraints at the ion plane, we cannot directly image every fiber core and still satisfy the Lagrange invariant [23]. This problem is depicted in Fig. 2 (b). At conjugate planes, the product of the height of the chief ray and the angle of the marginal ray is conserved for paraxial optical systems (perfect imaging). There are two plausible solutions. One could reduce the pitch between channels, thus reducing the height of the Chief ray or one could reduce the fiber NA, thus reducing the angle of the Marginal ray. The former approach was dismissed as simulations indicated significant crosstalk between neighbouring waveguides [22]. To reduce the NA of each fiber core, an array of microlenses are placed at the facet of each fiber to expand the beam waist out of each channel. The custom microlenses were designed using OpticStudio and manufactured by FEMTOprint. An aspheric profile, optimized by considering the resulting crosstalk in the ion plane as well as typical alignment tolerances, was used to minimize spherical aberrations in the system. A picture of these microlenses is shown in Fig. 2 (c). Simulations of the microlenses with the telecentric imaging system indicate a nearest neighbour intensity crosstalk of \(<10^{-5}\). The simulated beam profile after the telecentric imaging system is shown in Fig. 2 (a). The individual addressing system is designed to be telecentric so that the wavevectors of the beams from different channels are all parallel to each other and orthogonal to the chain. This is necessary to minimize coupling to the Figure 2: (a) Telecentric re-imaging system for mapping the microlens array (MLA) image onto the ion plane. The beam shape at the ion is elliptical, with a size orthogonal to the chain that is 4 times larger than that along the chain. This was done to reduce the magnitude of intensity fluctuations on the ion due to instabilities in beam pointing. The elliptical beam requires separate reshaping of the beam waists along the x-axis and y-axis with cylindrical lenses before focusing at the ion plane with an acromatic objective (manufactured by Special Optics Inc.). OpticStudio simulations of the focus at the ion plane through the MLA-Telescope optical system show \(<10^{-5}\) crosstalk along the ion chain axis (x-axis). (b) Optical re-imaging constraints set by the vacuum viewport (0.37 NA) [21] and ion spacing (4 \(\mu\)m), necessitating a reduction in object NA by a MLA in order to satisfy crosstalk requirements for high fidelity gates [22]. 
(c) Profile and picture of the MLA used in part to match the spatial mode of the VGA to the calculated ion spacing. The lenses contained in each row have the same profile, but each row of lenses are different. A single row from the MLA was used to create an image at the ion plane. The MLA was also designed in OpticStudio. axial modes of the chain, which are not intended to be used for QIP in this optical configuration. ## III Results ### Splitting ratios and maximum power throughput All channels coming out of the waveguide were designed to have equal optical powers, so that each ion experiences the same Rabi rate. However, as shown in Fig. 3 (a) we observe relatively large, unpredictable variations in the power output of each channel, due to uncontrolled fabrication variations in individual couplers. There is roughly a factor of 5 difference in the optical power out of the best performing channel (6) and the worst performing channel (16). The channel with the lowest optical power determines the maximum single qubit gate time of the computer. This variation adds a slight complexity to the experiment as the Rabi rate of each ion must be independently known and tracked. If equal Rabi frequencies are desired, the powers on each channel must be made equal, which can be done by controlling the RF power sent to the individual AOMs. Knowledge of the power reaching the ions is of utmost importance as it sets an upper bound on the possible Rabi rate. Insertion losses for major components of the individual addressing system are shown in Table 2. If equal optical power is desired for all channels, the input power must be further attenuated by the AOMs so that all channels have the same power as the channel with the smallest splitting ratio. This balanced output power can be increased by using a subset of all the available channels, ignoring one or more of the channels with the smallest output power. This is because through this omission, the AOMs must compensate for a smaller variation in the unbalanced power out of each channel. The curve in Fig. 3 (b) shows how the power per channel at the ions increases as the number of included channels are reduced, where each time we choose not to use the worst-performing channel in the set. To attain a Rabi rate of around \(2\pi\times 1\) MHz, we need the optical power to be around \(2\) mW on the individual addressing side, assuming equal optical intensities on both arms of the individual addressing system (see supplementary). From this curve it is evident that to attain our desired \(2\pi\times 1\) MHz Rabi rate, we must exclude the two worst-performing channels (channels 15 and 16). It may be possible to extract more power from the device by splicing each subsystem of the individual addressing device together and shortening the length of the fibers. This data is based on an input average power of 2 W into the waveguide chip, which is below the damage threshold for the chip. To protect against power fluctuations due to temperature gradients produced by ambient fluctuations or scattered light, active control of the temperature was built into the protected waveguide casing. Varying the temperature from 20-26 \({}^{\circ}\)C, well beyond the expected change in the temperature of the device, we observe a relatively small change in the splitting ratio (at most 0.6%). This data is shown in Fig. 3 (a). 
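A minimal sketch of the power-balancing trade-off behind Fig. 3 (b) (the per-channel powers below are hypothetical placeholders, not the measured values of Fig. 3 (a); the ~2 mW target is the Rabi-rate requirement quoted above):

```python
# Sketch of the balanced-power argument of Fig. 3 (b) (hypothetical numbers).
# If every channel must be attenuated down to the weakest one, excluding the
# worst channels raises the usable, equalised power per channel at the ions.
def balanced_power(channel_powers_mw, n_used):
    kept = sorted(channel_powers_mw, reverse=True)[:n_used]
    return min(kept)   # all kept channels are attenuated to the weakest kept one

# placeholder per-channel powers at the ion plane (mW), spanning a factor of ~5
powers = [5.0, 4.6, 4.2, 4.0, 3.8, 3.5, 3.3, 3.1, 2.9, 2.7, 2.5, 2.3, 2.2, 2.1, 1.5, 1.0]
for n in (16, 15, 14):
    print(f"{n} channels used -> {balanced_power(powers, n):.1f} mW per channel")
# With these placeholder values, dropping the two weakest channels lifts the
# balanced power above the ~2 mW needed for a 2*pi x 1 MHz Rabi rate.
```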
### Path length matching characterisation To optimize the temporal overlap we send light from two channels of the FLDW splitter chip through to the VGA and block the rest using the AOMs. The resulting interference pattern formed by the two beams is then captured using a camera, at a point where they have significant spatial overlap. If there is no temporal overlap between the pulses then no interference fringes will be detected on the camera. The overlap between the pulses can be optimized by maximizing the contrast of the resulting fringes, through displacement of the fiber delay stages. The experiment is described in Fig. 4 (a) and the resulting fringe contrast is shown in Fig. 4 (b). Prior to optimization, the fringes are faint and hard to detect. After optimization, we see a clear interference pattern with large fringe contrast. For some channels in the system, the 4 mm range of the stages was not sufficient to fully optimize the interference pattern. For these, we use a pair of stages with longer travel range to precisely determine the length mismatch then splice the fibers to correct for this difference. For further fine tuning of the spatial and temporal overlap, the measured Rabi frequency or AC stark-shift at the ions can be used as the feedback signal. ### Microlens array characterisation The microlenses are manufactured using ultrafast lasers to locally modify the index and density of a piece \begin{table} \begin{tabular}{l|l} Element & Insertion loss (dB) \\ \hline Waveguide & 5 \(\pm\)0.4 \\ Waveguide-VGA bond + Fibers & 4.9 \(\pm\)0.4 \\ AOM (single device) & 3 \(\pm\)0.4 \\ Variable Delay lines & 1.9-4.6 \(\pm\)0.4 \\ VGA & 1.4 \(\pm\)0.4 \\ Telescope + Viewport & 1.2 \(\pm\)0.4 \\ \end{tabular} \end{table} Table 2: Breakdown of the insertion loss of the major components of the guided-light individual addressing system. Besides loss of power from individual components, mismatch in the output power from different waveguide channels is a limiting factor in maximum obtainable uniform power at each ion site across the chain. Loss from power matching is characterised in Fig. 3. Errors are calculated based on the measurement uncertainty of the power sensor used for measurements (\(\pm\)3%). of glass. This technique combined with chemical etching can be used to realize complex three dimensional structures in glass, such as our microlenses. The initial set of fabricated microlenses possessed a shorter focal length than what we had designed for, possibly due to the lens polishing step in the manufacturing process. To account for these fabrication inaccuracies, a grid of microlenses was designed with a range of effective focal lengths (EFL) starting from 0.525 mm and going up to 1 mm in 0.025 mm increments. A total of 20 microlens arrays (MLA) were produced, with each array (row) containing identical microlenses. Through this process, we were able to identify a row with the desired EFL. ### Crosstalk intensity characterisation To characterize the intensity crosstalk between channels, the beams after the MLA are imaged onto a camera using a single 250 mm focal length lens, providing a 0.5x magnification. Several profiles at multiple exposures were recorded and stitched together, to obtain a single profile with sufficient dynamic range to assess the crosstalk level between the neighbouring channels. The beam profile images are taken with one channel turned on at a time. The data for all 16 channels is shown in Fig. 5 (a). 
The profiles near the left end of the system are broadened compared to the rest of the channels and as a result exhibit a larger amount of intensity crosstalk. For the first three channels, the crosstalk is on the order of \(10^{-3}\). For the rest, it is on the order of \(10^{-4}\). This asymmetry is most likely due to imperfections in the MLA manufacturing process. Fig. 5 (b) elucidates this by showing the crosstalk for each channel when its nearest neighbours are simultaneously turned on. To verify these results, the crosstalk measurement was repeated for a single channel after the full individual addressing system (62.5x demagnification). To capture the beam profile at the ion plane, a camera with a relatively small pixel size is required, given that the expected beam waist at the focus of the objective is around 0.9 \(\mu\)m. For this we use the Raspberry Pi Noir V2 camera. It is a color camera with each unit cell consisting of 2 green pixels, one red pixel and one blue pixel. The size of each unit cell is 1.12 \(\mu\)m, thus the resolution is too low relative to the beam radius to obtain a precise beam profile. Nevertheless, the measured intensity crosstalk level should be comparable to that obtained in the prior measurement. This profile is shown in Fig. 5 (c). Our measurements indicate an intensity crosstalk slightly greater than \(10^{-4}\), 4 \(\mu\)m away from the peak, at the location of the neighbouring ions. This is commensurate with the data shown in Fig. 5 (a).

## Conclusion

In this paper we demonstrated the design and characterization of an individual addressing system suited for digital quantum computation and analog simulation with a chain of up to 16 barium ions. The system utilizes fiber optic and waveguide technology, making it modular and thus scalable and upgradable if addressing of larger chains is desired. This modularity also means that we could replace the waveguide chip, if we are able to design and manufacture one with a more balanced splitting ratio, without affecting the alignment of the system to the ions. Our architecture is not limited to FLDW technology. Future upgrades could replace this chip with higher index contrast, lithographically defined integrated waveguides, reducing crosstalk due to evanescent coupling. This crosstalk sets a threshold on the minimum possible waveguide pitch. Reduction of this threshold would allow the Lagrange invariant to be satisfied through a change in the pitch of the VGA fibers, thus removing the need for a MLA and thereby further reducing the number of mechanical degrees of freedom. This control would also allow for the pitch to be exactly tailored to the pitch of the ion chain, which is necessary for chains with non-uniform spacing.

Figure 3: (a) Relative power output of a single-mode fiber glued to each channel of the laser-written waveguide. (b) The maximum balanced power per channel as a function of the number of channels included. Error bars are calculated based on the measurement uncertainty of the power sensor used for measurements (\(\pm 3\%\)).

We measure an intensity crosstalk on the order of \(10^{-4}\). If the other arm of the Raman system in Fig. 1 globally illuminates the ions then the resulting error in the Rabi rate is proportional to the square root of the intensity crosstalk. This corresponds to a crosstalk error in Rabi rate of 1% [24; 25], which is comparable to the state-of-the-art.
To make this more concrete, if a \(\pi\)-pulse is performed on a desired ion, this crosstalk means that its nearest neighbours undergo a \(\pi/100\) pulse. To reduce this error further, one can implement individual addressing for both arms of the Raman system in which case the error in the Rabi rate scales linearly with the intensity (assuming equal intensity in the two beams). This will result in a crosstalk error in Rabi rate of 0.01%, commensurate with requirements of quantum error correction algorithms [14]. At the same time, our individual addressing system provides rapid, simultaneous control over the frequency, phase and amplitude of each beam using fiber coupled AOMs. This independent control is necessary for simulation of arbitrary fully connected spin models with ions [12; 26]. Further, the use of waveguides and fibers simplifies optical alignment and makes the system modular. Maintenance or upgrades to one part of the system has no effect on the alignment of components downstream from it. The use of this technology is not limited in scope to unitary qubit operations. FLDW waveguide technology is compatible with 493 nm, the S\({}_{1/2}\) to P\({}_{1/2}\) transition wavelength of the Barium ion, which is used for quantum state detection. Thus our system can be used for independent and agile state detection, in addition to unitary gate operations. This enables the implementation of mid-circuit measurement which is key to the realization of quantum error correction. Figure 4: (a) Overview of the entire IA system with fiber delay stages to ensure that the pulses arrive at the ions at the same time. Stages with 4 mm travel range were used to achieve temporal overlap between a pair of beams in the individual addressing system. (b) Fringe contrast created by the overlap of the two beams. Maximizing the fringe contrast ensures that all pulses arrive at the ions at the same time. For the purpose of path length matching, all AOMs are set to the same frequency. ## Acknowledgements This research was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), Grant Nos. RGPIN-2018-05253 and RGPIN-2018-05250, and the Canada First Research Excellence Fund (CFREF), Grant No. CFREF-2015-00011. CS is also supported by a Canada Research Chair.
2309.07379
A closed manifold is a fat CW complex
In this paper, we introduce a notion of a fat CW complex to show that a closed manifold is a regular CW complex, while it is not always the case if we discuss about a smooth CW structure, introduced by the first author, instead of a fat CW structure. We also verify that de Rham theorem holds for a fat CW complex and that a regular CW complex is reflexive in the sense of Y.~Karshon, J.~Watts and P.~I-Zemmour. Further, any topological CW complex is topologically homotopy equivalent to a fat CW complex. It implies that there are lots of non-manifold examples supporting de Rham theorem.
Norio Iwase, Yuki Kojima
2023-09-14T01:42:34Z
http://arxiv.org/abs/2309.07379v1
# A closed manifold is a fat CW complex ###### Abstract. In this paper, we introduce a notion of a fat CW complex to show that a closed manifold is a regular CW complex, while it is not always the case if we discuss about a smooth CW structure, introduced by the first author, instead of a fat CW structure. We also verify that de Rham theorem holds for a fat CW complex and that a regular CW complex is reflexive in the sense of Y. Karshon, J. Watts and P. I-Zemmour. Further, any topological CW complex is topologically homotopy equivalent to a fat CW complex. It implies that there are lots of non-manifold examples supporting de Rham theorem. Key words and phrases:diffeology, manifold, CW complex, handle, reflexivity, partition of unity 2020 Mathematics Subject Classification: Primary 57R12, Secondary 57R55, 57R35, 55P99 ## 1. Introduction A site is a concrete category with a 'coverage' assigning a 'covering family' to each object. For a site \(\mathsf{C}\), we denote by \(\mathsf{Obj}\,(\mathsf{C})\) the class of objects, by \(\mathsf{Mor}_{\mathsf{C}}\,(V,U)\) the set of morphisms from \(V\) to \(U\), and by \(\mathsf{Cov}_{\mathsf{C}}\,(U)\) the set of covering families on \(U\), where \(U\), \(V\in\mathsf{Obj}\,(\mathsf{C})\). We denote by \(\mathsf{Set}\) the category of sets and maps between sets. For a given set \(X\), we have two contravariant functors \(\mathcal{W}_{X}\), \(\mathcal{K}_{X}:\mathsf{C}^{\mathrm{op}}\to\mathsf{Set}\) defined by 1. \(\mathcal{W}_{X}(U)=\mathrm{Map}(U,X)\) and \(\mathcal{W}_{X}(\phi)(P)=P\circ\phi\). 2. \(\mathcal{K}_{X}(U)=\{\,Q\in\mathcal{W}_{X}(U)|Q\) is locally constant \(\}\) and \(\mathcal{K}_{X}(\phi)(Q)=Q\circ\phi\), for any \(U\in\mathsf{Obj}\,(\mathsf{C})\), \(P\in\mathcal{W}_{X}(U)\), \(Q\in\mathcal{K}_{X}(U)\) and \(\phi\in\mathsf{Mor}_{\mathsf{C}}\,(V,U)\), where \(Q:U\to X\) is said to be locally constant, if there exists a covering family \(\{\,\psi_{\alpha}:V_{\alpha}\to U\,\}_{\alpha\in\Lambda}\) of \(U\) such that \(Q\circ\psi_{\alpha}\) is constant for all \(\alpha\in\Lambda\). It then follows that \(\mathcal{K}_{X}\subset\mathcal{W}_{X}\) as contravariant functors, in other words, \(\mathcal{K}_{X}(U)\subset\mathcal{W}_{X}(U)\) and \(\mathcal{K}_{X}(\phi)=\mathcal{W}_{X}(\phi)|_{\mathcal{K}_{X}(U)}\). Let \(\mathsf{Domain}\) be the category of open sets in \(\mathbb{R}^{n}\) for some \(n\geq 0\), and smooth functions between them, with a 'coverage' assigning a 'covering family' to each open set, which is the set of open coverings. An element of \(\mathcal{W}_{X}(U)\), \(U\in\mathsf{Domain}\), is called a parametrization. We call a pair \((X,\mathcal{D}_{X})\) a diffeological space, if it satisfies the following conditions. 1. \(X\) is a set and \(\mathcal{D}_{X}:\mathsf{Domain}^{\mathrm{op}}\to\mathsf{Set}\) is a contravariant functor. 2. \(\mathcal{K}_{X}\subset\mathcal{D}_{X}\subset\mathcal{W}_{X}\) as contravariant functors. 3. For given \(U\in\mathsf{Obj}\,(\mathsf{Domain})\) and \(P\in\mathcal{W}_{X}(U)\), \(P\in\mathcal{D}_{X}(U)\) if there exists \(\{U_{\alpha}\}_{\alpha\in\Lambda}\in\mathsf{Cov}_{\mathsf{Domain}}\,(U)\) such that \(P|_{U_{\alpha}}\in\mathcal{D}_{X}(U_{\alpha})\) for all \(\alpha\in\Lambda\). A map \(f:X\to Y\) is said to be smooth, if the natural transformation \(f_{*}:\,\mathcal{W}_{X}\to\mathcal{W}_{Y}\) given by \(f_{*}(P)=f\circ P\) satisfies \(f_{*}(\mathcal{D}_{X}(U))\subset\mathcal{D}_{Y}(U)\) for any \(U\in\operatorname{Obj}\nolimits(\operatorname{Domain})\). 
An element of \(\mathcal{D}_{X}(U)\) is called a plot of \(X\) on \(U\), and \(\mathcal{D}=\bigcup_{U}\mathcal{D}_{X}(U)\) is called a 'diffeology' on \(X\). In this paper, \(\operatorname{Diffeology}\) stands for the category of diffeological spaces and smooth maps, which is cartesian-closed, complete and cocomplete (see [13]). We denote by \(\operatorname{Manifold}\) the category of smooth manifolds with or without boundary, which forms a full subcategory of \(\operatorname{Diffeology}\), where a manifold is assumed to be paracompact. We introduce a weaker version of "dimension" for a diffeological space \(X\): a family \(\mathcal{F}\) of smooth maps to \(X\) is called a generalised generating family (or GGF) on \(X\), if 1. the domain \(\operatorname{dom}(f)\) of \(f\in\mathcal{F}\) is open in \(\mathbb{R}^{d}_{+}\), \(d\geq 0\), where \(\mathbb{R}_{+}=[0,\infty)\), and 2. the map \(F:\,\coprod_{f\in\mathcal{F}}\operatorname{dom}(f)\to X\), given by \(F|_{\operatorname{dom}(f)}=f\), is a subduction. Let \(\operatorname{w-dim}\mathcal{F}\) be the smallest \(d\geq 0\) such that \(\dim\operatorname{dom}(f)\leq d\) for any \(f\in\mathcal{F}\). **Definition 1.1**.: \(\operatorname{w-dim}X=\min\{\operatorname{w-dim}\mathcal{F}\mid\mathcal{F}\text{ is a GGF on }X\}\). Then, for \(M\) a \(d\)-manifold with corners, we clearly have \(\operatorname{w-dim}M=d\). Generally, \(\operatorname{w-dim}X\) is less than or equal to \(\dim X\), the original version of dimension (see [13]). Our goal in this paper is to show the following picture in \(\operatorname{Diffeology}\). In the above picture, the "thin CW" part includes the exotic interval \(\mathbb{I}\) introduced in [20] as well as all topological homotopy types of topological CW complexes, and the part (\(*\)) consists of thin CW complexes of dimension \(0\). We expect that the "regular CW" part includes all compact manifolds and that an open manifold is a regular open CW complex, while they are not reflected in the above picture.

## 2. Smooth Handles

Taking the \(D\)-topology (see [13]) gives a left-adjoint forgetful functor \(T:\operatorname{Diffeology}\to\operatorname{Topology}\). Let us denote by \(\operatorname{NumGenTop}\) the category of numerically generated topological spaces, introduced by Shimakawa-Yoshida-Haraguchi in [21] as the image of \(T\). For a smooth manifold with or without boundary, \(T(X)\) is often denoted again by \(X\). Let \(\lambda:\mathbb{R}\to\mathbb{R}\) be a smooth function given as follows: \[\lambda(t)=\frac{1}{\alpha}\cdot\int_{0}^{t}\!\!\ell(3x)\cdot\ell(3-3x)\,dx,\quad\alpha=\int_{0}^{1}\!\!\ell(3x)\cdot\ell(3-3x)\,dx,\quad\ell(t)=\begin{cases}0,&t\leq 0,\\ e^{-1/t},&t>0.\end{cases}\] According to [wol], \(e^{4/3}\cdot\alpha=0.55731493\cdots>11/20\), and hence we have \(\frac{\ell(3/2)^{2}}{\alpha}<20/11\). Further, we obtain \(\ell^{\prime}(t)=\frac{1}{t^{2}}\cdot\ell(t)\) if \(t>0\), and \(\lambda\) enjoys the following four properties. a) \(\lambda(t)=0\) if \(t\leq 0\), b) \(\lambda(t)+\lambda(1-t)=1\), c) \(\lambda^{\prime}(t)=\frac{1}{\alpha}\cdot\ell(3t)\cdot\ell(3-3t)\), d) \(\frac{d}{dt}\ell(3t)=\frac{1}{3t^{2}}\cdot\ell(3t)\) if \(t>0\), and \(\lambda^{\prime\prime}(t)=\frac{1-2t}{3t^{2}(1-t)^{2}}\cdot\lambda^{\prime}(t)\) if \(0<t<1\). By b), we have \(\lambda(1/2)=1/2\).
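A quick numerical sanity check of these properties (not part of the argument; the quadrature used below is an arbitrary choice) can be carried out as follows:

```python
# Numerical sanity check (not part of the argument): verify property b), the
# value lambda(1/2) = 1/2, and the bound of Proposition 2.1 by quadrature.
import math

def ell(t: float) -> float:
    return 0.0 if t <= 0 else math.exp(-1.0 / t)

def integrate(f, a, b, n=20000):           # simple midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

alpha = integrate(lambda x: ell(3 * x) * ell(3 - 3 * x), 0.0, 1.0)

def lam(t: float) -> float:
    if t <= 0:
        return 0.0
    upper = min(t, 1.0)                     # the integrand vanishes outside [0, 1]
    return integrate(lambda x: ell(3 * x) * ell(3 - 3 * x), 0.0, upper) / alpha

print(round(math.exp(4.0 / 3.0) * alpha, 6))   # ~0.557315 > 11/20
print(round(lam(0.5), 6))                       # ~0.5, i.e. lambda(1/2) = 1/2
print(round(lam(0.3) + lam(0.7), 6))            # ~1.0, property b)
print(ell(3 / 2) ** 2 / alpha < 20 / 11)        # True, bound of Proposition 2.1
```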
By d), together with the fact that \(\lambda(0)=0\) and \(\lambda(1/2)=1/2\), we obtain that \(0<\lambda(t)<t\) if \(0<t<1/2\). It also follows from a) and b) that \(\lambda(t)=1\) if \(t\geq 1\). We obtain the following proposition by c) and d). **Proposition 2.1**.: \(\lambda^{\prime}(t)\leq\lambda^{\prime}(1/2)=\frac{\ell(3/2)^{2}}{\alpha}<20/11\) _for all \(t\in\mathbb{R}\)._ Let \(\phi:\mathbb{R}\to[0,\infty)\) be a smooth function given as follows. \[\phi(t)=\int_{0}^{1/2+t}\!\lambda(x)\,dx\] Then by a) through d), \(\phi\) enjoys the following four further properties. e) \(\phi(t)=0\) if \(t\leq-1/2\), f) \(\phi(t)=t\) if \(t\geq 1/2\), g) \(0<\phi(0)<1/8\), h) \(\phi\) is monotone increasing on \(\mathbb{R}\) and strictly monotone increasing on \([-1/2,\infty)\). Then we obtain the following proposition. **Proposition 2.2**.: \(\phi(t)-\phi(-t)=t\) _for all \(t\in\mathbb{R}\) and \(\phi(t)+\phi(-t)=|t|\) if \(|t|\geq 1/2\)._ Proof.: Firstly, \(\phi(0)-\phi(0)=0\), and \(\frac{d}{dt}(\phi(t)-\phi(-t))=\lambda(1/2+t)+\lambda(1/2-t)=1\) by b). Thus \(\phi(t)-\phi(-t)=t\). Secondly, if \(t\geq 1/2\), then \(\phi(t)=t\) by f) and \(\phi(-t)=0\) by e), and hence we have \(\phi(t)+\phi(-t)=|t|\). If \(t\leq-1/2\), then \(\phi(t)=0\) by e) and \(\phi(-t)=-t=|t|\) by f) as well, and hence we have \(\phi(t)+\phi(-t)=|t|\) if \(|t|\geq 1/2\). Now we are ready to introduce the following manifolds (with boundary). \[\mathbb{S}^{n-1}=\{u\in\mathbb{R}^{n}\mid\|u\|\geq 1\}=\mathbb{R}^{n}\setminus\operatorname{Int}D^{n}\supset S^{n-1},\] \[\mathbb{D}^{n,m}=\{(u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\mid\|u\|\geq 1-\phi(2-\|u\|-\|v\|)\},\] \[\mathbb{S}^{n-1,m}=\{(u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\mid\|u\|\geq 1\}=\mathbb{S}^{n-1}\times\mathbb{R}^{m},\] where we denote \(D^{n}=\{v\in\mathbb{R}^{n}\mid\|v\|\leq 1\}\supset S^{n-1}=\{v\in\mathbb{R}^{n}\mid\|v\|=1\}\). Then we see that \(\mathbb{S}^{n-1}\approx S^{n-1}\times[1,\infty)\), where \(u\in\mathbb{S}^{n-1}\) corresponds to \((\frac{1}{\|u\|}u,\|u\|)\in S^{n-1}\times[1,\infty)\). **Example 2.3**.: Since \(\phi(u-2)-\phi(2-u)=u-2\) by Proposition 2.2, \(1-\phi(2-u)=u-1-\phi(u-2)\leq u-1<u\). Thus \(\mathbb{D}^{n,0}=\mathbb{R}^{n}\), \(\mathbb{S}^{n-1,0}=\mathbb{S}^{n-1}\), and \(\mathbb{D}^{0,0}=\{0\}\), \(\mathbb{S}^{-1,0}=\emptyset\). **Proposition 2.4**.: _Let \((u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\). Then \((u,v)\in\partial\mathbb{D}^{n,m}\) if and only if \((\|u\|,\|v\|)=(1-\phi(-t),1+\phi(t))\) for \(t=\|u\|+\|v\|-2\)._ Proof.: Assume \((u,v)\in\partial\mathbb{D}^{n,m}\). Then \(\|u\|=1-\phi(2-\|u\|-\|v\|)\), and by Proposition 2.2, \(\|v\|=1+\phi(\|u\|+\|v\|-2)\), and hence we obtain \((\|u\|,\|v\|)=(1-\phi(-t),1+\phi(t))\) for \(t=\|u\|+\|v\|-2\). The converse is clear. **Proposition 2.5**.: \(\mathbb{S}^{n-1}\times\mathbb{R}^{m}\subset\mathbb{D}^{n,m}\supset\mathbb{R}^{n}\times D^{m}\)_._ Proof.: Let \((u,v)\in\mathbb{S}^{n-1}\times\mathbb{R}^{m}\). Then \(\|u\|\geq 1\geq 1-\phi(2-\|u\|-\|v\|)\) by the definition of \(\phi\), and hence \((u,v)\in\mathbb{D}^{n,m}\). Let \((u,v)\in\mathbb{R}^{n}\times D^{m}\).
Then we have \(\|v\|\leq 1\) and \(\|u\|-1\geq\|u\|+\|v\|-2=\phi(\|u\|+\|v\|-2)-\phi(2-\|u\|-\|v\|)\geq-\phi(2-\| u\|-\|v\|)\) by Proposition 2.2 and the definition of \(\phi\), and hence \((u,v)\in\mathbb{D}^{n,m}\). We obtain the following theorems whose proofs shall be given in Appendix. **Theorem 2.6**.: _There is a diffeomorphism \(\Psi_{n,m}\) : \(\mathbb{R}^{n}\times D^{m}\to\mathbb{D}^{n,m}\)._ By definition, we have \(\partial\mathbb{S}^{n-1}=S^{n-1}=\{u\in\mathbb{R}^{n}\mid\|u\|=1\}\) and \(\partial\mathbb{S}^{n-1,m}=S^{n-1}\times\mathbb{R}^{m}\). Let us denote \(\partial_{0}\mathbb{S}^{n-1,m}=\{(u,v)\in\mathbb{S}^{n-1,m}\mid\|u\|=1,\ \|v\|\leq 3/2\} \subset\partial\mathbb{S}^{n-1,m}\), where one can easily verify that \(\mathbb{D}^{n,m}\!\setminus\!\partial_{0}\mathbb{S}^{n-1,m}\) is the union of two disjoint subsets \(\mathbb{D}^{n,m}\!\setminus\!\mathbb{S}^{n-1,m}\) and \(\mathbb{S}_{0}^{n-1,m}=\mathbb{S}^{n-1,m}\!\setminus\!\partial_{0}\mathbb{S} ^{n-1,m}\) both of which are open in \(\mathbb{D}^{n,m}\). In fact, we see that \(\partial_{0}\mathbb{S}^{n-1,m}=\widehat{\Phi}_{n,m}(S^{n-1}\times D^{m})\), \(\mathbb{D}^{n,m}\!\setminus\!\mathbb{S}^{n-1,m}=\widehat{\Phi}_{n,m}(\operatorname {Int}D^{n}\times D^{m})\) and \(\mathbb{S}_{0}^{n-1,m}=\widehat{\Phi}_{n,m}((\mathbb{R}^{n}\setminus D^{n}) \times D^{m})\). **Theorem 2.7**.: _There is a smooth bijection \(\widehat{\Phi}_{n,m}\) : \((\mathbb{R}^{n}\times D^{m},\mathbb{S}^{n-1}\times D^{m},S^{n-1}\times D^{m}) \to(\mathbb{D}^{n,m},\mathbb{S}^{n-1,m},\partial_{0}\mathbb{S}^{n-1,m})\) which is diffeomorphic apart from \(S^{n-1}\times D^{m}\)._ We remark that \((\mathbb{R}^{n}\!\times\!D^{m},\mathbb{S}^{n-1}\!\times\!D^{m})\) has the topological homotopy type of \((D^{n},S^{n-1})\). ## 3. fat CW Complex Let \(X=(X,\hat{X})\) and \(A=(A,\hat{A})\subset(X,\hat{X})=X\) be pairs of diffeological spaces, and the inclusion map \(A\hookrightarrow X\) is an induction. **Definition 3.1**.: The pair \(X=(X,\hat{X}\!:\!A,\hat{A})\) is said to be a relative fat CW complex if there is a series of pairs \((X^{(n)},\hat{X}^{(n)})\), \(n\geq-1\), of diffeological spaces and (smooth) attaching maps \[h_{n}\ \colon(\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)},\coprod_{j\in J_{n}} \operatorname{Int}\mathbb{S}^{n-1,m_{n}(j)})\to(X^{(n-1)},\hat{X}^{(n-1)}),\] each of which is smooth, where \(m_{n}\) denotes a map from \(J_{n}\) to \(\mathbb{N}_{0}\) the set of non-negative integers, \(n\geq 0\), satisfying the following conditions: 1. \((X^{(-1)},\hat{X}^{(-1)})=(A,\hat{A})\). 2. For \(n\geq 0\), the pair \((X^{(n)},\hat{X}^{(n)})\) is given as follows: let \(\operatorname{Int}h_{n}\ \colon\coprod_{j\in J_{n}}\operatorname{Int} \mathbb{S}^{n-1,m_{n}(j)}\to\hat{X}^{(n-1)}\) be the restriction of \(h_{n}\). 
Then \(\hat{X}^{(n)}\) and \(X^{(n)}\) satisfy * \(\hat{X}^{(n)}\) is the pushout of the inclusion \(\operatorname{Int}i_{n}:\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{S}^{n-1,m_{n} (j)}\hookrightarrow\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\) and \(\operatorname{Int}h_{n}:\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{S}^{n-1,m _{n}(j)}\to\hat{X}^{(n-1)}\) in \(\operatorname{Diffeology}\): \(\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{S}^{n-1,m_{n}(j)}\xrightarrow{ \operatorname{Int}h_{n}}\hat{X}^{(n-1)}\) \(\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\xrightarrow{ \operatorname{Int}\hat{h}_{n}}\hat{X}^{(n)}\) \(\operatorname{b)}\)\(X^{(n)}\) is the pushout of the inclusion \(i_{n}:\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\hookrightarrow\coprod_{j\in J _{n}}\mathbb{D}^{n,m_{n}(j)}\) and \(h_{n}:\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\to X^{(n-1)}\) in \(\operatorname{Diffeology}\): \(\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\xrightarrow{h_{n}}X^{(n-1)}\) \(\coprod_{i_{n}}\)\(\coprod_{j\in J_{n}}\mathbb{D}^{n,m_{n}(j)}\xrightarrow{\hat{h}_{n}}X^{(n)}\) * \(X=(X,\hat{X})\) is a colimit of \(X^{(n)}=(X^{(n)},\hat{X}^{(n)})\), \(n\geq 0\), and \(X^{(n)}=(X^{(n)},\hat{X}^{(n)})\) is called a fat \(n\)-skeleton of \(X=(X,\hat{X})\). If \(A=\hat{A}=\emptyset\), then we say that \(X=(X,\hat{X})\) is a fat CW complex. We can also define an open version of a fat CW complex, which called an _open CW complex_, for a pair \((\hat{X},\hat{A})\) by just using series of diffeological spaces \(\hat{X}^{(n)}\) with (1a) above. Then it would be plausible to see that an open manifold is an open CW complex. _Remark 3.2_.: Apparently, we have a commutative diagram \[\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)} \xleftarrow{\operatorname{Int}i_{n}}\coprod_{j\in J_{n}}\operatorname{Int} \mathbb{S}^{n-1,m_{n}(j)}\xrightarrow{\operatorname{Int}h_{n}}\hat{X}^{(n-1)}\] \[\xleftarrow{\coprod_{j\in J_{n}}\mathbb{D}^{n,m_{n}(j)}}\xleftarrow{ i_{n}}\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\xrightarrow{h_{n}}X^{(n-1)}\] which induces the following commutative diagram: **Example 3.3**.: Let \((\hat{X},\{\hat{X}_{n}\})\) be a smooth CW complex with smooth attaching maps \(\hat{h}_{n}:S^{n-1}\to\hat{X}^{(n-1)}\), \(n\geq 0\). Then there is a fat CW complex \((X,\{X^{(n)}\})\) and smooth injections \(j_{n}:\check{X}_{n}\leftrightarrow X^{(n)}\), \(n\geq 0\) with smooth attaching maps \(h_{n}=j_{n-1}{{}^{\circ}}\check{h}_{n}{{}^{\circ}}p_{n-1}:\mathbb{S}^{n-1} \xrightarrow{p_{n-1}}S^{n-1}\xrightarrow{\check{h}_{n}}\check{X}_{n} \xrightarrow{j_{n-1}}X^{(n-1)}\) with \(m_{n}(j)=0\) for all \(j\), \(n\geq 0\) where \(p_{n-1}\) denotes the projection \(\mathbb{S}^{n-1}\cong S^{n-1}\times[1,\infty)\xrightarrow{\operatorname{pr}_{1 }}S^{n-1}\) a left inverse of the inclusion \(j^{\prime}_{n-1}:S^{n-1}=S^{n-1}\times[1,\infty)\cong\mathbb{S}^{n-1}\). Since the smooth injections \(j_{n}\) are all homeomorphisms, each CW complex is topologically a fat CW complex. In this paper, we say that \((X,A)\) is a _relative thin CW complex_, when \(m_{k}=0\) for all \(k\geq 0\). If further \(A\) is an empty set, then we say that \(X\) is a _thin CW complex_. **Proposition 3.4**.: _Let \(m_{k}=\max\{\,m_{k}(j)\mid j\in J_{k}\,\}\), \(k\geq 0\). Then \(\operatorname{w-dim}X^{(n)}\leq\max\{\,m_{k}+k\mid 0\leq k\leq n\,\}\). 
Hence \(\operatorname{w-dim}X^{(n)}\leq n\), provided that \(X\) is thin._ **Example 3.5**.: Let \(\pi_{\text{set}}:\mathbb{R}\to\mathbb{R}\) be the (continuous) map defined as follows: \[\pi_{\text{set}}(t)=\max\{\,0,\min\{\,t,1\,\}\}=\min\{\,1,\max\{\,t,0\,\}\}\in[ 0,1]\subset\mathbb{R}.\] Then the diffeological quotient \(\mathbb{I}=\mathbb{R}/\pi_{\text{set}}\) is a thin CW complex with a subduction \(\pi:\mathbb{R}\to\mathbb{I}\) by \(\pi(t)=[\pi_{\text{set}}(t)]\), \(t\in\mathbb{R}\). Thus we also have \(\dim\mathbb{I}=1\). ## 4. Basic Properties In this section, we show some basic properties of a fat CW complex. Let \(X\) be a collection of pairs \((X^{(n)},\hat{X}^{(n)})\) of diffeological spaces and attaching maps \(h_{n}:\prod\limits_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\to X^{(n-1)}\). Then it is clear that \(X^{(n-1)}\) is a closed subset of \(X^{(n)}\) under the \(D\)-topology. We first show that a fat CW complex is a paracompactum. **Proposition 4.1**.: _The \(D\)-topology of \(X\) is paracompact and Hausdorff._ Proof.: We show that \(X^{(n)}\) is paracompact and Hausdorff by induction on \(n\geq-1\). When \(n=-1\), there is nothing to do. So, we assume that we have done up to \(n-1\), \(n\geq 0\). Firstly, \(\mathbb{D}^{n,m}\approx\mathbb{R}^{n}\times D^{m}\) is paracompact and Hausdorff as a closed subset of \(\mathbb{R}^{n}\times\mathbb{R}^{m}\), and so is its closed subset \(\mathbb{S}^{n-1,m}\). Thus the coproducts \(\prod\limits_{j\in J_{n}}\mathbb{D}^{n,m_{n}(j)}\) and \(\prod\limits_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\) are paracompact and Hausdorff. Secondly, since \(i_{n}:\coprod\limits_{j\in I_{n}}\mathbb{S}^{n-1,m_{n}(j)}\leftrightarrow\coprod \limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\) is a closed embedding in Topology, the pushout \(X^{(n)}\) of \(i_{n}\) and \(\hat{i}_{n}:X^{(n-1)}\to X^{(n)}\) is also paracompact and Hausdorff. Finally, since "paracompact and Hausdorff"-ness is preserved under taking colimit of closed embeddings, \(X=\mathsf{colim}\;X^{(n)}\) is paracompact and Hausdorff. We then consider the inclusion map \(\hat{i}_{n}:X^{(n-1)}\hookrightarrow X^{(n)}\), \(n\geq 0\). **Proposition 4.2**.: _For each \(n\geq 0\), the map \(\hat{i}_{n}:X^{(n-1)}\hookrightarrow X^{(n)}\) is an induction._ Proof.: Assume that \(P:U\to X^{(n)}\) is a plot with its image contained in \(\operatorname{Im}(\hat{i}_{n})\). Since there is a subduction \(X^{(n-1)}\amalg\coprod\limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)} \twoheadrightarrow X^{(n)}\) by the definition of \(X^{(n)}\), there is an open cover \(\{U_{\alpha}\}\) of \(U\) such that \(P|_{U_{\alpha}}\) can be pulled back to a plot from \(U_{\alpha}\) to either \(X^{(n-1)}\) or \(\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\). In the first case, \(P|_{U_{\alpha}}\) is pulled back to a plot \(P_{\alpha}:U_{\alpha}\to X^{(n-1)}\) as \(P|_{U_{\alpha}}=\hat{i}_{n^{\circ}}P_{\alpha}\). In the second case, \(P|_{U_{\alpha}}\) is pulled back to a plot \(P_{\alpha}:U_{\alpha}\to\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\) as \(P|_{U_{\alpha}}=\hat{h}_{n^{\circ}}P_{\alpha}\). Then \(\operatorname{Im}(P_{\alpha})\subset(\hat{h}_{n})^{-1}(\operatorname{Im}( \hat{i}_{n}))=\mathbb{S}^{n-1,m_{n}(j)}\) in \(\mathbb{D}^{n,m_{n}(j)}\). 
Since \(i_{n}:\mathbb{S}^{n-1,m_{n}(j)}\hookrightarrow\mathbb{D}^{n,m_{n}(j)}\) is an induction, we may assume that \(P_{\alpha}:U_{\alpha}\to\mathbb{S}^{n-1,m_{n}(j)}\) is a plot in \(\mathbb{S}^{n-1,m_{n}(j)}\), and hence we obtain \(P|_{U_{\alpha}}=\hat{h}_{n^{\circ}}P_{\alpha}=\hat{i}_{n^{\circ}}h_{n^{\circ}}P _{\alpha}\). In either case, \(P|_{U_{\alpha}}\) is pulled back to a plot in \(X^{(n-1)}\). Thus \(\hat{i}_{n}\) is an induction. A similar but an easier argument to the above shows the following. **Proposition 4.3**.: _For each \(n\geq 0\), the map \(\operatorname{Int}\hat{i}_{n}:\hat{X}^{(n-1)}\hookrightarrow\hat{X}^{(n)}\) is an induction._ ## 5. Smooth Functions **Theorem 5.1**.: _For any open covering \(\mathcal{U}\) on a fat CW complex, there is a smooth partition of unity subordinate to \(\mathcal{U}\)._ Proof.: By Proposition 4.1, we may assume that \(\mathcal{U}\) is of locally finite. Let \(X\) be a fat CW complex with fat skeletons \(\{X^{(n)}\}\), and \(\mathcal{U}_{n}=\{U\cap X^{(n)}\mid U\in\mathcal{U}\}\). We show that there is a smooth partition of unity on \(X^{(n)}\) subordinate to \(\mathcal{U}_{n}\) by induction on \(n\geq 0\). (Case \(n=0\)) Since \(D^{m}\) is a manifold with boundary, we have nothing to do. (Case \(n>0\)) By induction hypothesis, there is a smooth partition of unity \(\{\rho_{U}\,\}_{U\in U}\) on \(X^{(n-1)}\) subordinate to \(\mathcal{U}_{n-1}\). By pulling back \(\rho_{U}\)'s by \(h_{n}^{j}:\mathbb{S}^{n-1,m_{n}(j)}\to X^{(n-1)}\) the restriction of \(h_{n}:\coprod\limits_{j\in I_{n}}\mathbb{S}^{n-1,m_{n}(j)}\to X^{(n-1)}\) to \(\mathbb{S}^{n-1,m_{n}(j)}\), we obtain a smooth partition of unity \(\{\rho_{U^{\circ}}h_{N}^{j}\}_{U\in U}^{j}\) on \(\mathbb{S}^{n-1,m_{n}(j)}\), \(j\in J_{n}\) subordinate to \(\mathcal{U}_{n}^{j}=\{h_{n}^{-1}(U)\cap\mathbb{S}^{n-1,m_{n}(j)}\mid U\in \mathcal{U}\}\) of \(\mathbb{S}^{n-1,m_{n}(j)}\), such that we obtain \(\operatorname{supp}\rho_{U^{\circ}}h_{n}^{j}\subset h_{n}^{-1}(U)\). Then we take a smooth extension \(\hat{\rho}_{U}^{j}:\mathbb{D}^{n,m_{n}(j)}\to\mathbb{R}\) of \(\rho_{U^{\circ}}h_{n}^{j}\) such that \(\operatorname{supp}\hat{\rho}_{U}^{j}\subset\hat{h}_{n}^{-1}(U)\). We also have a neighbourhood \(V_{j}\) of \(\mathbb{S}^{n-1,m_{n}(j)}\) in \(\mathbb{D}^{n,m_{n}(j)}\) on which \(\sum\limits_{V\in U}\hat{\rho}_{V}^{j}\neq 0\). Let \(\hat{\mathcal{U}}_{n}^{j}=\{\,\hat{h}_{n}^{-1}(U)\cap\mathbb{D}^{n,m_{n}(j)}\mid U \in\mathcal{U}\,\}\) an open cover of \(\mathbb{D}^{n,m_{n}(j)}\). Since \(\mathbb{D}^{n,m_{n}(j)}\) is a manifold with boundary, we have a smooth partition of unity \(\{\hat{\sigma}_{U}\}_{U\in\mathcal{U}}\) on \(\mathbb{D}^{n,m_{n}(j)}\) subordinate to \(\hat{\mathcal{U}}_{n}^{j}\) so that \(\operatorname{supp}\hat{\sigma}_{U}\subset\hat{h}_{n}^{-1}(U)\cap\mathbb{D}^{ n,m_{n}(j)}\) and \(\sum\limits_{V\in\mathcal{U}}\hat{\sigma}_{V}\neq 0\). 
Since \(\mathcal{V}_{j}=\{\,V_{j},\mathbb{D}^{n,m_{n}(j)}\setminus\mathbb{S}^{n-1,m_{ n}(j)}\,\}\) is an open covering of \(\mathbb{D}^{n,m_{n}(j)}\), and hence there is a smooth partition of unity \(\{\,\chi_{1},\chi_{2}\,\}\) subordinate to \(\mathcal{V}_{j}\): \[\chi_{1}+\chi_{2}=1,\qquad\begin{cases}\chi_{1}\,:\,\mathbb{D}^{n,m_{n}(j)} \to\mathbb{R},&\operatorname{supp}\chi_{1}\,\subset\mathbb{D}^{n,m_{n}(j)} \setminus\mathbb{S}^{n-1,m_{n}(j)},\\ \chi_{2}\,:\,\mathbb{D}^{n,m_{n}(j)}\to\mathbb{R},&\operatorname{supp}\chi_{2} \subset V_{j},\end{cases}\] Using the above functions, we define smooth functions \(\sigma_{U}^{j}\) on \(\mathbb{D}^{n,m_{n}(j)}\) as follows. \[\hat{\sigma}_{U}^{j}(\mathrm{x})=\begin{cases}\chi_{1}(\mathrm{x})\cdot\hat{ \sigma}_{U}^{j}(\mathrm{x})+\chi_{2}(\mathrm{x})\cdot\hat{\sigma}_{U}^{j}( \mathrm{x}),&\mathrm{x}\in V_{j},\\ \hat{\sigma}_{U}^{j}(\mathrm{x}),&\mathrm{x}\in\mathbb{D}^{n,m_{n}(j)}\setminus \operatorname{supp}\chi_{2}.\end{cases}\] Hence \(\operatorname{supp}\hat{\sigma}_{U}^{j}\subset\operatorname{supp}\hat{\sigma }_{U}\cup\operatorname{supp}\hat{\rho}_{U}^{j}\subset\hat{h}_{n}^{-1}(U)\) and \(\sum\limits_{V\in\mathcal{U}}\hat{\sigma}_{V}^{j}\neq 0\). Then we define \[\sigma_{U}^{j}(\mathrm{x})=\frac{\hat{\sigma}_{U}^{j}(\mathrm{x})}{\sum \limits_{V\in\mathcal{U}}\hat{\sigma}_{V}^{j}(\mathrm{x})},\quad\mathrm{x} \in\mathbb{D}^{n,m_{n}(j)},\] which gives a smooth partition of unity on \(\mathbb{D}^{n,m_{n}(j)}\) subordinate to \(\hat{\mathcal{U}}_{n}^{j}\) and is also an extension of a smooth partition of unity \(\{\rho_{U}{}^{\circ}h_{n}^{j}\}_{U\in\mathcal{U}}\) on \(\mathbb{S}^{n-1,m_{n}(j)}\) subordinate to \(\mathcal{U}_{n}^{j}\). Smooth maps \(\sigma_{U}^{j}\), \(j\in J_{n}\) and \(\rho_{U}\) are compatible data for us to obtain a smooth partition of unity on the pushout \(X^{(n)}\). **Corollary 5.2**.: _For a fat CW complex, the de Rham theorem holds._ Let \(X_{0}^{(n-1)}=X^{(n)}\setminus\hat{h}_{n}(\coprod_{j\in J_{n}}(\mathbb{D}^{n,m _{n}(j)}\setminus\mathbb{S}_{0}^{n-1,m_{n}(j)}))\). Since \(\mathbb{D}^{n,m_{n}(j)}\setminus\mathbb{S}_{0}^{n-1,m_{n}(j)}\) is compact in \(\mathbb{D}^{n,m_{n}(j)}\), so is \(\hat{h}_{n}(\coprod_{j\in J_{n}}(\mathbb{D}^{n,m_{n}(j)}\setminus\mathbb{S}_ {0}^{n-1,m_{n}(j)}))\) in \(X^{(n)}\), and hence \(X_{0}^{(n-1)}\) is \(D\)-open in \(X^{(n)}\). Since \(X_{0}^{(n-1)}=X^{(n-1)}\setminus\hat{h}_{n}(\coprod_{j\in J_{n}}\hat{\sigma }_{0}\mathbb{S}^{n-1,m_{n}(j)})\subset X^{(n-1)}\), \(X_{0}^{(n-1)}\) is \(D\)-open in \(X^{(n-1)}\), too. Following [10], we say that a diffeological space \(X\) has enough many smooth functions, if the \(D\)-topology of \(X\) has an open base of the form \(\pi^{-1}(\{0,1\})\), where \(\pi\) is a smooth function on \(X\). For example, by J. Watts [20] and P. I-Zemmour [12], a smooth manifold has enough many smooth functions. **Theorem 5.3**.: _A fat CW complex has enough many smooth functions._ Proof.: Let \(X\) be a fat CW complex and \(U\) be an open neighbourhood of an element \(\mathsf{a}\in X\). Then, since \(X\) is Hausdorff, \(V=X\setminus\{\mathsf{a}\}\) is open, and hence \(\mathbb{U}=\{U,V\}\) is an open covering of \(X\). Hence, by Theorem 5.1, there is a smooth partition of unity \(\{\rho_{U},\rho_{V}\}\) subordinate to \(\mathbb{U}\). Put \(f=\rho_{U}\), and we have done. **Definition 5.4**.: A parametrization \(P\,:\,U\to X\) of a diffeological space \(X\) is said to be a pre-plot if it satisfies the following condition. 
* for any smooth function \(f\,:\,X\to\mathbb{R}\), \(f\circ P\) is smooth in the ordinary sense. **Proposition 5.5**.: _Let \(X\) be a fat CW complex, and \(n\geq 0\). If a parametrization \(P\,:\,U\to X^{(n)}\) is a pre-plot, then \(P\) is continuous w.r.t. \(D\)-topology._ Proof.: It is sufficient to show that, for any open subset \(O\subset X\), \(P^{-1}(O)\) is open in \(U\). Assume \(a\in P^{-1}(O)\), and hence \(P(a)\in O\). Then by Theorem 5.3, there exists a smooth function \(f\,:\,X^{(n)}\to\mathbb{R}\) such that \(f\,{\circ}\,P(a)=1\) and \(\operatorname{supp}f\subset O\). Since \(f\,{\circ}\,P\) is smooth, it is continuous and \((f\,{\circ}\,P)^{-1}(0,\infty)\) is open in \(U\) containing \(a\). In other words, \(a\) is an interior point of \(P^{-1}(O)\). Thus \(P^{-1}(O)\) is open in \(U\). ## 6. Further Properties In this section, \(X\) stands for a fat CW complex with skeleta \(\{X^{(n)}\}\). **Definition 6.1**.: Let us assume that \(X\) is equipped with attaching maps \(h_{n}\), \(n\geq 0\). 1. A fat CW complex \(X\) is said to be _good_, if \(h_{n}\) is a local subduction onto its image which is a \(D\)-open subset of \(X^{(n-1)}\) for every \(n\geq 0\). In this case, we say that \(X\) is a good smooth CW complex. 2. A fat CW complex \(X\) is said to be _regular_, if \(h_{n}\) is an induction onto its image which is a \(D\)-open subset of \(X^{(n-1)}\) for every \(n\geq 0\). In this case, we say that \(X\) is a regular CW complex. **Proposition 6.2**.: _If \(X\) is good, then \(\hat{X}^{(n)}\) is \(D\)-open in \(X^{(n)}\), \(n\geq-1\)._ Proof.: Case \(n=-1\): the proposition is clear by definition. Case \(n\geq 0\): assume that the proposition is true up to the case \(n-1\). Then, by the definition of \(X^{(n)}\), the inverse image of \(\hat{X}^{(n)}\subset X^{(n)}\) by the subduction \(X^{(n-1)}\twoheadrightarrow X^{(n)}\) is given as \(\coprod_{j\in I_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\)\(\coprod\hat{X}^{(n-1)}\cup h_{n}(\coprod_{j\in I_{n}}(\mathbb{S}^{n-1,m_{n}(j)}\cap \operatorname{Int}\mathbb{D}^{n,m_{n}(j)}))\) by Remark 3.2. Clearly, \(\coprod_{j\in I_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\) and \(\hat{X}^{(n-1)}\) are \(D\)-open respectively in \(\coprod_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\) and \(X^{(n-1)}\) by induction hypothesis. Since a local subduction is an open map, \(h_{n}(\coprod_{j\in I_{n}}(\mathbb{S}^{n-1,m_{n}(j)}\cap\operatorname{Int} \mathbb{D}^{n,m_{n}(j)}))\) is \(D\)-open in \(X^{(n-1)}\), and hence the inverse image of \(\hat{X}^{(n)}\subset X^{(n)}\) by the subduction \(\coprod_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\)\(\coprod X^{(n-1)}\twoheadrightarrow X^{(n)}\) is open in \(\coprod_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\)\(\coprod X^{(n-1)}\). **Corollary 6.3**.: _If \(X\) is good, then \(\hat{X}^{(n-1)}\) is \(D\)-open in \(X^{(n)}\), \(n\geq 0\)._ We then consider the canonical inclusion map \(\iota_{n}\,:\,\hat{X}^{(n)}\hookrightarrow X^{(n)}\), \(n\,{\geq}\,-1\). **Proposition 6.4**.: _If \(X\) is good, then \(\iota_{n}\) is an induction, \(n\,{\geq}\,-1\)._ Proof.: We show this by induction on \(n\geq-1\). Case \(n=-1\): the statement is clear by definition. Case \(n\geq 0\): assume that the canonical inclusion \(\iota_{n-1}\,:\,\hat{X}^{(n-1)}\hookrightarrow X^{(n-1)}\) is an induction. Let \(P\,:\,U\to X^{(n)}\) be a plot whose image is in \(\hat{X}^{(n)}=\hat{X}^{(n-1)}\cup\operatorname{Im}(\operatorname{Int}\hat{h}_ {n})\). 
Since there is a subduction \(\coprod\limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\coprod X^{(n-1)}\nrightarrow X ^{(n)}\), we may assume that \(P\) can be pulled back to either a plot \(P_{j}:U\rightarrow\mathbb{D}^{n,m_{n}(j)}\), \(j\in J_{n}\) or a plot \(P^{\prime}:U\to X^{(n-1)}\). There is an open cover of \(X^{(n-1)}\cap\hat{X}^{(n)}\) consisting of \(h_{n}(\mathbb{S}^{n-1,m_{n}(j)}\cap\operatorname{Int}\mathbb{D}^{n,m_{n}(j)})\), \(j\in J_{n}\) and \(\hat{X}^{(n-1)}\). In the case when \(P=\operatorname{Int}\hat{h}_{n}{{}^{\circ}}P_{j}\), we have \(\operatorname{Im}(P_{j})\subset(\operatorname{Int}\hat{h}_{n})^{-1}(\hat{X}^{ (n)})=\coprod\limits_{j\in I_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\) and \(P^{\prime}\) is a plot in \(\coprod\limits_{j\in I_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\), and hence \(P\) is a plot in \(\hat{X}^{(n)}\). In the case when \(P=\hat{l}_{n}{{}^{\circ}}P^{\prime}:U\xrightarrow{P^{\prime}}X^{(n-1)} \xrightarrow{l_{n}}X^{(n)}\), we may also assume that \(\operatorname{Im}(P^{\prime})\subset h_{n}(\mathbb{S}^{n-1,m_{n}(j)}\cap \operatorname{Int}\mathbb{D}^{n,m_{n}(j)})\) for some \(j\in J_{n}\) or \(\operatorname{Im}(P^{\prime})\subset\hat{X}^{(n-1)}\). Since \(h_{n}\) is a subduction and \(t_{n-1}:\hat{X}^{(n-1)}\nrightarrow X^{(n-1)}\) is an induction by the induction hypothesis, we may assume that \(P^{\prime}\) can be pulled back to either \(\mathbb{S}^{n-1,m_{n}(j)}\cap\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\subset \operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\) or \(\hat{X}^{(n-1)}\). In either case, \(P\) is a plot in \(\hat{X}^{(n)}\). **Proposition 6.5**.: _If \(X\) is regular, then, for each \(n\geq 0\), the map \(\hat{h}_{n}:\coprod\limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\to X ^{(n)}\) induced from the induction \(h_{n}\) is also an induction._ Proof.: Assume that \(P:U\to X^{(n)}\) is a plot whose image is in \(\operatorname{Im}(\hat{h}_{n})\). Then there is an open cover \(\{U_{\alpha}\}\) of \(U\) such that \(P|_{U_{\alpha}}\) can be pulled back to a plot from \(U_{\alpha}\) to either \(X^{(n-1)}\) or \(\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\), by the definition of \(X^{(n)}\). In the first case, \(P|_{U_{\alpha}}\) is pulled back to a plot \(P_{\alpha}:U_{\alpha}\to X^{(n-1)}\) as \(P|_{U_{\alpha}}=\hat{l}_{n}{{}^{\circ}}P_{\alpha}\). Then \(\operatorname{Im}(P_{\alpha})\subset(\hat{l}_{n})^{-1}(\operatorname{Im}( \hat{h}_{n}))=\operatorname{Im}(h_{n})\) in \(X^{(n-1)}\). Since \(h_{n}\) is an induction, \(P_{\alpha}\) can be pulled back to a plot \(P_{\alpha}^{\prime}:U_{\alpha}\rightarrow\coprod\limits_{j\in I_{n}}\mathbb{S }^{n-1,m_{n}(j)}\) as \(P_{\alpha}=h_{n}{{}^{\circ}}P_{\alpha}^{\prime}\) which is a plot in \(\coprod\limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\). In the second case, \(P|_{U_{\alpha}}\) is pulled back to a plot \(P_{\alpha}:U_{\alpha}\rightarrow\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\) as \(P|_{U_{\alpha}}=\hat{h}_{n}{{}^{\circ}}P_{\alpha}\), which is, of course, a plot in \(\coprod\limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\). In each case, \(P|_{U_{\alpha}}\) is pulled back to a plot in \(\coprod\limits_{j\in I_{n}}\mathbb{D}^{n,m_{n}(j)}\). Thus \(\hat{h}_{n}\) is an induction. 
**Proposition 6.6**.: _If \(X\) is regular, then, for each \(n\geq 0\), the canonical inclusion \(\operatorname{Int}\hat{h}_{n}:\coprod\limits_{j\in I_{n}}\operatorname{Int} \mathbb{D}^{n,m_{n}(j)}\nrightarrow\hat{X}^{(n)}\) is an induction._ Proof.: For a plot \(P:U\to\hat{X}^{(n)}\) with image in \(\operatorname{Im}(\operatorname{Int}\hat{h}_{n})\), there is an open cover \(\{U_{\alpha}\}\) of \(U\) such that \(P|_{U_{\alpha}}\) is pulled back to a plot on either \(\hat{X}^{(n-1)}\) or \(\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\). In the first case, \(P|_{U_{\alpha}}\) is pulled back to a plot \(P_{\alpha}:U_{\alpha}\rightarrow\hat{X}^{(n-1)}\) as \(P|_{U_{\alpha}}=\operatorname{Int}\hat{l}_{n}{{}^{\circ}}P_{\alpha}\). Then \(\operatorname{Im}(P_{\alpha})\subset\operatorname{Int}\hat{l}_{n}{{}^{-1}}( \operatorname{Im}(\operatorname{Int}\hat{h}_{n}))=\operatorname{Im}( \operatorname{Int}h_{n})=h_{n}(\coprod\limits_{j\in I_{n}}\operatorname{Int} \mathbb{S}^{n-1,m_{n}(j)})\) in \(X^{(n-1)}\). Since \(h_{n}\) is an induction, \(P_{\alpha}\) can be pulled back to a plot \(P_{\alpha}^{\prime}:U_{\alpha}\rightarrow\coprod\limits_{j\in I_{n}} \operatorname{Int}\mathbb{S}^{n-1,m_{n}(j)}\) as \(P_{\alpha}=h_{n}{{}^{\circ}}P_{\alpha}^{\prime}\), and hence \(P_{\alpha}^{\prime}\) is a plot in \(\coprod\limits_{j\in I_{n}}\operatorname{Int}\mathbb{S}^{n-1,m_{n}(j)}\). In the second case, \(P|_{U_{\alpha}}\) is pulled back to a plot \(P_{\alpha}:U_{\alpha}\rightarrow\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\) as \(P|_{U_{\alpha}}=\operatorname{Int}\hat{h}_{n}{{}^{\circ}}P_{\alpha}\), which is, of course, a plot in \(\coprod\limits_{j\in I_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\). In either case, \(P|_{U_{a}}\) can be pulled back to a plot in \(\coprod_{j\in J_{n}}\operatorname{Int}\mathbb{D}^{n,m_{n}(j)}\supset\coprod_{j\in J _{n}}\operatorname{Int}\mathbb{S}^{n-1,m_{n}(j)}\), and hence \(\operatorname{Int}\hat{h}_{n}\) is an induction. ## 7. Main Results By J. Watts [20], a manifold with corners is a Frolicher space, which can be embedded into Diffeology as a reflexive diffeological space. Thus spheres and disks are all reflexive in Diffeology. The following is our main result. **Theorem 7.1**.: _A regular CW complex of finite dimension is reflexive._ Proof.: Let \(X\) be a fat CW complex satisfying \(X=X^{(n)}\), for some \(n\geq 0\). We show the statement by induction on \(n\geq 0\). In the case when \(n=0\), we have nothing to do. In the case when \(n>0\), we may assume that we have done up to dimension \(n-1\), and assume that \(P:U\to X\) is a parametrization satisfying that, for every smooth function \(f:X\to\mathbb{R}\), \(f\circ P:U\to\mathbb{R}\) is a smooth function. Since \(X=X^{(n)}\), an open covering of \(X\) is given by \(\mathbb{D}^{n,m_{n}(j)}\) and \(X^{n}\setminus h_{n}(\coprod_{j\in J_{n}}\mathbb{D}^{n,m_{n}(j)}\setminus \mathbb{S}_{0}^{n-1,m_{n}(j)})=X^{(n-1)}\setminus h_{n}(\coprod_{j\in J_{n}} \delta_{0}\mathbb{S}^{n-1,m_{n}(j)})\). Then by Proposition 5.5, we obtain that there is an open covering \(\mathcal{U}=\{U_{\underline{a}}\}\) of \(U\) such that \(P_{\alpha}=P|_{U_{\underline{a}}}:U_{\underline{a}}\to X\) goes through either \(X^{(n-1)}\) or \(\mathbb{D}^{n,m_{n}(j)}\) for some \(j\in J_{n}\). 
In case when \(P_{\alpha}\) can be described as a composition \(\hat{l}_{n}\circ P_{\alpha}^{\prime}:U_{\alpha}\xrightarrow{P_{\alpha}^{ \prime}}X^{(n-1)}\xrightarrow{\hat{l}_{n}}X^{(n)}\), we have that \(P_{\alpha}^{\prime}:U_{\alpha}\to X^{(n-1)}\) is smooth: for a given smooth function \(f:X^{(n-1)}\to\mathbb{R}\), the composition \(f\circ h_{n}:\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\to\mathbb{R}\) is also a smooth function. For each \(j\in J_{n}\), \(f_{j}=f\circ h_{n}|_{\mathbb{S}^{n-1,m_{n}(j)}}:\mathbb{S}^{n-1,m_{n}(j)}\to \mathbb{R}\) can be smoothly extendable on \(\mathbb{D}^{n,m_{n}(j)}\), since \(\mathbb{S}^{n-1,m_{n}(j)}\) is a closed subset of \(\mathbb{D}^{n,m_{n}(j)}\) as a smooth function \(\hat{f}_{j}:\mathbb{D}^{n,m_{n}(j)}\to\mathbb{R}\). Since \(\hat{f}_{j}\) and \(f\) coincide with each other on \(\mathbb{S}^{n-1,m_{n}(j)}\), they define a smooth function \(\hat{f}:X^{(n)}\to\mathbb{R}\) whose restriction to \(X^{(n-1)}\) is \(f\). Thus \(\hat{f}\circ P_{\alpha}=f\circ P_{\alpha}^{\prime}:U_{\alpha}\to\mathbb{R}\) is smooth. Since \(X^{(n-1)}\) is reflexive, \(P_{\alpha}:U_{\alpha}\to X^{(n-1)}\) is a plot. In case when \(P_{\alpha}\) can be described as a composition \(\hat{h}_{n}\circ P_{\alpha}^{\prime}:U_{\alpha}\xrightarrow{P_{\alpha}^{ \prime}}\mathbb{D}^{n,m_{n}(j_{0})}\xrightarrow{\hat{h}_{n}}X^{(n)}\) for some \(j_{0}\in J_{n}\), we have that \(P_{\alpha}^{\prime}:U_{\alpha}\to\mathbb{D}^{n,m_{n}(j_{0})}\) is smooth: for any \(x\in\mathbb{D}^{n,m_{n}(j_{0})}\), there is an open neighbourhood \(O\subset\mathbb{D}^{n,m_{n}(j_{0})}\) of \(x\). We choose a smaller open neighbourhood \(V\) and \(W\) of \(x\) such that \(x\in W\subset\operatorname{Cl}W\subset V\subset\operatorname{Cl}V\subset O\) where \(\operatorname{Cl}V\) is a compact subset. Let \(W^{\prime}=(P_{\alpha}^{\prime})^{-1}(W)\) so that \(x\in P_{\alpha}^{\prime}(W^{\prime})\). For any smooth function \(f:O\to\mathbb{R}\), there is a smooth function \(f^{\prime}:O\to\mathbb{R}\) such that \(f^{\prime}|_{W}=f|_{W}\) and \(\operatorname{supp}f^{\prime}\subset\operatorname{Cl}V\). Then we get a smooth function \(\hat{f}^{\prime}:\coprod_{j\in J_{n}}\mathbb{D}^{n,m_{n}(j)}\to\mathbb{R}\) as its zero extension: \[\hat{f}^{\prime}(x)=\begin{cases}f^{\prime}(x),&x\in O,\\ 0,&x\not\in\operatorname{supp}f^{\prime}.\end{cases}\] Then \(\hat{f}^{\prime}\circ\hat{l}_{n}:\coprod_{j\in J_{n}}\mathbb{S}^{n-1,m_{n}(j)}\to \mathbb{R}\) has also a compact support which is closed in the open subset \(\operatorname{Im}(h_{n})\) in \(X^{(n-1)}\), since \(X\) is regular. Hence \(\hat{f}^{\prime}\circ\hat{l}_{n}\) has its zero-extension \(\hat{f}_{0}\) on the entire \(X^{(n-1)}\) so as to satisfy \(\hat{f}^{\prime}{}_{\circ}\hat{l}_{n}=\hat{f}_{\circ}{}^{\circ}h_{n}\). Thus smooth functions \(\hat{f}^{\prime}\) and \(\hat{f}_{0}\) defines a smooth function \(\hat{f}:X^{(n)}\to\mathbb{R}\) such that \(\hat{f}{}_{\circ}\hat{h}_{n}|_{W}=f\). Thus \(f{}_{\circ}P^{\prime}_{\alpha}|_{W^{\prime}}=\hat{f}{}_{\circ}\hat{h}_{n}|_{ W^{\circ}}P^{\prime}_{\alpha}|_{W^{\prime}}=\hat{f}{}_{\circ}P_{\alpha}|_{W^{ \prime}}\) is smooth by the hypothesis, and hence \(P^{\prime}_{\alpha}|_{W^{\prime}}\) is a plot. Since \(x\in\mathbb{D}^{n,m_{n}(j_{0})}\) can be chosen arbitrary, \(P^{\prime}_{\alpha}\) is smooth, and so is \(P_{\alpha}\). Thus \(P\) is a plot. In the above theorem, the finiteness condition on dimensions is essential, for the reflexivity is not preserved under taking colimits. Now, let us recall [Iwaar, Example 6.5]. 
**Example 7.2**.: The thin CW complex \(\mathbb{I}\) is not reflexive. Let \(n>0\), and let \(\mathbb{TD}^{n}=\mathbb{R}^{n}/\pi^{n}_{\text{set}}\), where \(\pi^{n}_{\text{set}}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is defined by \[\pi^{n}_{\text{set}}(\mathsf{v})=\left\{\begin{array}{ll}\mathsf{v},&\| \mathsf{v}\|\leq 1\\ (1/\|\mathsf{v}\|)\cdot\mathsf{v},&\|\mathsf{v}\|\geq 1\end{array}\right\} \in D^{n}\subset\mathbb{R}^{n},\] which is a thin CW complex with one \(0\)-cell and one \(n\)-cell. Then \(\mathbb{TD}^{n}\) is topologically the same as an \(n\)-sphere \(S^{n}\). Similarly to [Iwaar, Example 6.5], we obtain the following. **Proposition 7.3**.: _The thin CW complex \(\mathbb{TD}^{n}\) is not reflexive._ _Proof: By definition, there is a canonical subduction \(\pi^{n}:\mathbb{R}^{n}\twoheadrightarrow\mathbb{R}^{n}/\pi^{n}_{\text{set}}= \mathbb{TD}^{n}\). Let \(\phi_{0}:(-1,1)\to\mathbb{R}\), \(\psi_{0}:\mathbb{R}\to\mathbb{R}^{n}\) and \(f:(-1,1)\to\mathbb{TD}^{n}\) be maps defined by_ \[f=\pi_{n}{}^{\circ}\psi_{0}{}^{\circ}\phi_{0},\qquad\phi_{0}(t)=\sqrt{\max\{ 0,t\}},\qquad\psi_{0}(t)=(1-t)\cdot\mathsf{e},\] _for a fixed vector \(\mathsf{e}={}^{t}(1,0,...,0)\in S^{n}\subset D^{n}\subset\mathbb{R}^{n}\). Then we see that \(f\) is not smooth at \(t=0\). In fact, if \(f\) is smooth, \(f\) can be expressed as \(f=\pi_{n}{}^{\circ}\phi\) near \(t=0\) by a smooth map \(\phi:\mathbb{R}\to\mathbb{R}^{n}\). Then we have \(\phi(t)=(1-\sqrt{t})\cdot\mathsf{e}\) for \(t>0\), and hence \(\phi^{\prime}(0)=\lim_{t\to 0}\phi^{\prime}(t)\)\(=(-\infty,0,...,0)\). It contradicts to the smoothness of \(\phi\) at \(t=0\). On the other hand, for any smooth function \(\mathsf{g}:\mathbb{TD}^{n}\to\mathbb{R}\), the composition \(\psi=\mathsf{g}{}^{\circ}\pi_{n}{}^{\circ}\psi_{0}:\mathbb{R}\to\mathbb{R}\) is also smooth on \(\mathbb{R}\), and is constant on \((-\infty,0]\), as well. Thus we have_ \[\lim_{t\to 0}\psi^{(r)}(t)=\psi^{(r)}(0)=\lim_{t\to 0}\psi^{(r)}(t)=0\quad \text{for all}\quad r\geq 1.\] _By applying L'Hopital's rule many times, one obtains that \(\lim_{t\to 0}\psi^{(r)}(t)/t^{n}=0\) for all \(r\), \(n\geq 1\). Then by induction, one can express \((\psi{}_{\circ}f)^{(r)}(t)\) as the following form:_ \[(\psi{}_{\circ}f)^{(r)}(t)=\sum_{j=0}^{r}P_{r,j}(1/\sqrt{t})\cdot\psi^{(j)}( \sqrt{t}),\,t>0,\quad\text{for all}\,\,r>1\text{,}\] _where \(P_{r,j}(x)\) is a polynomial on \(x\). Again by applying L'Hopital's rule, one obtains that \((\psi{}_{\circ}f)^{(r)}(0)\) exists and equals to \(\lim_{t\to 0}(\psi{}_{\circ}f)^{(r)}(t)=0\) for all \(r\geq 1\), and hence \(\psi{}_{\circ}f\) is smooth at \(t=0\). Thus \(f\in\mathcal{D}^{\prime}(\mathbb{TD}^{n})\) while \(f\not\in\mathcal{D}(\mathbb{TD}^{n})\), where we denote \(\mathcal{D}^{\prime}(X)=\{P\in\mathcal{N}(X)\mid g{}_{\circ}P\) is smooth for any smooth function \(g:X\to\mathbb{R}\}\supset\mathcal{D}(X)\). So, \(\mathbb{TD}^{n}\) is not reflexive (outside \(\operatorname{Int}D^{n}\subset\mathbb{TD}^{n}\)). In contrast, \(\mathbb{TD}^{n}\) is reflexive at any point in \(\operatorname{Int}D^{n}\). 
_ **Corollary 7.4**.: _A thin CW complex of positive dimension is not reflexive._ **Theorem 7.5**.: _A closed manifold is a regular CW complex._ Proof.: In view of the standard Morse theory for a closed manifold (see Tamura [13]), it is sufficient to show that a smoothing of \(N=M\cup_{h}D^{n}\times D^{m}\) a manifold with boundary obtained by attaching a handle \(D^{n}\times D^{m}\) on \(M\), where \(h:S^{n-1}\times D^{m}\hookrightarrow M\) is a smooth embedding in Manifold, is diffeomorphic to \(N^{\prime}=M\cup_{h^{\prime}}\mathbb{D}^{n,m}\) a manifold obtained by attaching a smooth handle \(\mathbb{D}^{n,m}\) on \(M\), where \(h^{\prime}:S^{n-1,m}\to M\) is a diffeological embedding in Diffeology, so that we obtain the following pushout diagram in Diffeology. First, we take a collar neighbourhood \(\partial M\times(-\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}]\subset M\) of \(\partial M=\partial M\times\{\nicefrac{{1}}{{2}}\}\) in \(M\). Then we have a submanifold \(M_{a}=M\setminus\{(x,t)\in\partial M\times(-\nicefrac{{1}}{{2}},\nicefrac{{1 }}{{2}}]\mid t>a\,\}\) of \(M\), diffeomorphic to \(M\), for \(a\in[-\nicefrac{{1}}{{4}},\nicefrac{{1}}{{2}})\). We smoothly extend \(h:S^{n-1}\times D^{m}\to\partial M\) to a diffeological embedding \(h_{3}:S^{n-1}\times D^{m}_{3}\to\partial M\), where we denote \(D^{d}_{r}=\{v\in\mathbb{R}^{d}\mid\|v\|\leq r\,\}\) for \(d\in\mathbb{N}_{1}\) and \(r>0\), which can be obtained by using a diffeomorphism \(D^{m}_{3}\approx D^{m}\) if necessary. For \(a\in[-\nicefrac{{1}}{{4}},\nicefrac{{1}}{{2}})\) and \(b\in[1,3]\), there is also a diffeological embedding \(h_{a,b}:\partial D^{n}_{1-a}\times D^{m}_{b}=S^{n-1}\times D^{m}_{b}\to \partial M=\partial M_{a}\) as an extension of \(h\), obtained by restricting \(h_{3}\) to \(S^{n-1}\times D^{m}_{b}\). Then by definition, \(h_{1}=h\). Let \(N_{a,b}=M_{a}\cup_{h_{a}}D^{n}_{1-a}\times D^{m}_{b}\), which is diffeomorphic to \(N\) for \(a\in[-\nicefrac{{1}}{{4}},\nicefrac{{1}}{{2}})\) and \(b\in[1,3]\). In the case when \(b\leq 2\), we have the following open neighbourhood of \(\operatorname{Im}(h_{b})\) in \(N_{a,b+1}\): \[h_{b+1}(S^{n-1}\times\operatorname{Int}D^{m}_{b+1})\times(a-\nicefrac{{1}}{{4 }},a]\cup_{h_{b}}D^{n}_{1-a}\times D^{m}_{b}\] which is diffeomorphic to \(S^{n-1}\times(a-\nicefrac{{1}}{{4}},a]\times\operatorname{Int}D^{m}_{b+1} \cup D^{m}_{1-a}\times D^{m}_{b}\approx\mathbb{S}^{n-1}_{a}\times\mathbb{R}^{ m}\cup D^{n}_{1-a}\times D^{m}_{b}\) \(\subset\mathbb{R}^{n}\times\mathbb{R}^{m}\), where we denote \(\mathbb{S}^{n-1}_{a}=\mathbb{R}^{n}\setminus\operatorname{Int}D^{n}_{1-a}\). Thus \(N_{a,b}\) has a nebula consisting of three manifolds with boundary, \(M_{0}\setminus\operatorname{Im}(h_{a})\), \(\mathbb{S}^{n-1}_{a}\times\mathbb{R}^{m}\cup(D^{n}_{1-a}\setminus D^{n}_{1/2- a})\times D^{m}_{b}\) and \(\operatorname{Int}D^{n}_{1-a}\times D^{m}_{b}\) where the latter two are open subsets of \(\mathbb{S}^{n-1}\times\mathbb{R}^{m}\cup D^{n}_{1-a}\times D^{m}_{b}\). Hence we have the following pushout diagram in Diffeology: Let us denote by \(N_{b}=N_{-\nicefrac{{1}}{{4}},b}\). Then we may regard \(M_{a}\) and \(N_{b}\) as submanifolds of \(N_{a,b}\), and hence we obtain \(N_{a,b}=M_{a}\cup N_{b}\). 
Second, we define a smooth function \(\lambda_{\varepsilon}\) by \(\lambda_{\varepsilon}(t)=\lambda(\frac{t-\varepsilon}{1-2\varepsilon})\) for a fixed small \(\varepsilon>0\) (\(\varepsilon<\nicefrac{{1}}{{2}}\), so that \(\frac{1}{1-2\varepsilon}<\nicefrac{{1}}{{1}}/\nicefrac{{1}}{{10}}\)), and hence we obtain \(\lambda^{\prime}_{\varepsilon}(t)=\frac{1}{1-2\varepsilon}\lambda^{\prime}( \frac{t-\varepsilon}{1-2\varepsilon})\). Hence by Proposition 2.1, \(\lambda^{\prime}_{\varepsilon}(t)\) has the maximum value \(\frac{\ell(3/2)^{2}}{(1-2\varepsilon)\alpha}<2\) when \(\frac{t-\varepsilon}{1-2\varepsilon}=\nicefrac{{1}}{{2}}\), i.e, \(t=\nicefrac{{1}}{{2}}\). Let \(f\), \(g:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) be smooth functions defined for \((t,x)\in\mathbb{R}^{2}\) as follows: \[f(t,x)=x+t\cdot\lambda_{\varepsilon}(4-2x),\] \[g(t,x)=x-t\cdot\lambda_{\varepsilon}(2x).\] Then, if \(|t|\leq 1/4\), we have \(\frac{\delta}{\delta x}f(t,x)=1-2t\cdot\lambda_{\varepsilon}^{\prime}(4-2x)>1-4 |t|\geq 0\) and \(\frac{\delta}{\delta x}g(t,x)=1-2t\cdot\lambda_{\varepsilon}^{\prime}(2x)>1-4 |t|\geq 0\). Thus both \(f\) and \(g\) are strictly increasing on \(x\in\mathbb{R}\). Let \(\widehat{N}=\delta N_{0,1}\times[-1/4,1/4]\). Then, following [13], we define a smooth map \(\Psi:\widehat{N}\to N_{1/4,5/4}\) for \(t\in[-1/4,1/4]\), \(x\in\delta M\setminus h_{2}(S^{n-1}\times D_{2}^{m})\), \(y=h_{2}(u,v)\), \((u,v)\in S^{n-1}\times(\operatorname{Int}D_{2}^{m}\setminus D^{m})\) and \(z=(u^{\prime},v^{\prime})\in D^{n}\times S^{m-1}\) as follows: \[\Psi(x,t)=(x,t)\in\delta M\times[-1/4,1/4]\subset M_{1/4},\] \[\Psi(y,t)=\begin{cases}((1-t)\cdot u,v)\in(D^{n}\setminus \operatorname{Int}D_{\gamma/8}^{n})\times\operatorname{Int}D_{2}^{m}\subset M_ {1/4},&v\in\operatorname{Int}D_{2}^{m}\setminus D_{(4-\varepsilon)/2}^{m},\\ ((1-t)\cdot u,\frac{f(t,\|v\|)}{\|v\|}\cdot v)\in(D^{n}\setminus\operatorname{ Int}D_{\gamma/8}^{n})\times\operatorname{Int}D_{2}^{m}\subset M_{1/4},&v\in \operatorname{Int}D_{2}^{m}\setminus D_{3/2}^{m},\\ ((1-t)\cdot u,v+\frac{t}{\|v\|}\cdot v)\in(D^{n}\setminus\operatorname{Int}D_{ \gamma/8}^{n})\times\operatorname{Int}D_{2}^{m}\subset M_{1/4},&v\in \operatorname{Int}D_{(3+\varepsilon)/2}^{m}\setminus D^{m},\end{cases}\] \[\Psi(z,t)=\begin{cases}(u^{\prime}-\frac{t}{\|v\|}\cdot u^{ \prime},(1+t)\cdot v^{\prime})\in D^{n}\times\operatorname{Int}D_{5/4}^{m} \subset N_{5/4},&u^{\prime}\in D^{n}\setminus D_{(1-\varepsilon)/2}^{n},\\ (\frac{g(t,\|u^{\prime}\|)}{\|u^{\prime}\|}\cdot u^{\prime},(1+t)\cdot v^{ \prime})\in D^{n}\times\operatorname{Int}D_{5/4}^{m}\subset N_{5/4},&u^{ \prime}\in\operatorname{Int}D_{1/2}^{n}\setminus\{0\},\\ (u^{\prime},(1+t)\cdot v^{\prime})\in D^{n}\times\operatorname{Int}D_{5/4}^{m} \subset N_{5/4},&u^{\prime}\in\operatorname{Int}D_{\epsilon/2}^{n}.\end{cases}\] Clearly, \(\Psi\) is a smooth monomorphism into \(M_{1/4}\cup N_{5/4}=N_{1/4,5/4}\), and so we regard \(\widehat{N}\) as a subspace of \(N_{1/4,5/4}\) by giving a pullback diffeology on it from \(\Psi\). 
Finally, a smooth function \(\kappa:\partial N=\partial M\setminus\operatorname{Im}\left(h\right)\cup D^{ n}\times S^{m-1}\to[-1/4,1/4]\) is defined by \(\kappa|_{\partial M}:\partial M\to\{0\}\subset[-1/4,1/4]\) and, for \(x=h_{2}(u,v)\in h_{2}(S^{n-1}\times D_{2}^{m})\supset h(S^{n-1}\times D^{m})\), \[\kappa(x)=\begin{cases}\phi(1-\|v\|)\leq\phi(0),&(u,v)\in\partial M\setminus h (S^{n-1}\times\operatorname{Int}D^{m})\subset S^{n-1}\times\mathbb{R}^{m},\\ \phi(\|u\|-1)\leq\phi(0),&(u,v)\in D^{n}\times S^{m-1}.\end{cases}\] By the property g) of \(\phi\), we have \(\kappa(x)=\phi(0)\in(0,1/4)\) on \(x\in h(S^{n-1}\times S^{m-1})\). By definition, we have \(\operatorname{Im}(\kappa)\subset(-1/4,1/4)\subset[-1/4,1/4]\). Thus \(N^{\prime}=N_{3/4}\cup\{(x,t)\in\widehat{N}\setminus N_{3/4}\mid t\leq\kappa (x)\}\) gives a smoothing of \(N_{3/4}\), which is nothing but a manifold with boundary obtained by attaching our smooth handle \(\mathbb{D}^{n,m}\) to \(M_{0}\) with \(\mathbb{S}^{n-1,m}\) pasted into a neighbourhood of \(\partial M_{0}\) in \(M_{0}\). **Conjecture 7.6**.: _The converse of Theorem 7.5 is true, i.e., any regular CW complex \(X\) with \(X=X^{(n)}=\hat{X}^{(n)}\) is a closed manifold._ **Conjecture 7.7**.: _A manifold with boundary is a regular CW complex._ ## Appendix A Proof of Theorem 2.6 Let \(\alpha\), \(\beta:[0,\infty)\times[-1,1]\to\mathbb{R}\) be smooth functions defined as follows: \[\alpha(u,v)=\nicefrac{{3}}{{2}}-p_{1}(u)\cdot(1-\cos(\nicefrac{{\pi}}{{2}} \cdot v)),\quad\beta(u,v)=1+\phi(\nicefrac{{3}}{{2}}\cdot u-1)\cdot s(v),\] where smooth functions \(p_{a}(x)\), \(a>1/2\), and an analytic function \(s(x)\) are defined by \[p_{a}(x)=\begin{cases}\frac{\phi(3/2\cdot x-a)}{x}>0,&x>0\\ 0,&x<\frac{(2a-1)}{3}\end{cases},\quad s(x)=\begin{cases}\frac{\sin(\pi/2\cdot x )}{x}\geq 0,&x\neq 0\\ \frac{\pi}{2}>0,&x=0\end{cases}.\] By definition, we have \(\beta(u,v)\geq 1\). By Proposition 2.2, we have \(\phi(3/2\cdot u-a)-\phi(a-3/2\cdot u)=3/2\cdot u-a\). If \(3/2\cdot u\geq a+1/2\), then \(\phi(a-3/2\cdot u)=0\) and \(\phi(3/2\cdot u-a)=3/2\cdot u-a\), and hence \(p_{a}(3/2\cdot u)<3/2\). If \(a\leq 3/2\cdot u\leq a+1/2\), then \(0\leq a-3/2\cdot u+1/2\leq 1/2\) and \(0\leq\phi(a-3/2\cdot u)\leq\frac{(a-3/2\cdot u+1/2)^{2}}{2}\leq\frac{(a-3/2 \cdot u+1/2)}{4}\). Thus \(\phi(3/2\cdot u-a)\leq 3/2\cdot u-a+\frac{(a-3/2\cdot u+1/2)}{4}<9/8\cdot u\) and \(p_{a}(3/2\cdot u)<9/8\). If \(a-1/2<3/2\cdot u<a\), then \(0<3/2\cdot u-a+1/2<1/2\) and \(\phi(3/2\cdot u-a)\leq\frac{(3/2\cdot u-a+1/2)^{2}}{2}<\frac{3/2\cdot u-a+1/2}{4 }<3/8\cdot u\) and \(p_{a}(u)<3/8\). If \(3/2\cdot u\leq a-1/2\), then \(p_{a}(3/2\cdot u)=0\). Thus always \(p_{a}(3/2\cdot u)<3/2\) and \(\alpha(u,v)>0\). Using them, we define a smooth function \(\Phi_{n,m}:\mathbb{R}^{n}\times D^{m}\to\mathbb{R}^{n}\times\mathbb{R}^{m}\) as follows: \[\Phi_{n,m}(u,v)=(\alpha(\|u\|,\|v\|)\cdot u,\beta(\|u\|,\|v\|)\cdot v).\] Let \((x,y)=\Phi_{1,1}(u,v)\), \((u,v)\in\mathbb{R}\times[-1,1]\). Then we have \[x=3/2\cdot u-p_{1}(3/2\cdot|u|)\cdot(1-\cos(\pi/2\cdot v))\cdot u,\] \[y=v+\phi(3/2\cdot|u|-1)\cdot\sin(\pi/2\cdot v).\] **Lemma A.1**.: \(\Phi_{1,1}:\mathbb{R}\times[-1,1]\to\mathbb{R}\times\mathbb{R}\) _is a diffeomorphism onto \(\mathbb{D}^{1,1}\subset\mathbb{R}\times\mathbb{R}\)._ Proof.: If \(v=0\), then we have \((x,y)=(u,0)\). 
If \(|v|=1\), then we have \((x,y)=((3/2-p_{1}(3/2\cdot|u|))\cdot u,(1+\phi(3/2\cdot|u|-1))\cdot v)\), and we have \(|x|=3/2\cdot|u|-\phi(3/2\cdot|u|-1)\), \(|y|=1+\phi(3/2\cdot|u|-1)\) and \(|x|+|y|=3/2\cdot|u|+1\). Hence we have \(|x|=1-\phi(1-3/2\cdot|u|)=1-\phi(2-|x|-|y|)\) which implies \((x,y)\in\partial\mathbb{D}^{1,1}\). If \(|u|\leq 1/3\), then \((x,y)=(3/2\cdot u,v)\), while we obtain, in general, \[|x|+|y| =3/2\cdot|u|-\phi(3/2\cdot|u|-1)(1-\cos(\pi/2\cdot|v|))+|v|+\phi(3/2 \cdot|u|-1)\sin(\pi/2\cdot|v|)\] \[=3/2\cdot|u|+|v|+\phi(3/2\cdot|u|-1)\cdot(\cos(\pi/2\cdot|v|)+ \sin(\pi/2\cdot|v|)-1)\geq 3/2\cdot|u|+|v|,\] since \(\cos\phi+\sin\theta=\sqrt{2}\sin(\theta+\pi/4)\geq 1\) if \(0\leq\theta\leq\pi/2\). Thus the image of \(\Phi_{1,1}\) is \(\mathbb{D}^{1,1}\). Now, let us calculate the Jacobian of \(\Phi_{1,1}\). In the case when \(u>0\), we have \[x=3/2\cdot u-\phi(3/2\cdot u-1)\cdot(1-\cos(\pi/2\cdot v)),\] \[y=v+\phi(3/2\cdot u-1)\cdot\sin(\pi/2\cdot v),\] and then, it follows that \(\frac{\phi(x,y)}{\phi(u,v)}=3/2\cdot(1-(1-\cos(\pi/2\cdot v))\cdot\lambda(3/2 \cdot u-1/2))+\frac{3\pi}{4}\cdot(\cos(\pi/2\cdot v)+(1-\cos(\pi/2\cdot v))\cdot \lambda(3/2\cdot u-1/2))\cdot\phi(3/2\cdot u-1)\). Hence, assuming \(\frac{\phi(x,y)}{\phi(u,v)}=0\), we obtain the following since \(0\leq\cos(\pi/2\cdot v)\), \((1-\cos(\pi/2\cdot v))\cdot\lambda(3/2\cdot u-1/2)\leq 1\) and \(\phi(3/2\cdot u-1)\geq 0\): \[(1-\cos(\pi/2\cdot v))\cdot\lambda(3/2\cdot u-1/2)=1.\] It implies \(\lambda(3/2\cdot u-1/2)=1\) and \(\cos(\pi/2\cdot v)=0\), and hence \(\frac{\phi(x,y)}{\phi(u,v)}=\frac{3\pi}{4}\cdot\phi(3/2\cdot u-1)\). Thus \(\frac{\phi(x,y)}{\phi(u,v)}=0\) implies \(\phi(3/2\cdot u-1)=0\). The condition on \(\lambda\) implies \(u\geq 1\), while the condition on \(\phi\) implies \(u\leq 1/3\), which is a contradiction, and we obtain \(\frac{\phi(x,y)}{\phi(u,v)}\neq 0\). In the case when \(u<0\), assuming \(\frac{\phi(x,y)}{\phi(u,v)}=0\), we are led to a contradiction as well, by using an arguments parallel to the case when \(u>0\). In the case when \(-1/3<u<1/3\), we have \((x,y)=(3/2\cdot u,v)\) and \(\frac{\phi(x,y)}{\phi(u,v)}=3/2\neq 0\). Thus \(\Phi_{1,1}\) has a smooth inverse function, and we have done. Now, we are ready to show Theorem 2.6. Let \(\Theta_{1,1}:\mathbb{D}^{1,1}\to\mathbb{R}\times D^{1}\) be the smooth inverse function of \(\Phi_{1,1}\). Then we obtain a smooth function \(\Theta_{n,m}\) for \((x,y)\in\mathbb{D}^{n,m}\) by the following formula: \[\Theta_{n,m}(x,y)=(\frac{1}{\alpha(u,v)}\cdot x,\frac{1}{\beta(u,v)}\cdot y), \quad(u,v)=\Theta_{1,1}(\|x\|,\|y\|).\] Then we have \(\Psi_{1,1}(u,v)=(\|x\|,\|y\|)\), and hence \((\|x\|,\|y\|)=(\alpha(u,v)\cdot u,\beta(u,v)\cdot v)\). Let \((u,v)=\Theta_{n,m}(x,y)\). Then we obtain \((\|u\|,\|v\|)=(\frac{1}{\alpha(u,v)}\cdot\|x\|,\frac{1}{\beta(u,v)}\cdot\|y\| )=(u,v)\) and \(\Psi_{n,m}(u,v)=(\alpha(\|u\|,\|v\|)\cdot u,\beta(\|u\|,\|v\|)\cdot v)=( \alpha(u,v)\cdot u,\beta(u,v)\cdot v)=(x,y)\). Thus \(\Theta_{n,m}\) is the inverse function of \(\Phi_{n,m}\). It completes the proof of the theorem. 
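Because the maps in this appendix are completely explicit, the claims of Lemma A.1 can be spot-checked numerically. The following minimal sketch (Python) is restricted to \(u>0\), as in the displayed formulas. The bump function \(\phi\) of Section 2 is not reproduced in this excerpt, so the sketch substitutes an assumed standard construction having the properties actually used in the proof (\(\phi(t)=0\) for \(t\leq -1/2\), \(\phi(t)=t\) for \(t\geq 1/2\), \(\phi(t)-\phi(-t)=t\), \(0\leq\phi'\leq 1\)); the three checks below depend only on these properties, not on the particular choice of \(\phi\).

```python
import numpy as np

# Assumed stand-in for the bump function phi of Section 2:
# phi(t) = int_{-1/2}^{t} lam(s) ds with a smooth step lam, so that phi = 0 on (-inf, -1/2],
# phi(t) = t on [1/2, inf), phi(t) - phi(-t) = t, and 0 <= phi' = lam <= 1.
def bump(r):
    return np.exp(-1.0/r) if r > 0 else 0.0

def lam(s):
    a, b = bump(s + 0.5), bump(0.5 - s)
    return a/(a + b)

def phi(t):
    if t <= -0.5:
        return 0.0
    s = np.linspace(-0.5, t, 600)
    v = np.array([lam(x) for x in s])
    return float(np.sum(0.5*(v[1:] + v[:-1])*np.diff(s)))   # trapezoid rule

# Phi_{1,1} for u > 0, exactly as written in the proof of Lemma A.1
def Phi(u, v):
    c = phi(1.5*u - 1.0)
    return 1.5*u - c*(1.0 - np.cos(np.pi*v/2.0)), v + c*np.sin(np.pi*v/2.0)

# check |x| + |y| >= 3/2|u| + |v|, with equality |x| + |y| = 3/2|u| + 1 on |v| = 1
for u in np.linspace(0.05, 2.0, 30):
    for v in np.linspace(-1.0, 1.0, 31):
        x, y = Phi(u, v)
        assert abs(x) + abs(y) >= 1.5*u + abs(v) - 1e-9
        if abs(v) == 1.0:
            assert abs(abs(x) + abs(y) - (1.5*u + 1.0)) < 1e-9

# check that the Jacobian determinant of Phi is strictly positive at interior sample points
h = 1e-6
for u in np.linspace(0.1, 1.9, 10):
    for v in np.linspace(-0.95, 0.95, 9):
        xu1, yu1 = Phi(u + h, v); xu2, yu2 = Phi(u - h, v)
        xv1, yv1 = Phi(u, v + h); xv2, yv2 = Phi(u, v - h)
        det = ((xu1 - xu2)*(yv1 - yv2) - (xv1 - xv2)*(yu1 - yu2))/(4.0*h*h)
        assert det > 0.0

print("all spot checks passed")
```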
## Appendix B Proof of Theorem 2.7 If we replace smooth function \(\alpha\) in the previous section with the following \(\widehat{\alpha}\), we must obtain a smooth bijection \(\widehat{\Phi}_{n,m}:(\mathbb{R}^{n}\times D^{m},\mathbb{S}^{n-1}\times D^{m}) \to(\mathbb{D}^{n,m},\mathbb{S}^{n-1,m})\): \[\widehat{\alpha}(u,v)=\nicefrac{{3}}{{2}}-p_{1}(u)+p_{2}(u)\cdot\cos(\pi/2 \cdot v),\] \[\widehat{\Phi}_{n,m}(u,v)=(\widehat{\alpha}(\|u\|,\|v\|)\cdot u,\beta(\|u\|, \|v\|)\cdot v).\] Let \((x,y)=\widehat{\Phi}_{n,m}(u,v)\). If \(\|u\|>1\), then \(\widehat{\alpha}(\|u\|,\|v\|)\cdot\|u\|=\nicefrac{{3}}{{2}}\cdot\|u\|-\phi( \nicefrac{{3}}{{2}}\cdot\|u\|-1)+\phi(\nicefrac{{3}}{{2}}\cdot\|u\|-2)\cos( \nicefrac{{\pi}}{{2}}\cdot\|u\|-2)\cos(\nicefrac{{\pi}}{{2}}\cdot\nu)>1\), and hence \(\widehat{\Phi}_{n,m}(u,v)\in\mathbb{S}^{n-1,m}\). If \(\|u\|<1\), then \(\nicefrac{{3}}{{2}}\cdot\|u\|-2<-\nicefrac{{1}}{{2}}\), and hence the properties of \(\phi\) imply \[\widehat{\alpha}(\|u\|,\|v\|)\cdot\|u\| =\nicefrac{{3}}{{2}}\cdot\|u\|-\phi(\nicefrac{{3}}{{2}}\cdot\|u \|-1)+\phi(\nicefrac{{3}}{{2}}\cdot\|u\|-2)\cos(\nicefrac{{\pi}}{{2}}\cdot\nu)\] \[=1-\phi(1-\nicefrac{{3}}{{2}}\cdot\|u\|)<1,\] and hence \(\widehat{\Phi}_{n,m}(u,v)\notin\mathbb{S}^{n-1,m}\). If \(\|u\|=1\), then \(\widehat{\alpha}(\|u\|,\|v\|)\cdot\|u\|=1\) and \(\beta(\|u\|,\|v\|)\cdot\|v\|\)\(=\|v\|+\phi(\nicefrac{{1}}{{2}})\cdot\sin(\nicefrac{{\pi}}{{2}}\cdot\|v\|)=\|v\|+ \nicefrac{{1}}{{2}}\cdot\sin(\nicefrac{{\pi}}{{2}}\cdot\|v\|)\) which ranges over \([0,\nicefrac{{3}}{{2}}]\), and hence we obtain \(\widehat{\Phi}_{n,m}(S^{n-1}\times D^{m})=\partial_{0}\mathbb{S}^{n-1,m}\). By arguments similar to that in Lemma A.1, we have that \(\widehat{\Phi}_{n,m}\) is diffeomorphic on \((\mathbb{R}^{n}\times S^{n-1})\times D^{m}\). Now, let \((n,m)=(1,1)\). Then, for \((x,y)=\widehat{\Phi}_{1,1}(u,v)\), we have \[x=\nicefrac{{3}}{{2}}\cdot u-\phi(\nicefrac{{3}}{{2}}\cdot|u|-1)+\phi( \nicefrac{{3}}{{2}}\cdot|u|-2)\cdot\cos(\nicefrac{{\pi}}{{2}}\cdot v),\] \[y=v+\phi(\nicefrac{{3}}{{2}}\cdot|u|-1)\cdot\sin(\nicefrac{{\pi}}{{2}}\cdot v).\] It then follows, by putting \(u=1\), that \[\frac{\partial\,x}{\partial\,u}(1,v) =\frac{3}{2}-\frac{3}{2}\cdot\lambda(1)+\frac{3}{2}\cdot\lambda(0) \cdot\cos(\nicefrac{{\pi}}{{2}}\cdot v)=0,\] \[\frac{\partial\,x}{\partial\,v}(1,v) =-\frac{\pi}{2}\cdot\phi(-\nicefrac{{1}}{{2}})\cdot\sin(\nicefrac{{ \pi}}{{2}}\cdot v)=0,\] which implies \(\frac{\hat{\vartheta}(x,y)}{\hat{\vartheta}(u,v)}(1,v)=0\), and \(\widehat{\Phi}_{1,1}\) is not a diffeomorphism. Further, we obtain that the Jacobian of \(\widehat{\Phi}_{n,m}\) is zero on \(S^{n}\times D^{m}\) for all \(n\), \(m\geq 1\). Details are left to the reader. ## Acknowledgements This research is partly based on second author's master thesis [10], and is partially supported by Grant-in-Aids for Challenging Research (Exploratory) JP18K18713 and Scientific Research (C) JP23K03093 both from JSPS (Norio Iwase).
2309.16260
A gate-tunable quantum phase transition in a topological excitonic insulator
Coulomb interactions among electrons and holes in two-dimensional (2D) semimetals with overlapping valence and conduction bands can give rise to a correlated insulating ground state via exciton formation and condensation. One candidate material in which such an excitonic state uniquely combines with non-trivial band topology is the atomic monolayer of tungsten ditelluride (WTe2), in which a 2D topological excitonic insulator (2D TEI) forms. However, the detailed mechanism of the 2D bulk gap formation in WTe2, in particular with regard to the role of Coulomb interactions, has remained a subject of ongoing debate. Here, we show that WTe2 is susceptible to a gate-tunable quantum phase transition, evident from an abrupt collapse of its 2D bulk energy gap upon ambipolar field-effect doping. Such gate tunability of a 2D TEI, into either n- or p-type semimetals, promises novel handles of control over non-trivial 2D superconductivity with excitonic pairing.
Yande Que, Yang-Hao Chan, Junxiang Jia, Anirban Das, Zhengjue Tong, Yu-Tzu Chang, Zhenhao Cui, Amit Kumar, Gagandeep Singh, Hsin Lin, Shantanu Mukherjee, Bent Weber
2023-09-28T08:53:54Z
http://arxiv.org/abs/2309.16260v1
# A gate-tunable ambipolar quantum phase transition in a topological excitonic insulator ###### Abstract Coulomb interactions among electrons and holes in two-dimensional (2D) semimetals with overlapping valence and conduction bands can give rise to a correlated insulating ground state via exciton formation and condensation. One candidate material in which such excitonic state uniquely combines with non-trivial band topology are atomic monolayers of tungsten ditelluride (WTe\({}_{2}\)), in which a 2D topological excitonic insulator (2D TEI) forms. However, the detailed mechanism of the 2D bulk gap formation in WTe\({}_{2}\), in particular with regard to the role of Coulomb interactions, has remained a subject of ongoing debate. Here, we show that WTe\({}_{2}\) is susceptible to a gate-tunable quantum phase transition, evident from an abrupt collapse of its 2D bulk energy gap upon ambipolar field-effect doping. Such gate tunability of a 2D TEI, into either \(n\)- and \(p\)-type semimetals, promises novel handles of control over non-trivial 2D superconductivity with excitonic pairing. ## I Introduction An excitonic insulator is a correlated insulator that arises from electron-hole interactions in semimetals with overlapping electron and hole pockets at the Fermi level (\(E_{\rm F}\)). First proposed in the 1960s [1; 2; 3; 4], only recently has experimental evidence for the excitonic insulating state been put forth in a small number of two-dimensional (2D) electronic systems [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Among them, excitonic pairing has been proposed [15; 16; 18; 19] as a contributing mechanism to stabilize the bulk energy gap in WTe\({}_{2}\) monolayers, previously understood as a single-particle band gap in the quantum spin hall (QSH) state [20; 21; 22; 23]. WTe\({}_{2}\) crystallizes in a monoclinic 1T' lattice structure (**Figure** 1a), result of a spontaneous lattice distortion that gives rise to a doubling of the crystal unit cell and inversion [24; 25] of the Te-\(p\) and W-\(d\) bands. Spin orbit coupling (SOC) [25; 26; 27; 28] lifts the degeneracy at the band crossing points and - depending on the strength of the SOC - has been predicted to give rise to a fully gapped band-insulator [22], or a 2D semimetal with overlapping electron and hole pockets [25] (**Figure** 1b). Qian et al. [25] first noted that in WTe\({}_{2}\) the SOC strength may be insufficient to open a gap, leaving it semimetallic. In contrast, a single-particle band gap of \(>\)100 meV was predicted by Zheng et al [22] who took both SOC and exchange correlations into account. Measurement of the WTe\({}_{2}\) electronic band structure in angle resolved photoemission spectroscopy (ARPES) [23], scanning tunnelling microscopy/spectroscopy (STM/STS) [23; 29], and transport [30; 31] have confirmed the presence of a 2D bulk gap, which has since been shown to be sensitive to external electric fields [32] and strain [33]. Meanwhile, early studies reported a semimetallic bulk [34; 35] in heavily \(n\)-doped WTe\({}_{2}\) monolayers, which appeared to be at odds with a band-insulator picture. A negative band gap with overlapping electron and hole pockets was found, as inferred from quasiparticle interference experiments. The presence of a long-range disorder induced Coulomb gap at the Fermi level confirmed the presence of a semimetallic state at high doping. 
Indeed, if a negative band gap exists in WTe\({}_{2}\), 2D correlations would be expected at low temperature, arising from Coulomb interactions among momentum-separated electrons and holes at \(E_{\rm F}\). These can give rise to bound electron-hole pairs (excitons) and their condensation [2; 18], as illustrated in **Figure** 1b. The resulting interaction-stabilized quasiparticle gap would thus be expected to depend strongly on the interaction strength, susceptible to temperature [17], electric fields [36] or charge doping [37; 11; 38]. Here, we provide direct evidence of such excitonic insulating state in WTe\({}_{2}\)--for the first time demonstrating a gate-controlled quantum phase transition (QPT), as evident from a rapid collapse of the 2D bulk gap upon ambipolar field-effect doping away from the charge-neutral point (CNP). ## II Results and Discussions Our main evidence for the gate-controlled QPT is summarized in **Figure** 1d-e. We employ a field-effect gated monolayer graphene as a substrate for van-der Waals (vdW) epitaxy of WTe\({}_{2}\) (**Figure** 1c). For this, a graphene monolayer was first mechanically exfoliated onto a SiO\({}_{2}\)(300nm)/Si wafer (**Figure** S1 of the supporting information) on which WTe\({}_{2}\) was subsequently grown by vdW epitaxy in ultra-high vacuum. When a gate bias is applied to the highly _p_-doped silicon backgate, it allows to tune the carrier density in the graphene within a range of -5.6 \(\times\) 10\({}^{12}\) cm\({}^{-2}\) (holes) to 2.3 \(\times\) 10\({}^{12}\) cm\({}^{-2}\) (electrons) (see **Figure** S2 of the supporting information). The associated mismatch in el electrochemical potential then allows charge transfer doping of the WTe\({}_{2}\) across the vdW gap. Measurements of the WTe\({}_{2}\) bulk LDOS at different gate voltages (\(V_{\rm G}\)) are shown in **Figure** 1d-f. The QPT becomes evident as a sharp transition between an insulating (gapped) and a semimetallic (ungapped) bulk LDOS with a sudden collapse of the gap at \(V_{\rm G}\) = 15 V. Such collapse of a well-formed and stable 2D bulk gap cannot be explained from a single-particle picture and constitutes key-evidence for the presence of a 2D correlated insulating state. Beyond the critical voltage of the QPT Figure 1: A Gate-tunable quantum phase transition (QPT) in WTe\({}_{2}\). **a**, Atomic structure of WTe\({}_{2}\) and corresponding first Brillouin zone (BZ). The lattice constants \(a\) and \(b\) are indicated. The high symmetry points (\(\Gamma\), X, Y) and \(\Lambda\) are indicated. **b**, Band diagrams showing the formation of the 2D TEI bulk energy gap. Band inversion and SOC leave a negative bulk gap (\(E_{\rm G}\)\(<\) 0), while excitonic interactions open a positive gap (\(E_{\rm G}\)\(>\) 0) at low temperature. **c**, Schematic of the gated WTe\({}_{2}\)/graphene sample for STM/STS. Grounding and bias of graphene and the back-gate electrodes are indicated. The inset shows a topographic image (5 \(\times\) 5 nm\({}^{2}\)) of the WTe\({}_{2}\) surface. Long-range modulations in the measured _z_-height are due to surface roughness of the underlying SiO\({}_{2}\) substrate. **d**,**e**, Differential conductance (d\(I\)/d\(V\)) measured in the bulk of a WTe\({}_{2}\) crystal at different gate voltages ranging between -55 V and +55 V as indicated. The spectra in **e** are vertically offset for clarity, with the zero-conductance levels indicated by horizontal dashed lines. 
**f**, Corresponding d\(I\)/d\(V\) intensity map as functions of energy and gate bias from which the data in **d** and **e** were extracted (horizontal dashed lines). The Fermi level (\(E\)\(=\)\(E_{\rm F}\)) is indicated by the vertical dashed line in **d** and **e**. The black arrows in **e** and **f** highlight the position of the charge neutral point (CNP). (\(V_{\rm G}>15\) V), only a V-shaped suppression of the LDOS remains at \(E_{\rm F}\), previously attributed to a Coulomb gap [35]. A local minimum at \(E-E_{\rm F}=-40\) meV reflects the position of the CNP [35], and further confirms the presence of an _n_-type semimetal. A weak effect of the gate on charge doping in the semimetallic phase over tens of volts in gate bias finally confirms a large density of states at \(E_{\rm F}\) in which the large carrier density screens any electric field applied. To gain further insight into the effects of field-effect doping, we describe the electronic structure of a WTe\({}_{2}\) monolayer in a \(\mathbf{k}\cdot\mathbf{p}\) model [19; 15] (See Appendix for detail). Near the \(\Gamma\) point, the Hamiltonian of a four-band model reads, \[\hat{h}\left(k\right)=\varepsilon_{+}\left(k\right)+\left[\varepsilon_{-} \left(k\right)+\delta\right]\tau^{z}+v_{x}k_{x}\tau^{x}s^{y}+v_{y}k_{y}\tau^{y} s^{0} \tag{1}\] where the \(\tau^{\mu}\) and \(s^{\mu}\) are Pauli matrices, representing the orbital and spin degrees of freedom. \(\tau^{z}=\pm 1\) refers to \(d\) and \(p\) orbitals, respectively. For the non-interacting Hamiltonian, we choose the same parameters as proposed in Ref. [19]. We consider an interacting Hamiltonian, \[H_{int}=\frac{1}{2N_{k}}\sum_{k,p,q,\alpha,\beta}U(q)c^{+}_{k+q,\alpha}c^{+}_{ p-q,\beta}c_{p,\beta}c_{k,\alpha} \tag{2}\] where \(c^{+}_{k,\alpha}\) (\(c_{k,\alpha}\)) are the creation (annihilation) operators for electrons with momentum \(\mathbf{k}\) and orbital index \(\alpha\), \(N_{k}\) is the total number of \(k\)-points, and \(U\left(\mathbf{q}\right)=\frac{2U_{0}}{q\xi}\tanh\frac{q\xi}{2}\) is a model screened interaction with a screening length \(\xi=25\) nm. We have adjusted the value of \(U_{0}\) such that the gap after self-consistent mean-field calculations matches that observed in the experiments (\(\sim 60-80\) meV). The so obtained \(U_{0}=25\) eV suggests [16] that the system is in a spin density wave (SDW) phase with finite SDW order parameters but close to the insulator boundary. To simulate charge doping, we further introduce a chemical potential term \(\mu\) in the non-interacting part of the Hamiltonian. The folded band structure, together with the unfolded spectral weights in the BZ, are shown for different doping levels in **Figure** 2, where we clearly see the transition from an insulating (gapped) to _n_- and _p_-type semimetallic (ungapped) phase, respectively. A direct comparison of measured and calculated LDOS is shown in **Figure** 3, demonstrating that the QPT occurs ambipolar in gate bias, as evident from the shift of CNP from -40 meV to +40 meV. At both gate bias polarities, the QPT occurs precisely when the Fermi level (\(E=E_{\rm F}\)) reaches either band edge, allowing for charge to be transferred into the WTe\({}_{2}\) monolayer. 
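To make the non-interacting part of Eq. (1) explicit, the following minimal numerical sketch (not the supercell mean-field code used for **Figure** 2) builds the \(4\times 4\) \(\mathbf{k}\cdot\mathbf{p}\) matrix with the parameter values listed in Appendix C and diagonalizes it along the \(\Gamma\)-X line (\(k_{y}=0\)), locating the hole-band maximum at \(\Gamma\) and the electron-band minima at \(\pm\Lambda\). Units (eV and Å\({}^{-1}\)) are those quoted in Appendix C.

```python
import numpy as np

# Pauli matrices for the orbital (tau) and spin (s) degrees of freedom
s0 = np.eye(2)
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
tx = np.array([[0.0, 1.0], [1.0, 0.0]])
ty = np.array([[0.0, -1.0j], [1.0j, 0.0]])
tz = np.diag([1.0, -1.0])

# parameter values quoted in Appendix C (energies in eV, momenta in 1/Angstrom)
a, b, m = -3.0, 18.0, 0.03
vx, vy, delta = 0.5, 3.0, -0.9

def h(kx, ky):
    """Eq. (1): eps_+ + (eps_- + delta) tau^z + vx kx tau^x s^y + vy ky tau^y s^0."""
    k2 = kx**2 + ky**2
    eps_d = a*k2 + b*k2**2
    eps_p = -k2/(2.0*m)
    eps_plus, eps_minus = 0.5*(eps_d + eps_p), 0.5*(eps_d - eps_p)
    return (eps_plus*np.eye(4)
            + (eps_minus + delta)*np.kron(tz, s0)
            + vx*kx*np.kron(tx, sy)
            + vy*ky*np.kron(ty, s0))

# bare bands along Gamma-X (ky = 0); each branch is doubly (spin-)degenerate
kx = np.linspace(-0.6, 0.6, 601)
bands = np.array([np.linalg.eigvalsh(h(k, 0.0)) for k in kx])
valence, conduction = bands[:, 1], bands[:, 2]

i_v, i_c = np.argmax(valence), np.argmin(conduction)
print("hole-band maximum     : %.3f eV at kx = %.2f 1/Angstrom" % (valence[i_v], kx[i_v]))
print("electron-band minimum : %.3f eV at kx = %.2f 1/Angstrom (+/-Lambda pockets)" % (conduction[i_c], kx[i_c]))
print("bare gap (negative = electron-hole overlap): %.3f eV" % (conduction[i_c] - valence[i_v]))
```

The interaction-stabilized gap of **Figure** 1 and **Figure** 3 is not contained in this single-particle sketch; it only serves to locate the bare electron and hole pockets between which the excitonic pairing develops.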
An interpolation between the respective positions of the CNP in the _n_- and _p_-type semimetallic phases (red double-headed arrow in **Figure** 3b) shows that at net zero doping, the Fermi level aligns precisely with the position of the CNP, and the excitonic gap is centered perfectly symmetrically about \(E=E_{\rm F}\). We thus suspect that the QPT is driven by a break-down of the Fermi surface's nesting condition such that the bare electronic susceptibility is suppressed. We further note that the effect of a vertical electric field in our experiments is limited to a rigid band shift in the gapped phase as shown in **Figure** 3c and that no net field-effect on the gap magnitude [32] was observed. Importantly, this suggests that in our samples charge transfer doping rather than the electric fields are responsible for the QPT observed. Indeed, from a measurement of the gate-dependent LDOS on bare graphene without any WTe\({}_{2}\) coverage (**Figure** S2 in the supporting information), we estimate a chemical potential shift of \(\sim\)150 meV over a gate voltage from -35 V to +25 V, similar in order compared to that measured in WTe\({}_{2}\), and consistent with that assumed in our \(\mathbf{k}\cdot\mathbf{p}\) calculations (100 meV), confirming the field-effect doping. Despite the presence of Coulomb correlations, the excitonic insulator phase is expected to be topologically non-trivial as the combination of inverted bands and SOC would demand the existence of 1D metallic edge states. Indeed, real-space tight-binding calculations for WTe\({}_{2}\) ribbons with different edges reveals the edges band crossing the Fermi energy along with the gapped bulk bands (**Figure** in the supporting information). Further, as shown in **Figure** 4, we clearly observe the expected enhanced LDOS on the crystal's edges (black curves) reflecting a metallic boundary mode, regardless of the position of the Fermi level. The pseudo-gap like suppression seen in the edge state's LDOS, has previously been shown to arise from a helical Tomonaga-Luttinger liquid (TLL) ground state [39; 40; 41], and is seen to remain strictly centered at \(E_{\rm F}\) regardless of doping (**Figure** 4b). From fits to a TLL model we extract a Luttinger parameter \(K\sim\) 0.3-0.4, consistent with previous work [39; 41]. The metallic edge states persist even in the semimetallic phase (black curve at \(V_{G}=40V\) in **Figure** 4b), which might be expected from the "custodial" glide [41; 42] symmetry protection of the helical edge in WTe\({}_{2}\), facilitated by a large band inversion of order hundreds of meV with band crossing and gap opening away from the high-symmetry points of the BZ. We note that in an excitonic insulating bulk, translational symmetry is expected to be reduced [2] due to the formation of charge density wave (CDW) order. In WTe\({}_{2}\), a CDW with wave vector \(|\mathbf{q}_{c}|\simeq\frac{1}{6}\frac{2\pi}{a}\) would be expected, connecting states in the hole (\(\Gamma\)) and electron (\(\pm\Lambda\)) pockets. Yet, despite claims of the excitonic insulating bulk [15; 16], no CDW order has been reported to date, which has previously been explained as the contributions of entangled spin, orbital and valley degrees, paired by time-reversal symmetries, may cancel each others respective contributions [16]. Breaking time reversal symmetry by application of external magnetic fields [28] or measurement with magnetic probes could potentially reveal the CDW (or SDW) order. 
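For orientation, the quoted CDW wave vector can be translated into the real-space modulation period that an STM linescan would resolve, and compared with the \(\Lambda\)-pocket momentum of the Appendix C parametrization. A minimal sketch follows; the in-plane lattice constant \(a\approx 3.5\) Å used in it is an assumed literature value (it is not quoted in this text), and only rough agreement between the two momenta should be expected from a simplified \(\mathbf{k}\cdot\mathbf{p}\) fit.

```python
import numpy as np

# CDW wave vector quoted in the text, |q_c| ~ (1/6)(2*pi/a); the in-plane lattice
# constant a ~ 3.5 Angstrom is an assumed literature value, not given in this text.
a_lat = 3.5
q_c = 2.0*np.pi/(6.0*a_lat)
print("q_c ~ %.2f 1/Angstrom -> real-space modulation period 6a ~ %.0f Angstrom" % (q_c, 6.0*a_lat))

# Lambda-pocket momentum of the Appendix C k.p parametrization: minimum of the conduction
# branch E+(k) = eps_+(k) + sqrt((eps_-(k) + delta)^2 + (vx*k)^2) along Gamma-X (ky = 0).
a, b, m, vx, delta = -3.0, 18.0, 0.03, 0.5, -0.9
k = np.linspace(1e-3, 0.6, 6000)
eps_d, eps_p = a*k**2 + b*k**4, -k**2/(2.0*m)
E_plus = 0.5*(eps_d + eps_p) + np.sqrt((0.5*(eps_d - eps_p) + delta)**2 + (vx*k)**2)
print("k_Lambda (k.p fit) ~ %.2f 1/Angstrom" % k[np.argmin(E_plus)])
```

The resulting period of roughly 2 nm sets the real-space scale on which such a modulation would appear in the local density of states.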
Alternatively, one might expect CDW order to reemerge in the presence of translational symmetry-breaking or a spin polarization along the edge [25; 43]. Indeed, we believe that the absence of translational invariance at the edge could explain periodic modulations previously observed [39] in Figure 3: Quantum phase transition (QPT) ambipolar in field-effect doping, comparing theory and experiment. **a**,**b**, Measured d\(I\)/d\(V\) curves (**a**) and map (**b**) taken on a nominally undoped (Fermi level midgap) WTe\({}_{2}\) crystal (see **Figure** S1 for topographic images). A QPT is clearly observed at both gate bias polarity. The red dashed lines indicate the charge neutral point (CNP) in the semimetallic WTe\({}_{2}\), and the double-headed arrow highlights the shift of CNP from _p_-type to _n_-type semimetal. **c**, Extracted band edges and gap size from the data in **b** indicating a negligible field-effect on the gap magnitude. **d**,**e**, Corresponding density of states (DOS) calculated based on our \(\mathbf{k}\cdot\mathbf{p}\) model. The black arrows in **a** and **c** highlight the measured and calculated DOS in _p_-type semimetal. **f**, Schematic phase diagram under electrochemical potential \(\mu\) showing _n_- and _p_-type semimetal (SM) and a 2D topological excitonic insulator (TEI). Figure 2: Effect of charge doping on the WTe\({}_{2}\) band structure. **a**-**c**, \(\mathbf{k}\cdot\mathbf{p}\) band structure calculation of a (**a**) _p_-doped (\(\mu\) = -0.75 eV), (**b**) charge neutral (\(\mu\) = -0.72 eV), and (**c**) _n_-doped (\(\mu\) = -0.65 eV) WTe\({}_{2}\) monolayer (see main text for detail) in a \(6\times 1\) supercell geometry. The spectral weight of the unfolded bands are superimposed, highlighted by purple markers. Black and green dashed lines indicate the position of Fermi level and charge neutral point, respectively. the charge density along the crystal edges of WTe\({}_{2}\) monolayers, reflecting CDW order. As we show in **Figure** S3, a fast Fourier Transform (FFT) of the energy-dependent LDOS seems to confirm this notion as the modulations have a periodicity of \(q\simeq\frac{1}{6}\frac{2\pi}{a}\), and are non-dispersive. A self-consistent real-space tight-binding calculation (see S3 of the supporting information for detail) agrees with the experimental signatures, including the period of the modulations and the exponential decay of edge state's LDOS into the 2D bulk (**Figure** S3 of the supporting information). Although atomic-level disorder at the edge can redistribute the LDOS [34, 39], and broaden the power spectral density around \(q=k_{\Lambda}\), we find that the CDW modulations remain intact, and of similar frequency. Finally, We note that the exact form of SOC in WTe\({}_{2}\) is still under debate [26, 28, 42, 44]. While an SOC type different from that assumed in Eq. (1) could influence the precise order parameters of the excitons due to the change in the bare band characters, the doping dependence and DOS inferred from our calculations should not have any significant dependence on the SOC type, as long as the resulting bare band dispersion remains similar. Future local probe spectroscopy experiments in magnetic fields and/or with magnetic probes [45] should allow to to reveal the CDW, SDW or spin spiral ground states and thus shed light on the precise SOC type. ## III Conclusion We have demonstrated a gate-tunable quantum phase transition (QPT), further confirming the topological excitonic insulating (TEI) state in WTe\({}_{2}\) monolayers. 
The QPT becomes evident from a rapid collapse of the 2D bulk gap upon ambipolar field-effect doping from a backgate, leading to a break-down of the Fermi surface's nesting condition. The presence of a 1D metallic edge state surrounding the interaction-stabilized gapped bulk, regardless of doping, confirms that bulk-boundary correspondence persists in the 2D TEI. Periodic modulations in the LDOS at the edge, with a wave vector \(q_{\rm c}\simeq\frac{1}{6}\frac{2\pi}{a}=k_{\rm\Lambda}\), further confirm the presence of a CDW. Our work suggests WTe\({}_{2}\) and materials with similar custodial symmetry to be candidates as gate-tunable topological insulators, given the sensitive control of the bulk electronic structure arising from electron interactions and the stability of the edge modes at surprisingly high temperature [31, 41]. The interplay of topology and 2D correlated excitonic condensation might further allow the realization of recent predictions of 2D triplet superconductivity with an excitonic pairing mechanism [46, 47, 48, 49].
Figure 4: Bulk-edge correspondence in the topological excitonic insulator WTe\({}_{2}\). **a**, Height profile and corresponding \(\mathrm{d}I/\mathrm{d}V\) intensity maps measured along a line from edge to bulk (arrow in inset) and under gate voltages from -50 V to 50 V. **b**, Individual \(\mathrm{d}I/\mathrm{d}V\) spectra comparing the bulk (gray lines) and edge (black lines) of WTe\({}_{2}\). Spectra are vertically offset for clarity, and the horizontal and vertical dashed lines indicate the zero conductance and the Fermi level, respectively.
###### Acknowledgements. This research is supported by the National Research Foundation (NRF) Singapore, under the Competitive Research Programme "Towards On-Chip Topological Quantum Devices" (NRF-CRP21-2018-0001), with partial support from a Singapore Ministry of Education (MOE) Academic Research Fund Tier 3 grant (MOE2018-T3-1-002). H.L. acknowledges support by the Ministry of Science and Technology (MOST) in Taiwan under Grant No. MOST 109-2112-M-001-014-MY3. S.M. would like to acknowledge the new faculty seed grant from IIT Madras under Project No. PHY/18-19/703/NFSC/SHAA. B.W. acknowledges a Singapore National Research Foundation (NRF) Fellowship (NRF-NRFF2017-11). ## Appendix A MBE growth Monolayer crystals of 1T'-WTe\({}_{2}\) were synthesized by molecular-beam epitaxy (MBE) on monolayer graphene, exfoliated on SiO\({}_{2}\) (300 nm)/Si in an Omicron Lab10 ultra-high vacuum (UHV) MBE chamber [39, 50] (base pressure below \(1\times 10^{-10}\) mbar). The freshly exfoliated monolayer graphene substrates were slowly annealed in UHV at 400 \({}^{\circ}\)C (ramping rate of \(\sim\)2 \({}^{\circ}\)C/min), followed by electrical contact formation by micro-soldering with indium [51] in an Ar-filled glove box. Prior to MBE, monolayer graphene substrates were further degassed in UHV at 180 \({}^{\circ}\)C for 30 min. WTe\({}_{2}\) crystals were grown by co-deposition of W (99.998%) and Te (99.999%) with a flux ratio of 1:280 and a substrate temperature of 160 \({}^{\circ}\)C for 1 hour to achieve a \(40\sim 50\%\) monolayer coverage. ## Appendix B Scanning tunnelling microscopy/spectroscopy Low-temperature scanning tunnelling microscopy and spectroscopy (STM/STS) were carried out in an Omicron low-temperature STM (\(\sim\)4.5 K) under UHV conditions (\(<1\times 10^{-10}\) mbar). 
Chemically etched W tips or mechanically cut platinum/iridium tips were calibrated against the Au(111) Shockley surface state before spectroscopy measurements. The spectroscopy measurements were performed using standard lock-in techniques with a modulation amplitude of \(V_{\rm ac}=2\) mV and a modulation frequency of 731.2 Hz. ## Appendix C \(\mathbf{k\cdot p}\) calculation In the four-band Hamiltonian (Eq. 1), the band energy \(\varepsilon_{\pm}(k)\) is given by \[\varepsilon_{\pm}(k)=\frac{1}{2}(\varepsilon_{d}(k)\pm\varepsilon_{p}(k)), \tag{10}\] where \(\varepsilon_{d}(k)=ak^{2}+bk^{4}\) and \(\varepsilon_{p}(k)=-\frac{k^{2}}{2m}\) with \(a=-3\), \(b=18\), \(m=0.03\). The remaining parameters in Eq. 1 were set to \(\nu_{x}=0.5\), \(\delta=-0.9\), and \(\nu_{y}=3\). The energy unit is eV and the length unit is Angstrom. Decoupling the interacting Hamiltonian (Eq. 2) with the standard mean-field procedure, we get \[H_{MF}=-\frac{1}{2N_{k}}\Big[\sum_{k}\sum_{\alpha,\beta}\Delta_{\alpha\beta}(k,nq_{c})c^{+}_{k+nq_{c},\beta}c_{k,\alpha}+\sum_{k}\sum_{\alpha,\beta}\Delta_{\beta\alpha}(k,nq_{c})c^{+}_{k,\alpha}c_{k+nq_{c},\beta}\Big], \tag{11}\] where the order parameter is \(\Delta_{\alpha\beta}(k,nq_{c})=\sum_{q}U(q)\langle c^{+}_{k+q,\alpha}c_{k+q+nq_{c},\beta}\rangle\). The numerical calculation was done in a truncated BZ in a region of \([-\frac{3}{2}q_{c},\frac{3}{2}q_{c}]\times[-\frac{1}{4},\frac{1}{4}]\) in units of the reciprocal lattice vector. The Hamiltonian is self-consistently solved with a uniform \(24\times 24\) \(k\)-grid. We included up to \(n=2\) order parameters in our calculations. The self-consistent cycle is stopped when the total energy is converged (\(<10^{-4}\) eV).
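For reference, a minimal numerical sketch of the bare band dispersions \(\varepsilon_{\pm}(k)\) defined above, using only the parameters quoted in this appendix, is given below (the script is illustrative and not part of the published calculation):

```python
import numpy as np

# Bare-band parameters from Appendix C (energies in eV, lengths in Angstrom)
a, b, m = -3.0, 18.0, 0.03

def eps_d(k):
    """d-band dispersion eps_d(k) = a k^2 + b k^4."""
    return a * k**2 + b * k**4

def eps_p(k):
    """p-band dispersion eps_p(k) = -k^2 / (2 m)."""
    return -k**2 / (2.0 * m)

def eps_pm(k):
    """Band energies eps_pm(k) = (eps_d(k) +/- eps_p(k)) / 2, as written above."""
    return 0.5 * (eps_d(k) + eps_p(k)), 0.5 * (eps_d(k) - eps_p(k))

k = np.linspace(-0.5, 0.5, 501)       # k-grid in 1/Angstrom (illustrative range)
e_plus, e_minus = eps_pm(k)
print(e_plus[:3], e_minus[:3])
```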
2302.14705
AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers
Self-attention-based transformer models have achieved tremendous success in the domain of natural language processing. Despite their efficacy, accelerating the transformer is challenging due to its quadratic computational complexity and large activation sizes. Existing transformer accelerators attempt to prune its tokens to reduce memory access, albeit with high compute overheads. Moreover, previous works directly operate on large matrices involved in the attention operation, which limits hardware utilization. In order to address these challenges, this work proposes a novel dynamic inference scheme, DynaTran, which prunes activations at runtime with low overhead, substantially reducing the number of ineffectual operations. This improves the throughput of transformer inference. We further propose tiling the matrices in transformer operations along with diverse dataflows to improve data reuse, thus enabling higher energy efficiency. To effectively implement these methods, we propose AccelTran, a novel accelerator architecture for transformers. Extensive experiments with different models and benchmarks demonstrate that DynaTran achieves higher accuracy than the state-of-the-art top-k hardware-aware pruning strategy while attaining up to 1.2$\times$ higher sparsity. One of our proposed accelerators, AccelTran-Edge, achieves 330K$\times$ higher throughput with 93K$\times$ lower energy requirement when compared to a Raspberry Pi device. On the other hand, AccelTran-Server achieves 5.73$\times$ higher throughput and 3.69$\times$ lower energy consumption compared to the state-of-the-art transformer co-processor, Energon. The simulation source code is available at https://github.com/jha-lab/acceltran.
Shikhar Tuli, Niraj K. Jha
2023-02-28T16:17:23Z
http://arxiv.org/abs/2302.14705v2
# AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers ###### Abstract Self-attention-based transformer models have achieved tremendous success in the domain of natural language processing. Despite their efficacy, accelerating the transformer is challenging due to its quadratic computational complexity and large activation sizes. Existing transformer accelerators attempt to prune its tokens to reduce memory access, albeit with high compute overheads. Moreover, previous works directly operate on large matrices involved in the attention operation, which limits hardware utilization. In order to address these challenges, this work proposes a novel dynamic inference scheme, DynaTran, which prunes activations at runtime with low overhead, substantially reducing the number of ineffectual operations. This improves the throughput of transformer inference. We further propose tiling the matrices in transformer operations along with diverse dataflows to improve data reuse, thus enabling higher energy efficiency. To effectively implement these methods, we propose AccelTran, a novel accelerator architecture for transformers. Extensive experiments with different models and benchmarks demonstrate that DynaTran achieves higher accuracy than the state-of-the-art top-\(k\) hardware-aware pruning strategy while attaining up to 1.2\(\times\) higher sparsity. One of our proposed accelerators, AccelTran-Edge, achieves 330K\(\times\) higher throughput with 93K\(\times\) lower energy requirement when compared to a Raspberry Pi device. On the other hand, AccelTran-Server achieves 5.73\(\times\) higher throughput and 3.69\(\times\) lower energy consumption compared to the state-of-the-art transformer co-processor, Energon. The simulation source code is available at [https://github.com/jha-lab/acceltran](https://github.com/jha-lab/acceltran). Accelerators; application-specific integrated circuits; machine learning; natural language processing; neural networks; transformers. ## I Introduction The transformer architecture [1], which is based on the self-attention mechanism [2], has gained widespread interest in the domain of natural language processing [3] and, recently, even in computer vision [4]. One reason is its massive parallelization capabilities on modern-day graphical processing units (GPUs), unlike traditional sequential models like long short-term memories [5] and recurrent neural networks [6] that are slow to train and thus may not perform as well. Transformers have been able to achieve state-of-the-art performance on diverse benchmarking tasks due to pre-training on massive public and private language corpora [7, 8, 9]. The massive models come with their own challenges. For instance, pre-training a large state-of-the-art model usually requires millions of dollars worth of GPU resources [10]. Furthermore, large transformer models also have a high memory footprint, making them challenging to train even on modern GPUs. Convolutional neural networks (CNNs) have been able to overcome these challenges with a plethora of application-specific integrated circuit (ASIC)-based accelerators, each specialized for a different set of models in its design space [11, 12]. These accelerators have specially-designed hardware modules that leverage sparsity in model weights, data reuse, optimized dataflows, and CNN mapping to attain high performance and energy efficiency [13]. 
However, CNN accelerators are incompatible with transformer workflows since they are optimized for the inner-product operation, the basis of a convolution operation, and not for matrix-matrix multiplication control flows. Some recent works attempt to accelerate transformers by reducing their memory footprint and the compute overhead of the self-attention operation. For instance, A\({}^{3}\) [14] contains several approximation strategies to avoid computing attention scores close to zero. SpAtten [15] leverages a cascade token pruning mechanism that progressively prunes unimportant tokens based on low attention probabilities, reducing overall compute complexity. However, the proposed 'top-\(k\)' pruning mechanism [15], a state-of-the-art hardware-aware dynamic inference method, has a high compute overhead, which partially offsets its throughput gains during model inference according to our experiments (details in Section V-A). Energon [16] approximates the top-\(k\) pruning method with its mixed-precision multi-round filtering algorithm. However, it only exploits sparsity in the attention probabilities, not in all possible multiplication operations in the transformer architecture (details in Section II-B). To tackle this problem, OPTIMUS [17] uses a set-associative rearranged compressed sparse column (SA-RCSC) format to eliminate ineffectual multiply-and-accumulate (MAC) operations. However, it only exploits sparsity in the weight matrices and not the activations, i.e., the matrices formed from intermediate MAC operations. It also only works with encoder-decoder models, where the decoder is known to support limited parallelism. Leveraging encoder-only models, which have recently been shown to perform well even on translation and language generation tasks [18, 19], not only reduces the critical path by 2\(\times\) but also improves hardware utilization. Further, these works implement an entire matrix multiplication over an array of processing elements (PEs), which are the basic compute blocks of an accelerator. OPTIMUS [17], with its SA-RCSC sparse matrix format, does not break down the matrices involved into multiple _tiles_ [implemented in general matrix multiplication (GEMM) pipelines] in order to improve hardware utilization. FTRANS [20] and SpAtten [15] break down a matrix-matrix multiplication operation into multiple matrix-vector multiplication operations, losing out on data reuse capabilities. This also limits the scope of parallelization (details in Section V-A). Data reuse, parallelization, and optimal hardware utilization are crucial to obtaining high throughput and energy efficiency. Energon [16] is a co-processor and not a full-fledged accelerator. This limits the scope of optimization across the entire pipeline, resulting in superfluous off-chip accesses. Field-programmable gate array (FPGA)-based transformer accelerators have also been proposed owing to their low cost [20, 21, 22]. However, they suffer from performance and power inefficiencies due to bit-level reconfigurable abstractions and correspondingly high interconnect overheads [23]. To overcome the above challenges, we propose AccelTran, a novel cycle-accurate accelerator for transformer models. Our main contributions are as follows. * We propose a granular and hardware-aware dynamic inference framework, DynaTran, for transformers that dynamically prunes all activations in order to remove ineffectual MAC operations. 
DynaTran has much less compute overhead compared to previous works [15, 16], enabling higher throughput for model inference. * To _efficiently_ execute DynaTran, we design and implement an ASIC-based architecture called AccelTran. Instead of using traditional encoder-decoder models [17], we leverage recently-proposed encoder-only models [1], thus reducing the critical path by 2\(\times\) and improving throughput and hardware utilization. Further, unlike previous works [16], AccelTran's dynamic inference pipeline is agnostic to the pre-processed weight pruning strategy. * We propose the use of _tiled_ matrix multiplication for our transformer accelerator. For this, we leverage a novel mapping scheme from the transformer model to the tiled operations that maximizes hardware utilization and improves parallelization. * We also formulate and implement, for the first time, various _dataflows_ for the transformer and select the optimal dataflow that maximizes data reuse to improve energy efficiency. * We further leverage monolithic-3D RRAM [24] for higher memory bandwidth. This alleviates the performance bottleneck in transformer inference since state-of-the-art models are huge and thus memory-bound [25, 8]. Our proposed control block maps the transformer computational graph to scheduled hardware-implementable operations. It leverages the high-bandwidth monolithic-3D RRAM to schedule these operations intelligently, enabling high throughput and energy efficiency. We also support LP-DDR3 memory for low-cost edge solutions. The rest of the article is organized as follows. Section II presents background on transformer acceleration. Section III illustrates the methodology underpinning the DynaTran and AccelTran frameworks in detail. Section IV describes the experimental setup and baselines that we compare against. Section V discusses the results. Section VI compares related works and suggests future work directions. Finally, Section VII concludes the article. ## II Background and Motivation In this section, we provide background on various compute operations employed in a transformer model and previous works on transformer pruning and dynamic inference (sometimes interchangeably termed as dynamic pruning [15, 16]). ### _The Transformer Model_ We present the details of the memory and compute operations in the transformer model next. #### Ii-A1 Compute Operations Table I summarizes the required memory load and compute operations in a transformer model. The first is the loading of word embeddings and position encodings, which take up a significant fraction of the weights in a transformer. Here, \(\mathbf{H}_{emb}\) corresponds to the embeddings of all tokens in the vocabulary (vocabulary size is 30,522 for the BERT [1] family of models). We represent each token by a vector of length \(h\), which is the hidden dimension of the transformer (e.g., \(h=128\) for BERT-Tiny [26] and \(h=768\) for BERT-Base [1]). Then, we load the weight matrices for the multi-head attention operations. Here, \(\mathbf{W}_{i}^{\text{Q}}\), \(\mathbf{W}_{i}^{\text{K}}\), and \(\mathbf{W}_{i}^{\text{V}}\in\mathbb{R}^{h\times h/n}\) are needed in each attention head, where \(n\) is the number of attention heads. Subsequent compute operations (color-coded **blue** for matrix multiplication and **green** for softmax) are employed in self-attention [2]. Intermediate matrices are called _activations_; those that are loaded from memory are called _weights_. 
\(\mathbf{W}_{i}^{\text{O}}\in\mathbb{R}^{h/n\times h/n}\) maps the attention probabilities to output scores. Then, we add the input to the output of the multi-head attention (which is formed by concatenating the output of all attention heads) and normalize the resultant matrix. This is the layer-norm operation (color-coded orange) that is used to reduce covariance shifts [27]. Finally, the layer norm feeds the feed-forward operation that, in turn, feeds the layer norm. GeLU is the activation function commonly used in transformers [1, 28].

| Operation | Definition |
| --- | --- |
| _Word embedding and position encoding_ | |
| **M-OP-0** | \(\mathbf{H}=\mathbf{H}_{emb}+\text{PE}(\mathbf{H}_{emb})\) |
| _Multi-head attention_ | |
| **M-OP-[1-4]** | load \(\mathbf{W}_{i}^{\text{Q}}\), \(\mathbf{W}_{i}^{\text{K}}\), \(\mathbf{W}_{i}^{\text{V}}\), \(\mathbf{W}_{i}^{\text{O}}\) |
| **C-OP-[1-3]** | \(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}=\mathbf{H}\mathbf{W}_{i}^{\text{Q}},\mathbf{H}\mathbf{W}_{i}^{\text{K}},\mathbf{H}\mathbf{W}_{i}^{\text{V}}\) |
| **C-OP-4** | \(\mathbf{A}_{i}=\mathbf{Q}_{i}\mathbf{K}_{i}^{\top}\) |
| **C-OP-5** | \(\mathbf{S}_{i}=\text{softmax}\left(\mathbf{A}_{i}/\sqrt{h}\right)\) |
| **C-OP-6** | \(\mathbf{P}_{i}=\mathbf{S}_{i}\mathbf{V}_{i}\) |
| **C-OP-7** | \(\mathbf{H}_{i}^{\text{MHA}}=\mathbf{P}_{i}\mathbf{W}_{i}^{\text{O}}\) |
| _Add and layer-norm_ | |
| **C-OP-8** | \(\mathbf{H}^{\text{LN}}=\text{layer-norm}(\mathbf{H}^{\text{MHA}}+\mathbf{H})\) |
| _Feed forward_ | |
| **M-OP-[5-6]** | load \(\mathbf{W}^{\text{F1}},\mathbf{W}^{\text{F2}}\) |
| **C-OP-9** | \(\mathbf{H}^{\text{F1}}=\text{GeLU}(\mathbf{W}^{\text{F1}}\mathbf{H}^{\text{LN}})\) |
| **C-OP-10** | \(\mathbf{H}^{\text{F2}}=\text{GeLU}(\mathbf{W}^{\text{F2}}\mathbf{H}^{\text{F1}})\) |
| _Layer-norm_ | |
| **C-OP-11** | \(\mathbf{H}^{\text{O}}=\text{layer-norm}(\mathbf{H}^{\text{F2}})\) |

TABLE I: Memory and compute operations in a transformer. #### Ii-A2 Memory Requirements Fig. 1 shows the memory requirements for BERT-Tiny and BERT-Base. BERT-Tiny has higher memory requirements for word and position embeddings (compared with BERT-Base) relative to requirements for weights and activations. Further, activations take up much memory, 8.98\(\times\) that of the weights for BERT-Tiny and 2.06\(\times\) for BERT-Base. The total main memory requirements for the two models are 52.8MB and 3.4GB, respectively, when only the weights and embeddings are stored. Activations are formed at runtime and stored in internal registers or on-chip buffers. As transformer model sizes (calculated solely in terms of weights) keep increasing [8], the memory budget for running them on hardware accelerators also has to account for the commensurate increase in activations. ### _Sparsity in Self-Attention_ Researchers have striven to reduce the computational complexity of transformers by pruning, during pre-training or fine-tuning, the transformer weights [29, 30]. Previous works have also proposed various methods to reduce the quadratic complexity of the self-attention operation [31]. Distillation [32] recovers the accuracy loss due to such pruning techniques. However, all these works prune the model while training; moreover, they only prune the weights. 
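To ground where such sparsity can arise at inference time, the per-head computations of Table I can be summarized in the following minimal NumPy sketch (the dimensions, the GeLU approximation, and the random data are illustrative only and do not reflect the accelerator implementation):

```python
import numpy as np

N, h = 8, 16                      # sequence length and hidden dimension (illustrative)
rng = np.random.default_rng(0)
H = rng.normal(size=(N, h))       # token representations after M-OP-0

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gelu(x):                      # tanh approximation of GeLU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

# M-OP-[1-4] with a single head (n = 1); weights applied on the right for convenience
Wq, Wk, Wv, Wo = (rng.normal(size=(h, h)) * 0.1 for _ in range(4))
Q, K, V = H @ Wq, H @ Wk, H @ Wv              # C-OP-[1-3]
A = Q @ K.T                                   # C-OP-4
S = softmax(A / np.sqrt(h))                   # C-OP-5
P = S @ V                                     # C-OP-6
H_mha = P @ Wo                                # C-OP-7
H_ln = layer_norm(H_mha + H)                  # C-OP-8
Wf1, Wf2 = rng.normal(size=(h, h)) * 0.1, rng.normal(size=(h, h)) * 0.1  # M-OP-[5-6]
H_f1 = gelu(H_ln @ Wf1)                       # C-OP-9
H_f2 = gelu(H_f1 @ Wf2)                       # C-OP-10
H_out = layer_norm(H_f2)                      # C-OP-11
print(H_out.shape)
```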
During inference, sparse matrices with ineffectual values may be formed _dynamically_ from both activations and weights. Such ineffectual values must be pruned at runtime to improve energy efficiency and hardware utilization. SpAtten [15] proposed the top-\(k\) pruning method. It essentially identifies query-key pairs that produce large attention probabilities at runtime. Given an attention score matrix (\(\mathbf{S}_{i}\) in Table I), it keeps the \(k\) largest elements in each row to obtain the probability matrix (\(\mathbf{P}_{i}\)) and neglects the rest. Even though this method only results in a minor accuracy loss, it has a high overhead (as we show experimentally in Section V-A) due to its \(\mathcal{O}(N^{3})\) complexity. Further, a matrix multiplication operation benefits from sparsification when small values, which do not have much effect on the final result, are completely pruned out so that the hardware does not have to implement the corresponding MAC operations. SpAtten only considers the attention probabilities (\(\mathbf{P}_{i}\)), but not all the matrix multiplication operations presented in Table I. Thus, it loses out on gains that could be obtained by pruning other matrices as well. We compare it with our proposed method, DynaTran, in Section V-A. ## III Methodology Fig. 2 presents a flowchart for the AccelTran simulation pipeline. We first weight-prune the transformer that is provided as input, either using movement pruning (MP) [30] or DynaTran. Then, we tile the transformer model into granular compute and memory operations. These tiled operations are passed to the AccelTran simulator, which implements the tiled operations, in hardware, in a cycle-accurate manner. We now present the DynaTran framework for efficient dynamic inference with the transformer model. We also present AccelTran, a cycle-accurate accelerator for implementing this framework efficiently in hardware. ### _DynaTran_ Unlike the top-\(k\) pruning algorithm [15], we propose a low-overhead dynamic inference method that quickly prunes ineffectual weight and activation values at runtime. For a given matrix, which is either loaded as a weight matrix from memory or is an activation matrix obtained from previous MAC operations, DynaTran prunes values with a magnitude less than a given threshold \(\tau\). Mathematically, an input matrix \(\mathbf{M}\in\mathbb{R}^{m\times n}\) is pruned to \(\mathbf{M}^{\text{p}}\) as follows: \[\mathbf{M}^{\text{p}}_{ij}=\begin{cases}\mathbf{M}_{ij}&\text{if }|\mathbf{M}_{ij}| \geq\tau\\ 0&\text{if }|\mathbf{M}_{ij}|<\tau\end{cases}\] This simple comparison operation incurs negligible compute overhead at runtime. This is important since transformer evaluation involves many such matrices at runtime, most of which are on the critical path for model computation. Further, each comparison operation can be parallelized, ensuring that pruning only takes up one clock cycle. This has a much lower overhead compared to SpAtten [15] and Energon [16] that have dedicated engines for this operation. We now define the pruning ratio (or level of sparsity) for the output matrix as: \[\rho(\mathbf{M}^{\text{p}})=\frac{\sum_{x\in\mathbf{M}^{\text{p}}}\delta_{x,0} }{m\times n}\] where \(\delta\) is the Kronecker delta function. We profile the resultant sparsity in the weights and activations for different transformer models on diverse applications to obtain a desired \(\rho\). One or more such profiled curves can be stored in memory. 
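The following minimal NumPy sketch (purely illustrative; the hardware realization is described in Section III-B5) summarizes the pruning step, the resulting sparsity \(\rho\), and the look-up of \(\tau\) from such a profiled curve:

```python
import numpy as np

def dynatran_prune(M: np.ndarray, tau: float) -> np.ndarray:
    """Zero out every entry whose magnitude is below the threshold tau."""
    return np.where(np.abs(M) >= tau, M, 0.0)

def pruning_ratio(M_p: np.ndarray) -> float:
    """Fraction of zero entries in the pruned matrix (the sparsity rho)."""
    return float((M_p == 0).sum()) / M_p.size

def threshold_for_sparsity(taus: np.ndarray, rhos: np.ndarray, rho_target: float) -> float:
    """Look up tau from a pre-profiled (tau, rho) curve; assumes rhos increases with tau."""
    return float(np.interp(rho_target, rhos, taus))

# Illustrative use on a random activation matrix
rng = np.random.default_rng(0)
M = rng.normal(scale=0.05, size=(128, 128))
taus = np.linspace(0.0, 0.1, 51)
rhos = np.array([pruning_ratio(dynatran_prune(M, t)) for t in taus])   # profiled curve
tau = threshold_for_sparsity(taus, rhos, rho_target=0.5)
M_p = dynatran_prune(M, tau)
print(f"tau = {tau:.4f}, achieved sparsity = {pruning_ratio(M_p):.2f}")
```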
For the desired values of \(\rho\), we determine the corresponding \(\tau\) at runtime through a simple look-up operation. We present such curves in Section V-A to compare the throughput of our proposed approach with top-\(k\) pruning.
Fig. 1: Memory requirements for (a) BERT-Tiny and (b) BERT-Base.
Fig. 2: AccelTran workflow for an input transformer model and its acceleration in hardware.
### _The AccelTran Simulator_ We present details of the proposed accelerator simulator next. #### Iii-B1 Tiling and Dataflow As per Table I, most compute operations in the transformer model are matrix multiplication operations. Thus, it is important to optimize these operations for high gains. Unlike previous works that perform matrix multiplications directly using large MAC units, we propose using tiled matrix multiplication (primarily employed by modern GPUs [33]). Tiling the operations helps with better utilization of resources and enables massive parallelization. Fig. 3 shows the tiling operation along with an example _dataflow_. We can also think of a dataflow as a loop-unrolling scheme. The four for-loops can be unrolled in any permutation (giving 24 possible ways to unroll the loops, i.e., 24 dataflows). Multiplication between two tiles (say, weights **W[b,i,k]** and activations **A[b,k,j]**) is performed by a MAC lane (in parallel, based on the number of MAC units). Each dataflow results in different data reuse capabilities. For example, if only four MAC lanes are available, with the dataflow shown in Fig. 3, when **j** changes from **0** to **1** (**b** and **i** remaining constant), the MAC lanes can reuse the corresponding weights **W[b,i,k]**, **k** \(\in\) [**0**,...,**N2**]. Similarly, other dataflows would result in different reuse capabilities for different input matrix sizes. We show the reuse instances and corresponding energy savings for this example in Section V-B. No previous work has leveraged different dataflows to improve data reuse in transformer evaluation. #### Iii-B2 Accelerator Organization Taking inspiration from a state-of-the-art CNN accelerator, SPRING [12], we leverage monolithic-3D integration to connect to an on-chip 3D resistive random-access memory (RRAM) [24]. In monolithic-3D integration, multiple device tiers are fabricated on one substrate wafer, connected through monolithic inter-tier vias that allow much higher density than traditional through-silicon-via-based 3D integration [34]. This leaves much more space for logic and also permits high memory bandwidth, both of which are crucial for large state-of-the-art transformer models. For scalable edge deployments, we also support an off-chip dynamic RAM (DRAM). Fig. 4 shows the organization of the accelerator tier in the proposed architecture. The control block takes the instruction stream for the transformer model from the host CPU. The weights and embeddings are brought on-chip from the off-chip DRAM, or from the monolithic-3D RRAM, by the direct memory access (DMA) controller. The activation and the weight buffers store the activations and weights, respectively, in a compressed format (discussed in Section III-B6). Data compression relies on binary masks (stored in the mask buffer). The PEs use the compressed data and the associated masks to perform the main compute operations in the transformer.
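Before detailing the processing elements, the tiled multiplication and the [**b,i,j,k**] loop order of Section III-B1 can be sketched in software as follows (the tile sizes and the per-tile multiplication are illustrative stand-ins for the MAC-lane hardware):

```python
import numpy as np

def tiled_matmul(W, A, tb=1, ti=16, tj=16, tk=16):
    """Tiled batched matmul C[b] = W[b] @ A[b] using the [b, i, j, k] loop order."""
    B, X, Y = W.shape
    _, _, Z = A.shape
    C = np.zeros((B, X, Z))
    for b in range(0, B, tb):                  # outermost loop: batch tiles
        for i in range(0, X, ti):
            for j in range(0, Z, tj):          # for fixed (b, i), every j revisits the
                for k in range(0, Y, tk):      # same set of W tiles (b, i, k): weight reuse
                    w_tile = W[b:b+tb, i:i+ti, k:k+tk]
                    a_tile = A[b:b+tb, k:k+tk, j:j+tj]
                    # each (w_tile, a_tile) pair is what one MAC lane would multiply
                    C[b:b+tb, i:i+ti, j:j+tj] += w_tile @ a_tile
    return C

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 64, 64))
A = rng.normal(size=(4, 64, 128))
assert np.allclose(tiled_matmul(W, A), W @ A)   # tiling does not change the result
```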
#### Iii-B3 Processing Elements Fig. 5 shows the main modules present inside a PE, which is the basic compute block in our accelerator. The compressed data are stored in local registers of the PE by the activation first-in-first-out (FIFO) and weight FIFO registers. The data then enter the DynaTran module that induces sparsity based on the desired \(\rho\). As explained in Section III-A, this module prunes the given weights or activations based on a pre-calculated threshold \(\tau\). The sparse data then enter the pre-compute sparsity module with the binary masks. This module converts the input data into a zero-free format based on the associated masks. The PE then forwards this zero-free data to the MAC lanes (for matrix multiplication), softmax modules (for the softmax operation), or the layer-norm module (for the layer-norm operation). The zero-free data eliminate any ineffectual computations in these modules. Finally, the post-compute sparsity module implements the inverse of this operation on the output activations, before storing them in the activation FIFO register and, eventually, the main activation buffer.
Fig. 3: Tiling of a matrix multiplication operation along with a selected dataflow (specifically, **[b,i,j,k]**). Here, a tensor is shown instead, with the first dimension being the batch size.
Fig. 4: Accelerator organization.
Fig. 5: Internal components of a PE.
#### Iii-B4 MAC Lanes MAC lanes are responsible for multiplication between two tiles in a parallelized fashion. Let the tiles be denoted by \(\mathbf{W}\in\mathbb{R}^{b\times x\times y}\) and \(\mathbf{A}\in\mathbb{R}^{b\times y\times z}\) for the considered matrix (in general, tensor) multiplication. Then, the number of multiplication operations is \(n_{o}=b\times x\times y\times z\). Each MAC lane in AccelTran has \(M\) multipliers. Thus, the minimum number of cycles to compute the tiled operation is \(n_{o}/M\). Fig. 6 shows the implementation of a MAC lane. We store all activation and weight data in fixed-point format with \((\text{IL}+\text{FL})\) bits, denoting integer length and fractional length, respectively [12]. The module first feeds the data to the \(M\) multipliers, then the corresponding outputs to the adder tree over multiple stages. We represent the products with \(2\times(\text{IL}+\text{FL})\) bits to prevent overflow. The accumulations also use this bit-width. The depth of the adder tree is \(\log_{2}M\) for the \(M\) multipliers in our MAC lane. The module then passes the data to the output register. For feed-forward operations, where an activation function is required, the GeLU module implements this nonlinearity at the output of the MAC units. All other compute modules also work with the \((\text{IL}+\text{FL})\) bits. #### Iii-B5 Dynamic Inference Modules To execute DynaTran pruning, we implement a low-overhead DynaTran module that prunes ineffectual values in the input activations or weights. As explained in Section III-A, we prune the values of the input matrices by comparing their magnitude with a pre-determined threshold \(\tau\). Fig. 7 shows how this is implemented, in parallel, for the entire tile. For an input tile \(\mathbf{M}\in\mathbb{R}^{b\times x\times y}\), we use \(b\times x\times y\) comparators. The threshold calculator determines the required threshold, using the desired \(\rho\) and the pre-profiled transfer functions for different transformer models on diverse applications. The internal register stores these transfer functions loaded from memory before running transformer evaluation. If the output of the comparator is zero, we set the corresponding mask bit to one. Here, we represent the lines carrying mask information in grey and those carrying activation/weight information in black.
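A back-of-the-envelope sketch of the MAC-lane cycle accounting from Section III-B4 is shown below (the tile shapes and the lane width \(M=16\) are illustrative):

```python
import math

def mac_lane_cycles(b, x, y, z, M=16):
    """Minimum cycles to multiply a (b,x,y) weight tile by a (b,y,z) activation tile with M multipliers."""
    n_o = b * x * y * z              # number of scalar multiplications
    return math.ceil(n_o / M)

def adder_tree_depth(M=16):
    """Number of stages in the adder tree that reduces M products."""
    return int(math.log2(M))

# Example: one (1, 16, 16) weight tile times one (1, 16, 16) activation tile
print(mac_lane_cycles(1, 16, 16, 16), "cycles,", adder_tree_depth(), "adder-tree stages")
```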
#### Iii-B6 Sparsity-aware Acceleration To exploit sparsity and skip ineffectual activations and weights, and reduce memory footprint, AccelTran uses a binary-mask scheme to encode the sparse data and perform computations directly in the encoded format. Compared to the regular dense format, the pre-compute sparsity module compresses data by removing all the zero elements. In order to retain the shape of the uncompressed data, we use an extra binary mask [12]. The binary mask has the same shape as the uncompressed data, where each binary bit in the mask is associated with one element in the original data vector. If the entry in the mask is 1, it means that the corresponding activation/weight entry is ineffectual and should not be used for further computation. Fig. 8 illustrates the pre-compute sparsity module. It takes the zero-free data and binary mask vectors as inputs and generates an output mask and zero-free activations/weights for the MAC lanes, softmax modules, or the layer-norm module. The output binary mask indicates the common indices of non-zero elements in both the activation and weight vectors. The module computes this mask using a bit-wise **AND** function over the input activation and weight masks. The two **XOR** gates then generate the filter masks. Based on the filter masks, the filter prunes the activations/weights. Finally, the zero-collapsing shifter compresses the activations/weights to feed zero-free data to the compute modules for further computation [12]. Thus, we completely skip ineffectual computations, improving throughput and energy efficiency.
Fig. 6: Architecture of the MAC Lane.
Fig. 7: DynaTran module. The wires for mask bits are in grey.
Fig. 8: Pre-compute sparsity module.
#### Iii-B7 Simulator Flow Fig. 9 shows the simulation flow for evaluating the AccelTran architecture. We implement the different modules presented above at the register-transfer level (RTL) with SystemVerilog. Design Compiler [35] synthesizes the RTL design using a 14nm FinFET technology library [36]. Capo [37], an open-source floorplacer, performs floorplanning. We did part of the floorplanning by hand. The net area reported is after floorplanning (including whitespaces). FinCACTI [38], a cache modeling tool for deeply-scaled FinFETs, models the on-chip buffers. NVSim [39] and NVMain [40] model the main memory. We then plug the synthesized results into a Python-based cycle-accurate simulator. #### Iii-B8 Smart Scheduling of Tiled Operations AccelTran simulates various operations in the transformer model in a tiled fashion. As discussed earlier, we tile each compute operation's activation/weight matrices. We then assign each such tiled operation to a designated module based on the type of compute operation. Modules that are not being used are power-gated to reduce leakage power draw. Transformer inference may run into either memory or compute stalls if the corresponding prerequisites are not met. As the names suggest, a memory stall halts a memory operation from being executed. Similarly, a compute stall halts a compute operation. There is a memory stall if the buffer is not ready to load/store more data as some data are already being written or read. Compute operations require some activations/weights in the buffers. There could be a compute stall if the required matrix is not yet loaded into the buffer. A memory stall can also occur if the compute modules are using the current data in the buffer and there is no space left to add more data; this stall persists until the data required by the running compute operations are no longer needed and can be evicted. A memory stall can likewise occur if the compute operation producing an activation has not finished before that activation is to be stored. Finally, if all compute modules for a specific type of compute operation are busy, this can also lead to a compute stall.
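A highly simplified sketch of these issue/stall checks is given below; the class and function names are hypothetical and only summarize the conditions just listed, not the simulator's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Buffer:
    busy: bool = False
    free_space: int = 4                    # in tiles (illustrative)
    contents: set = field(default_factory=set)

@dataclass
class Module:
    idle: bool = True

def can_issue_memory_op(kind, size, producer_done, buf):
    """A load/store stalls if the buffer is busy, full, or the producing compute op is unfinished."""
    if buf.busy:
        return False                       # buffer already being read or written
    if kind == "load" and buf.free_space < size:
        return False                       # must wait for evictions to free space
    if kind == "store" and not producer_done:
        return False                       # activation tile not yet computed
    return True

def can_issue_compute_op(required_tiles, buf, modules):
    """A compute op stalls if operands are missing from the buffer or all matching modules are busy."""
    if not required_tiles <= buf.contents:
        return False
    return any(m.idle for m in modules)

buf = Buffer(contents={"W[0,0,0]", "A[0,0,0]"})
macs = [Module(idle=False), Module(idle=True)]
print(can_issue_compute_op({"W[0,0,0]", "A[0,0,0]"}, buf, macs))   # True: operands present, a lane is idle
```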
The control block schedules various compute and memory operations to maximize hardware utilization. Since transformer models execute the same sequence of operations for every attention head, assigning equal priority to each head would result in poor usage of specialized resources. Hence, AccelTran staggers the operation of different heads. For instance, in BERT-Tiny, it gives more priority to one head so that the relevant MAC operations are completed first for that head. Then, when the first head reaches the softmax operation, MAC lanes can be assigned to the second head. This results in simultaneous utilization of the MAC lanes and softmax modules, thus increasing hardware utilization and improving throughput. Fig. 10 presents a working schematic of the staggered implementation in BERT-Tiny's MAC and softmax operations (i.e., for two attention heads). In the staggered case, in Fig. 10(b), MAC lanes and softmax modules can be utilized simultaneously, resulting in higher parallelization, thus leading to a higher throughput.
Fig. 9: Flow of simulation in AccelTran.
Fig. 10: Scheduling with (a) equal priority and (b) staggered operations for BERT-Tiny's MAC and softmax (SMX) operations.
## IV Experimental Setup In this section, we present the setup behind various experiments we performed, along with the baselines considered for comparison. ### _Evaluation Models and Datasets_ To test the efficacy of our proposed dynamic inference method, DynaTran, we evaluate encoder-only models (because of their high parallelization capabilities [17]) on different tasks. We use BERT-Tiny [26] and BERT-Base [1], two commonly used pre-trained models. BERT-Tiny has two encoder layers, each with a hidden dimension \(h=128\) and two attention heads in the multi-head attention operation, as discussed in Section II-A. BERT-Base is a larger model with 12 encoder layers, each with a hidden dimension \(h=768\) and 12 attention heads. These encoder-only models can also be extended to machine translation [18] and language generation [19]. Testing these recent extensions on hardware forms part of future work. We test the two models on two _representative_ tasks, namely SST-2 [41] and SQuAD-v2 [42]. SST-2 is a popular benchmarking dataset that enables testing of model performance on sentiment analysis tasks. The dataset has 67K sequences in the training set and 872 in the validation set. The performance metric is the accuracy of correctly predicting label sentiment (positive or negative). SQuAD-v2 is a popular question-answering dataset. The training and validation sets have 130K and 12K examples, respectively. The performance metric is the F1 score [43]. While running DynaTran, we targeted both activation and weight sparsity. Weight sparsity is static and depends on pruning performed during model pre-training or fine-tuning (or even DynaTran's weight pruning, as described in Section V-A2). 
Activation sparsity changes for every input sequence and is reported as the average over the entire validation set. ### _The AccelTran Architectures_ We now present various design choices for our proposed framework. We introduce two accelerators, namely AccelTran-Edge and AccelTran-Server. The first is for mobile/edge platforms with a limited energy budget. The second is aimed at cloud/server applications where throughput may be of utmost importance. Table II shows the associated design choices. We fixed the clock rate to 700 MHz based on the delay of all modules in the proposed architecture. We set the number of multipliers \(M\) to 16. We set IL = 4 and FL = 16. As mentioned in Section V-B, the dataflow [**b,i,j,k**] is the loop-unrolling scheme of choice. We set the tile sizes across **b**, **i**, and **j** to 1, 16, and 16, respectively. For the chosen RRAM process [44] in AccelTran-Server, we implement the memory in two tiers above the main accelerator tier in order to fit it within the footprint area. However, different transformer models would generally have a unique set of hardware hyperparameters that are optimal for the given architecture. Thus, one can search for an optimal transformer-accelerator pair over a diverse set of transformer models [45] and accelerator design choices [46]. ### _Evaluation Baselines_ We compare the performance of our proposed accelerator with many previously proposed baselines. For mobile platforms, we compare the inference of BERT-Tiny on AccelTran-Edge with off-the-shelf platforms that include Raspberry Pi 4 Model-B [47], which has the Broadcom BCM2711 ARM SoC, Intel Neural Compute Stick (NCS) v2 [48] with its neural processing unit (NPU), and Apple M1 ARM SoC [49] with an 8-core CPU, an 8-core GPU, and 16 GB unified memory on an iPad (for easier evaluations, we performed experiments on a MacBook Pro laptop with the same SoC instead). For server-side platforms, we compare the inference of BERT-Base on AccelTran-Server with a modern NVIDIA A100 GPU (40GB of video RAM) and previously proposed accelerators, namely, OPTIMUS [17], SpAtten [15], and Energon [16]. We chose the maximum batch size possible for each platform, based on its memory capacity. To support inference on the Raspberry Pi, we implement the transformer models on an ARM distribution of the machine learning (ML) framework, PyTorch. We run transformer evaluation on the Intel NCS using the OpenVINO framework. Finally, for the Apple M1 SoC, we use the Tensorflow-metal plug-in to exploit the CPU and its embedded GPU. We quantize all models to FP16 before running our experiments. We normalize the throughput, energy, and chip area to 14nm FinFET technology using scaling equations [50]. We use the inverter delays for different technology nodes as proxies for throughput normalization. ## V Experimental Results In this section, we present the experimental results. ### _Dynamic Inference with the Transformer_ We first present the results of our experiments for the DynaTran method.

| Accelerator | Module | Configuration |
| --- | --- | --- |
| AccelTran-Edge | Main Memory | 1-channel LP-DDR3-1600; Bandwidth = 25.6GB/s |
| | PEs | 64 |
| | MAC Lanes | 16 per PE |
| | Softmax Modules | 4 per PE |
| | Batch Size | 4 |
| | Buffers | Activation Buffer: 4MB; Weight Buffer: 8MB; Mask Buffer: 1MB |
| AccelTran-Server | Main Memory | 2-channel Monolithic-3D RRAM; Bandwidth = 256GB/s |
| | PEs | 512 |
| | MAC Lanes | 32 per PE |
| | Softmax Modules | 32 per PE |
| | Batch Size | 32 |
| | Buffers | Activation Buffer: 32MB; Weight Buffer: 64MB; Mask Buffer: 8MB |

TABLE II: Design choices for AccelTran-Edge and AccelTran-Server. Fig. 11: Accuracy on the SST-2 task and activation sparsity with (a) pruning threshold for DynaTran and (b) pruning “\(k\)” for top-\(k\) pruning. #### Iv-B1 Comparing DynaTran with the Baseline Figs. 11 and 12 present the profiled accuracy curves for BERT-Base on the SST-2 task for DynaTran and top-\(k\) pruning techniques. In Fig. 11, we show the effect of the pruning hyperparameters on sparsity. For DynaTran, the pruning threshold (\(\tau\)) is varied from 0 to 0.1 and the activations are pruned based on the pruning threshold (see Section III-A). For top-\(k\) pruning, we change \(k\) in powers of two in order to see the effect of _net_ activation sparsity, i.e., the sparsity in all activations rather than only the attention scores. Further, we also test pre-pruned models to see the impact on net activation sparsity when weights are also pruned. For this, we use the BERT-Base model pruned using the MP algorithm [30]. Using MP results in a higher activation sparsity (since the activations formed by matrix multiplications with weights are sparser when the weights are also sparse), but at the cost of lower accuracy. As also observed in previous works [16], both DynaTran and top-\(k\) methods see an initial increase in accuracy before a drop, as the sparsity increases. This could be attributed to the over-parameterization of the BERT model [51] and the corresponding pruning method acting as a regularizer, thus giving a slightly higher validation performance. We see similar results for other models and datasets. We store geometric mean curves, like the ones presented here, in the internal register of the DynaTran module with a low memory footprint. For the required activation sparsity, or even accuracy, we obtain the corresponding pruning threshold through the threshold calculator in the DynaTran module (explained in Section III-B5) to implement the desired dynamic inference. Fig. 12 plots accuracy curves against activation sparsity for the DynaTran and top-\(k\) methods with and without MP. We obtain these curves from those in Fig. 11 by plotting accuracy against the corresponding resultant activation sparsity for every pruning threshold (\(\tau\)) or the pruning \(k\), as per the chosen method. We can see the trend of a slight increase in accuracy here as well. DynaTran achieves a higher accuracy (0.46% higher for BERT-Base without MP and 0.34% higher with MP) and a higher possible activation sparsity without much accuracy loss for both cases, i.e., with and without MP. For the same accuracy (the highest achievable by top-\(k\)), DynaTran enables 1.17\(\times\) and 1.20\(\times\) higher activation sparsity for each case, respectively. On the other hand, DynaTran can achieve up to 1.33\(\times\) (1.23\(\times\)) higher sparsity in absolute terms without MP (with MP). Here, we use \(\tau<0.1\), which yields reasonable accuracy values. We now compare the compute cost of the top-\(k\) method with that of DynaTran. Fig. 13 shows the normalized throughputs of the two methods for BERT-Tiny and BERT-Mini on two devices. These are a 2.6 GHz AMD EPYC Rome CPU with 128 cores and 768GB memory and an A100 GPU with 40GB VRAM. 
DynaTran achieves up to 96.38\(\times\) higher throughput on the GPU and up to 5.35\(\times\) higher throughput on the CPU. This is due to the use of low-overhead comparators with a pre-determined threshold. Even with the specialized top-\(k\) engine used in SpAtten [15] and the approximation scheme used in Energon [16], these methods take more than one clock cycle, whereas DynaTran uses just one clock cycle. This is because the threshold calculator only needs a simple look-up operation and the comparators can execute within a clock cycle. #### Iv-B2 Testing if Weight Pruning is Effective in DynaTran DynaTran implements magnitude-based pruning of all activations at runtime. However, we can also leverage it to prune model weights before running the transformer. We call this weight pruning (WP) since we only prune the transformer weights. In this approach, we do not need downstream training, as opposed to MP, which iteratively trains model weights while also pruning them. Fig. 14 presents the accuracies and F1-scores on the SST-2 and SQuAD datasets, respectively, with and without the use of WP. Net sparsity represents the combined sparsity of weights and activations. WP results in slightly higher net sparsity, however, with a significant loss in performance. The high ratio of activations compared to weights (see Fig. 1) results in only marginal gains in net sparsity. Hence, we do not employ WP in DynaTran. We use movement-pruned models instead, resulting in high weight and activation sparsities (with DynaTran) at negligible performance loss. Fig. 12: Accuracy on the SST-2 task with activation sparsity for DynaTran and top-\(k\) methods. The annotations correspond to the maximum achieved accuracy or activation sparsity for each case. Fig. 13: Normalized throughput of DynaTran compared with the top-\(k\) method on a CPU and a GPU. Annotations are presented over each bar. ### _Dataflows and Data Reuse_ We can pass on different tiles to available resources based on the four for-loops shown in Fig. 3. We can arrange these four for-loops in \({}^{4}P_{4}=24\) ways without changing the output. However, based on the compute resource constraints, different loop-unrolling strategies, or dataflows, can result in the reuse of local tiled weights or activations. Fig. 15 compares these dataflows for various matrix multiplication operations. The multiplication, \(\mathbf{W}\times\mathbf{A}\), is carried out using four MAC lanes in this simple example. We observe that dynamic energy is minimized by dataflows [**b,i,j,k**] and [**k,i,j,b**]. We use the former dataflow for subsequent experiments. These two dataflows also have maximum reuse instances for all three matrix multiplications. A reuse instance indicates if a weight or activation tile is reused in the internal register of a MAC lane. Many dataflows have the same energy and reuse instances due to symmetry. Since AccelTran hides data transfer overheads, due to the optimized control flow, the net latency is the same for all dataflows (this also results in the same leakage energy). Next, we test the effect of the different dataflows on real-world traces with the BERT-Tiny model on AccelTran-Edge. However, we observed negligible energy differences among the dataflows. This could be attributed to massive parallelization being at odds with data reuse. For instance, to reuse the same set of tiled weights in a PE's register, the next operation using those weights would have to be assigned to the same PE rather than exploit other free PEs, thus limiting parallelization. Hence, as per Fig. 15, the advantages of data reuse can only be exploited in highly resource-constrained accelerators.
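The 24 candidate loop orders can be compared with a small script such as the one below, where an LRU-managed register file is a crude stand-in for the MAC-lane registers and the tile counts are a toy example in the spirit of Fig. 15:

```python
from collections import OrderedDict
from itertools import permutations, product

def lru_hits(tiles, capacity):
    """Count hits in an LRU-managed register file that holds `capacity` tiles."""
    cache, hits = OrderedDict(), 0
    for t in tiles:
        if t in cache:
            hits += 1
            cache.move_to_end(t)
        else:
            cache[t] = None
            if len(cache) > capacity:
                cache.popitem(last=False)
    return hits

def reuse_score(order, extents=(2, 4, 4, 4), capacity=4):
    """Weight-tile plus activation-tile reuse for one loop order over a toy tile grid."""
    dims = dict(zip("bijk", extents))
    seq = [dict(zip(order, idx)) for idx in product(*[range(dims[d]) for d in order])]
    w_tiles = [(p["b"], p["i"], p["k"]) for p in seq]   # weight tile W[b,i,k]
    a_tiles = [(p["b"], p["k"], p["j"]) for p in seq]   # activation tile A[b,k,j]
    return lru_hits(w_tiles, capacity) + lru_hits(a_tiles, capacity)

scores = {"".join(p): reuse_score(p) for p in permutations("bijk")}
for order, score in sorted(scores.items(), key=lambda kv: -kv[1])[:4]:
    print(order, score)        # prints the loop orders with the most reuse under this toy model
```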
### _Design Space Exploration_ Fig. 16 shows a plot of the number of compute and memory stalls when evaluating BERT-Tiny with different numbers of PEs and buffer sizes. We use a 4:8:1 size ratio for the activation, weight, and mask buffers. We found this ratio to be close to optimal based on empirical studies on memory access patterns for the BERT-Tiny model. Next, we sweep the net buffer size from 10MB to 16MB. Finally, we choose the following numbers of PEs: 32, 64, 128, and 256. The figure shows that the number of compute stalls gradually increases as both the number of PEs and buffer size are reduced. We justify this as follows. A lower number of PEs results in increased compute stalls since the compute operations have to wait for resources to free up in order to execute them, limiting available parallelization. In addition, a small buffer size results in memory stalls since memory store operations have to wait for the corresponding compute operations to finish before the current activations or weights, initially required by those compute operations, can be evicted from the buffer. Fig. 16 shows the chosen point for AccelTran-Edge. This set of design choices (64 PEs and 13MB net buffer size) represents a reasonable trade-off between the number of stalls (that directly increase latency) and hardware resources (that directly increase area and power consumption). An automatic hardware-software co-design approach [52] could also _efficiently_ test different buffer sizes, along with the corresponding ratios that may be optimal for each transformer model. We defer this automated co-design method to future work.
Fig. 14: Accuracy/F1-score plotted against net sparsity on the (a) SST-2 and (b) SQuAD benchmarks. In DynaTran, WP was implemented with a fixed threshold.
Fig. 15: Comparison of energy and reuse instances for all 24 dataflows under three matrix multiplication (\(\mathbf{W}\times\mathbf{A}\)) scenarios: (a) \(\mathbf{W}\in\mathbb{R}^{4\times 64\times 64},\mathbf{A}\in\mathbb{R}^{4\times 64\times 64}\), (b) \(\mathbf{W}\in\mathbb{R}^{4\times 64\times 64},\mathbf{A}\in\mathbb{R}^{4\times 64\times 128}\), and (c) \(\mathbf{W}\in\mathbb{R}^{4\times 128\times 64},\mathbf{A}\in\mathbb{R}^{4\times 64\times 64}\). Bar plots represent dynamic energy and dashed lines represent reuse instances.
Fig. 16: Number of stalls with hardware resources.
### _Hardware Performance and Utilization_ Fig. 17 shows the power consumption and resource utilization of BERT-Tiny on AccelTran-Edge during inference of one batch. Hardware utilization remains at zero until around 51K cycles (see Fig. 17(b)) when the accelerator loads the word and position embeddings into the weight buffer (accounting for around 60% of the weight buffer). However, these load operations only occur once and subsequent transformer evaluations on different sequences reuse these embeddings. The rest of the process sees high utilization of MAC lanes or softmax modules. At certain times, the accelerator uses both MAC lanes and softmax modules due to the staggered implementation of attention head operations. The leakage power is low, as we show in Fig. 17(a), due to the power-gating of unused modules. Buffer usage drops suddenly, in Fig. 17(c), at certain instances when data are evicted in order to make space for new data for the active compute operations. 
Table III shows the hardware performance measures for the proposed accelerator architectures, namely AccelTran-Server and AccelTran-Edge, along with a low-power (LP) mode that we support for AccelTran-Edge. The LP mode only works with half of the compute hardware at any given time, resulting in lower net power draw, which is often a constraint in edge devices that rely on a battery source. We show the chip area first. AccelTran-Server is a massive chip with an area of 1950.95 mm\({}^{2}\), although still lower than that of the A100 GPU (3304 mm\({}^{2}\) normalized to a 14nm process [50]). This can reduce the yield. However, we can leverage intelligent placement of PEs and binning to improve apparent yield rates [53]. We also show the tera-operations per second (TOP/s) performance measure for both architectures. AccelTran-Server can theoretically achieve a peak performance of 372.74 TOP/s, assuming all compute modules are operational simultaneously. We also present the minimum main memory size required for each accelerator. The net size of the embeddings and weights for BERT-Base and BERT-Tiny are 3467.30MB and 52.88MB (assuming a conservative 50% weight sparsity ratio [30]), respectively. However, transformer evaluation does not require all weights at any given time. Thus, the weight buffer can be much smaller. Similarly, even though the net size of activations is much higher (see Fig. 1), we can use a much smaller activation buffer. Finally, we present the power breakdowns for both the accelerators and the LP mode for AccelTran-Edge. The LP mode reduces power consumption by 39.1%, while lowering throughput by 38.7%, for BERT-Tiny. Fig. 18 shows the area and power breakdowns for different compute modules in AccelTran-Edge. The 1024 MAC lanes only take up 19.2% of the area, while the specialized 256 softmax and 64 layer-norm modules take up 44.7% and 10.3% of the area, respectively. Pre- and post-compute sparsity modules comprise 15.1% area, while the dataflow, the DynaTran modules, and the DMA occupy 10.7% of the chip area. Fig. 17: Evaluation of BERT-Tiny on AccelTran-Edge: (a) power consumption, (b) resource utilization of compute modules, and (c) resource utilization of buffers. \begin{table} \begin{tabular}{l|c c c|c c c c} \hline \hline \multirow{2}{*}{**Accelerator/Operation**} & \multirow{2}{*}{**Area (mm\({}^{2}\))**} & \multirow{2}{*}{**TOP/s**} & \multirow{2}{*}{**Main Mem. (MB)**} & \multicolumn{3}{c}{**Power Breakdown (W)**} \\ & & & & **PEs** & **Buffers** & **Main Mem. & **Total** \\ \hline AccelTran-Server & 1950.95 & 372.74 & 3467.30 & 48.25 & 10.40 & 36.86 & 95.51 \\ AccelTran-Edge & 55.12 & 15.05 & 52.88 & 3.79 & 0.08 & 2.91 & 6.78 \\ AccelTran-Edge (LP mode) & 55.12 & 7.52 & 52.88 & 2.31 & 0.05 & 1.77 & 4.13 \\ \hline \hline \end{tabular} \end{table} TABLE III: Area, theoretical peak TOP/s, and minimum main memory requirements, along with power consumption breakdown for different parts of the proposed accelerator architectures. The LP mode for AccelTran-Edge is also considered. Fig. 18: Breakdown of (a) area and (b) power consumption by compute modules in AccelTran-Edge. Fig. 18(b) shows the average power breakdown. Since most operations in the transformer involve matrix multiplication or softmax, they also draw most of the power (39.3% for MAC lanes and 49.9% for softmax modules). The high power consumption of the softmax modules can be attributed to the calculation of the exponential sum over the entire tile in a parallel manner. 
### _Effect of Sparsity on Throughput and Energy_ Fig. 19 shows the effect of increasing sparsity on accelerator throughput and energy consumption. As the net sparsity increases from 30% to 34% for the BERT-Tiny model (with a conservative 50% weight sparsity estimate and accordingly tuned DynaTran's thresholds), throughput improves by 5% whereas energy consumption drops by 2%, when implemented on AccelTran-Edge. Here, accuracy drops by only 3% due to the low performance loss of DynaTran. ### _Performance Improvements_ Fig. 20 shows performance comparisons of AccelTran architectures with baseline platforms. For edge applications, we compare the inference of BERT-Tiny on AccelTran-Edge with that on Raspberry Pi CPU, Intel NCS NPU, M1 CPU, and M1 GPU. AccelTran-Edge achieves 330,578\(\times\) higher throughput at 93,300\(\times\) lower energy consumption relative to Raspberry Pi. On the server side, we compare the performance of BERT-Base on AccelTran-Server with that of A100 GPU and some recently proposed accelerators, namely, OPTIMUS [17], SpAtten [15], and Energon-Server [16]. The throughput and energy values for SpAtten and Energon are normalized with respect to the A100 GPU. AccelTran-Server achieves 63\(\times\) (5.73\(\times\)) higher throughput at 10,805\(\times\) (3.69\(\times\)) lower energy consumption when compared to off-the-shelf A100 GPU (state-of-the-art Energon co-processor). These gains can be attributed to the execution of the DynaTran algorithm at runtime along with sparsity-aware modules that skip ineffectual computations. The specialized softmax and layer-norm modules also speed up the respective operations, otherwise implemented as matrix multiplications in the A100. Further, monolithic-3D RRAM has much lower data-retrieval latency than HBM in the A100. These contributions enable AccelTran to achieve high throughput gains over the A100 GPU. We study the effects of these contributions next. ### _Ablation Analysis_ Table IV presents an ablation analysis for the inference of BERT-Tiny on AccelTran-Server. The first row corresponds to the selected AccelTran configuration as per Table II, with 50% weight sparsity implemented through MP and 50% activation sparsity at runtime through DynaTran. The second row corresponds to the case not leveraging DynaTran. Then, we test the accelerator when the BERT model is not weight-pruned using MP. Third, we test it without employing the pre- and post-sparsity modules to skip ineffectual MAC operations. Finally, we present results when AccelTran-Server utilizes an off-chip LP-DDR3 DRAM instead of a high bandwidth monolithic-3D RRAM. Although the use of DRAM leads to a lower net average power consumption than when monolithic-3D RRAM is used, its total energy is higher due to a much lower throughput. ## VI Discussion In this section, we discuss the implications of the proposed accelerator in the field of machine learning (ML) acceleration and future work directions. Fig. 19: Effect of sparsity on throughput and energy consumption. BERT-Tiny is simulated on AccelTran-Edge. Normalized throughput and energy are shown as bar plots on the left, and accuracy is shown as a dashed line plot on the right. 
\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{**Accelerator Configuration**} & **Throughput** & **Energy** & **Net Power** \\ & **(seq/s)** & **(mJ/seq)** & **(W)** \\ \hline **AccelTran-Server** & **172,180** & **0.1396** & 24.04 \\ w/o DynaTran & 93,333 & 0.1503 & **14.03** \\ w/o MP & 163,484 & 0.2009 & 32.85 \\ w/o Sparsity-aware modules & 90,410 & 0.2701 & 24.43 \\ w/o Monolithic-3D RRAM & 88,736 & 0.1737 & 15.42 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Ablation analysis for inference of BERT-Tiny on AccelTran-Server Fig. 20: Normalized throughput (left) and energy (right) comparison for AccelTran with baseline platforms targeted at (a) edge and (b) server applications. ### _Dynamic Inference with Transformers_ Previous works leverage complex pruning mechanisms, like top-\(k\) pruning, MP, etc. Implementing such pruning steps at runtime significantly slows down transformer evaluation. This has been a bottleneck in the widespread adoption of transformers on mobile platforms. In this work, we proposed a lightweight but powerful pruning mechanism: DynaTran. In essence, DynaTran implements magnitude-based pruning. However, we propose many novelties beyond vanilla magnitude-based pruning in terms of the algorithm and specialized hardware in order to obtain high gains relative to previous works. First, unlike previous works [55, 56], we prune not only the weights but also all the activations, which are formed at runtime. Second, we store pre-profiled curves in the internal register of the DynaTran module. The threshold calculator selects the threshold for pruning at runtime based on user-defined constraints on accuracy or throughput. This enables dynamic adjustment of the desired accuracy or throughput at runtime (see trade-off shown in Fig. 19). Third, the specialized DynaTran hardware module implements the algorithm in a single clock cycle, enabling high gains in throughput and reducing the bottlenecking effects of model pruning. Finally, DynaTran can easily incorporate any pre-processed weight pruning strategy [55, 56] into its pipeline. In our work, we show how we leverage movement-pruned models to enable higher sparsity in weights and activations. DynaTran results in better accuracy than the top-\(k\) hardware-aware pruning mechanism and significantly improves throughput. ### _ML Accelerators_ Various proposed ML accelerators target specific architectures. CNN accelerators [11, 12, 57, 58] focus on the convolution operation. Some works exploit sparsity in CNN models to reduce computation and memory footprint [12, 59, 60]. Certain works also exploit dynamism in model representation to minimize performance loss while leveraging low-bit computation. Two recent works, DUET [61] and Energon [16], employ dynamic mixed-precision computation. On the other hand, SPRING [12] implements stochastic rounding [62] with a fixed-precision format to maintain accuracy during training of CNNs. These extensions are orthogonal to the AccelTran framework and can easily be added to boost performance further. Table V compares the AccelTran framework with popular transformer accelerators. We take motivation from SPRING and reuse some hardware modules with minor changes, like the MAC lane (we add the GeLU activation), the pre-sparsity module, and the post-sparsity module.
However, we design many new modules, namely, specialized RTL modules for the softmax and layer-norm operations, a module to carry out the DynaTran operations in a single clock cycle, and a novel control block that maps the transformer computational graph to hardware-implementable tiled operations. The control block is also responsible for choosing among various dataflows, originally not supported in SPRING. Unlike SPRING, it implements smart scheduling of operations to enable higher throughput in transformer evaluations (see Section III-B8). This is especially relevant to transformers with homogeneous operations throughout the model depth. Finally, AccelTran implements a lightweight dynamic inference algorithm for transformers, which SPRING does not support. One could evaluate vision transformers (ViTs) [4] in AccelTran. However, this would require specialized hardware modules and data-processing pipelines to support image-to-sequence conversion in order to run ViT inference. AccelTran only supports model inference, and specialized modules are required to accelerate the backpropagation process in transformer training. We leave these extensions to future work. ### _Hardware-software Co-design_ In addition to leveraging sparsity in transformers, as explained in Section II-B, many more techniques have been proposed to obtain efficient transformers for pragmatic hardware implementation. These include low-bit quantization, knowledge distillation [26], approximation of the self-attention operation [31, 63], and weight pruning [29, 30, 64]. Further, researchers have proposed hardware-aware neural-architecture search to guide the exploration of efficient transformer architectures with hardware feedback [25]. However, these works are only limited to certain embedded devices [25], FPGAs [20, 21, 22], or off-the-shelf microcontrollers [65] that are far from being optimized for large and compute-heavy transformer models. Leveraging the various design decisions in the AccelTran framework can enable efficient and fast co-design of the transformer architecture and hardware accelerator. This could incorporate user-defined constraints on model accuracy and target power envelopes in diverse deployments [45]. We leave this to future work. \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline **Work** & \begin{tabular}{c} **Transformer** \\ **Acceleration** \\ \end{tabular} & \begin{tabular}{c} **ASIC-based** \\ **Acceleration** \\ \end{tabular} & \begin{tabular}{c} **Monolithic** \\ **3D-RRAM** \\ \end{tabular} & \begin{tabular}{c} **Tiled** \\ **Mat. Mult.** \\ \end{tabular} & \begin{tabular}{c} **Dataflow** \\ **Support** \\ \end{tabular} & \begin{tabular}{c} **Sparsity-aware** \\ **Inference** \\ \end{tabular} \\ \hline SPRING [12] & & ✓ & ✓ & & & ✓ & \\ \hline FTRANS [20] & ✓ & & & & & & \\ \hline FPGA Transformer [21] & ✓ & & & & & & \\ \hline A3 [14] & ✓ & ✓ & & & & ✓ & \\ \hline IMTransformer [54] & ✓ & ✓ & & & & ✓ & \\ \hline OPTIMUS [17] & ✓ & ✓ & & & ✓ & \\ \hline SpAtten [15] & ✓ & ✓ & & & ✓ & ✓ \\ \hline Energon* [16] & ✓ & & & & ✓ & ✓ \\ \hline **AccelTran (Ours)** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} TABLE V: Comparison of our proposed AccelTran framework with related works along different dimensions. *Energon is not an accelerator but a co-processor. ## VII Conclusion In this work, we presented AccelTran, a cycle-accurate accelerator simulator that efficiently runs dynamic inference with a given transformer model.
We proposed a novel, low-overhead dynamic inference scheme, DynaTran, that increases the sparsity of activations at runtime with controllable accuracy loss. DynaTran achieves higher accuracy than the state-of-the-art top-\(k\) hardware-aware pruning strategy while enabling up to 1.33\(\times\) higher sparsity. We further implement this method on two accelerator architectures: AccelTran-Edge and AccelTran-Server, specialized for mobile and cloud platforms, respectively. AccelTran-Edge achieves 330K\(\times\) higher throughput at 93K\(\times\) lower energy when compared to a Raspberry Pi device. Finally, AccelTran-Server achieves 5.73\(\times\) higher throughput and 3.69\(\times\) lower energy consumption relative to the state-of-the-art transformer co-processor, Energon. ## Acknowledgments The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing at Princeton University.
2309.04664
Compact: Approximating Complex Activation Functions for Secure Computation
Secure multi-party computation (MPC) techniques can be used to provide data privacy when users query deep neural network (DNN) models hosted on a public cloud. State-of-the-art MPC techniques can be directly leveraged for DNN models that use simple activation functions such as ReLU. However, these techniques are ineffective and/or inefficient for the complex and highly non-linear activation functions used in cutting-edge DNN models. We present Compact, which produces piece-wise polynomial approximations of complex AFs to enable their efficient use with state-of-the-art MPC techniques. Compact neither requires nor imposes any restriction on model training and results in near-identical model accuracy. To achieve this, we design Compact with input density awareness and use an application-specific simulated annealing type optimization to generate computationally more efficient approximations of complex AFs. We extensively evaluate Compact on four different machine-learning tasks with DNN architectures that use popular complex AFs silu, gelu, and mish. Our experimental results show that Compact incurs negligible accuracy loss while being 2x-5x computationally more efficient than state-of-the-art approaches for DNN models with large number of hidden layers. Our work accelerates easy adoption of MPC techniques to provide user data privacy even when the queried DNN models consist of a number of hidden layers and trained over complex AFs.
Mazharul Islam, Sunpreet S. Arora, Rahul Chatterjee, Peter Rindal, Maliheh Shirvanian
2023-09-09T02:44:41Z
http://arxiv.org/abs/2309.04664v2
# Compact: Approximating Complex Activation Functions for Secure Computation ###### Abstract. Secure multi-party computation (MPC) techniques can be used to provide data privacy when users query deep neural network (DNN) models hosted on a public cloud. State-of-the-art MPC techniques can be directly leveraged for DNN models that use simple activation functions (AFs) such as ReLU. However, DNN model architectures designed for cutting-edge applications often use complex and highly non-linear AFs. Designing efficient MPC techniques for such complex AFs is an open problem. Towards this, we propose Compact, which produces piece-wise polynomial approximations of complex AFs to enable their efficient use with state-of-the-art MPC techniques. Compact neither requires nor imposes any restriction on model training and results in near-identical model accuracy. We extensively evaluate Compact on four different machine-learning tasks with DNN architectures that use popular complex AFs SiLU, GeLU, and Mish. Our experimental results show that Compact incurs negligible accuracy loss compared to DNN-specific approaches for handling complex non-linear AFs. We also incorporate Compact in two state-of-the-art MPC libraries for privacy-preserving inference and demonstrate that Compact provides 2x-5x speedup in computation compared to the state-of-the-art approximation approach for non-linear functions -- while providing similar or better accuracy for DNN models with large number of hidden layers. + Footnote †: *Part of the work done while author was an intern at Visa Research
First, unlike simple non-linear AFs such as ReLU, the more complex and highly non-linear AFs (e.g., SiLU, GeLU, Mish) are hard to approximate accurately near the region close to zero, as illustrated in Figure 1. We argue that ensuring precise approximation in this _harder to approximate region_ is crucial to minimize inference accuracy loss. This is because the majority of the input to the complex AF also falls in this region, primarily due to a normalization step known as batch normalization (BN) applied to the inputs prior to forwarding them to these complex AFs (refer to SS 2.1 for more details about this phenomenon). Second, it is unclear how approximation error, introduced by such MPC-friendly approximations of complex AFs, affects the inference accuracy of the DNN models. There is a chance that slight errors introduced by such approximation in each layer can accumulate and consequently affect the inference accuracy of DNN models significantly. This effect may be even more pronounced for deeper and wider models with more hidden layers and neurons per layer. Lastly, the approximation approach needs to handle the trade-offs between performance overhead and inference accuracy loss carefully. Presumably, a generated MPC-friendly approximation with fewer low-degree polynomials would reduce performance overhead but might lead to significant inference accuracy loss. Conversely, an approximation with many high-degree polynomials would reduce inference accuracy loss but suffer from elevated performance overhead. Additionally, during approximation, it is essential to account for the ring size used by the MPC library to represent fixed-point (FXP) numbers, as existing MPC libraries typically perform computation over an FXP representation instead of the commonly used floating-point (FLP) numbers. While working with a smaller range and resolution in FXP enhances efficiency, this can trigger overflow or underflow during conversion from FLP to FXP, significantly compromising inference accuracy with high probability.
On the other hand, using a larger FXP range avoids such overflow/underflow issues but introduces performance overhead. We handle these challenges empirically and via careful synergy of machine learning techniques and MPC. More specifically, we tailor our approximation scheme to harmonize with the intricate architecture of the state-of-the-art DNN models while maintaining compatibility with existing general-purpose MPC libraries. As a first step, we recognize that instead of relying on cubic spline-based piece-wise interpolation as in prior work on handling complex AFs (Sandhi, 2017), Chebyshev sequence-based interpolation (Sandhi, 2017) is better suited for approximating complex AFs. This is because Chebyshev interpolation generally outperforms other techniques when the approximated functions are smooth and monotonic (Sandhi, 2017), as is the case with complex AFs. We note that NFGen (Friedman, 2017), being a contemporary work to ours, also adopts a similar insight for generating MPC-friendly approximations of non-linear functions. Nevertheless, as we describe next, our distinctive optimizations tailored for DNN models let Compact keep the inference accuracy loss negligible even as DNN models increase in depth, without sacrificing performance. Next, we observe that state-of-the-art DNN models apply normalization in between performing linear transformations and applying non-linear AFs (Figure 2). Such normalization leads to the majority of the inputs to the non-linear AFs falling into specific places near the region close to zero with high probability -- while a small portion of inputs falls into places with low probability. Thus, we hypothesize that an MPC-friendly approximation scheme for complex AFs that takes this observation into account will help mitigate the cumulative impact of errors introduced by MPC-friendly approximations from one layer to subsequent layers of a deeper/wider DNN model. Lastly, to balance performance overhead and inference accuracy loss, we devise an empirical strategy employing the simulated annealing (SA) search technique (Sandhi, 2017). This approach revolves around strategically adjusting the following three parameters: the number of polynomials (m), the degree associated with each polynomial (k), and the ring size of the FXP representation (\(\mathcal{R}\)) used for approximation. Lowering any of these parameters inherently reduces performance overhead. Leveraging this progressive relationship, we initiate the search process with an initial solution featuring high m, k, and \(\mathcal{R}\) -- resulting in pronounced performance overhead and an imbalance between performance and accuracy. To rectify this, we explore adjacent solutions through random local adjustments to mitigate performance overhead while preserving minimal accuracy loss, all within the framework of the SA search technique. Importantly, exhaustive iteration over reasonable m, k, and \(\mathcal{R}\) values would be computationally infeasible due to the time-intensive nature of evaluating the inference loss for each generated MPC-friendly approximation. Additionally, we propose a systematic way to find the appropriate approximation threshold -- an important component of approximation-based approaches -- through binary search, moving away from setting a fixed threshold as NFGen (Friedman, 2017) does (refer to SS 4.2.3). Furthermore, we introduce a DNN-specific modification (refer to SS 4.2.5) to enhance the performance efficacy of Compact.
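To make the FXP overflow/underflow concern above concrete, the following minimal sketch encodes floating-point values into a ring of size \(2^{n}\) with \(d\) fractional bits and decodes them back; the particular \(n\) and \(d\) are illustrative assumptions, not the values Compact selects.

```python
# Minimal sketch of float -> fixed-point (FXP) encoding in a ring of size 2**n
# with d fractional bits, and back. The chosen n and d are illustrative only.
# Values outside the representable range wrap around silently, which is the
# overflow/underflow hazard discussed above.
def to_fxp(x: float, n: int = 32, d: int = 16) -> int:
    scaled = int(round(x * (1 << d)))      # keep d bits of fractional precision
    return scaled % (1 << n)               # reduce into the ring Z_{2^n}

def from_fxp(v: int, n: int = 32, d: int = 16) -> float:
    if v >= 1 << (n - 1):                  # interpret the upper half as negatives
        v -= 1 << n
    return v / (1 << d)

if __name__ == "__main__":
    print(from_fxp(to_fxp(3.14159)))       # ~3.14159, tiny rounding error
    print(from_fxp(to_fxp(-0.5)))          # -0.5, negatives round-trip correctly
    print(from_fxp(to_fxp(70000.0)))       # out of range for n=32, d=16: wraps, silently wrong
```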
We implement Compact and incorporate its generated MPC-friendly approximation of complex AFs to two state-of-the-art secure inference MPC libraries ABY\({}^{3}\)(Sandhi, 2017) and CryptFlow2 (Sandhi, 2017). Our experiments reveal that Compact generated MPC-friendly approximation of complex AFs shows negligible inference accuracy loss while being computationally efficient for four image classification tasks of various complexities. We are in the process of open-sourcing Compact with the next version of the paper. **Summary.** Our contributions, in summary, are as follows: * We present Compact, that can generate MPC-friendly piece-wise polynomial approximations for popular complex non-linear AFs. The generated approximation is generic and can be easily incorporated into state-of-the-art multi-party computation scenarios (SS 4.1). * The approximation technique used in our method is input density aware and accurately approximates regions with high input probability density while coarsely estimating regions with low input probability density (SS 4.2). * We empirically handle the innate challenge to find an approximation that does not degrade inference accuracy and introduce much performance overhead using an application-specific design of simulated annealing (SA) searching technique (SS 4.3). * Extensive experiments using four different state-of-the-art DNN models on diverse classification tasks demonstrate that Compact and NFGen incur negligible accuracy loss compared to existing other DNN-specific approaches (Sandhi, 2017; Sandhi, 2017; Sandhi, 2017) (SS 5.3). We further compare the performance overhead of Compact and NFGen by incorporating their generated approximation to two state-of-the-art MPC libraries. We find that our DNN model-specific optimizations make Compact 2\(\times\)-5\(\times\) computationally efficient than NFGen (Hendrycks et al., 2015) -- for DNN models having a high number of hidden layers, all while maintaining negligible accuracy degradation (SS 5.4). ## 2. Background and Related Work This section summarizes relevant background and prior work from deep neural networks (SS 2.1) and cryptographic techniques developed for solving the secure inference problem (SS 2.2). ### Deep Neural Network Preliminaries **Activation Functions (AFs).** AFs are used for adding non-linearity to the learning process and play a major role in enhancing the training capabilities and accuracy of the DNN models. Many contemporary models use ReLU AF, which makes a hard gating decision based on the input sign (Figure 1). Despite being theoretically simple, ReLU provides remarkably faster convergence and performance in practice (Srivastava et al., 2015; Sutskever et al., 2015). However, ReLU outputs a value of zero whenever the input is negative, and as such, the neural network loses a certain amount of valid information as soon as inputs become negative. This drawback prompted ML communities to develop complex AFs, overcoming the limitations of ReLU. **Complex AFs.** In recent years, a range of complex AFs, such as SiLU (Hendrycks et al., 2015), GeLU (Gulter et al., 2016), and Mish (Mish et al., 2017), has emerged surpassing the performance of ReLU in state-of-the-art DNN models applied across computer vision, natural language processing, and reinforcement learning applications. These AFs as shown in Figure 1, are smooth and continuous, can handle small weight changes, and aid in effectively regularizing DNN models. For example, Hendrycks et al. 
empirically illustrated the robustness of GeLU-trained DNN models against noisy inputs, often surpassing the accuracy of ReLU-trained models (Gulter et al., 2016). Ramachandran et al. used automatic search techniques to uncover SiLU (also called Swish). This complex AF improved the image classification accuracy of the Inception-ResNet-v2 model by 0.6% and of the Mobile NASNet-A model by 0.9% (Rahdan et al., 2017), simply by substituting it for ReLU. Misra et al. proposed the self-regularized AF Mish that exhibits superior performance compared to other AFs for YOLOv4 and ResNet models (Mish et al., 2017). Hence, complex AFs offer a compelling advantage in building better-performing models in terms of convergence and classification accuracy when compared to ReLU. Unfortunately, unlike ReLU, which is relatively easy to compute for secure evaluation, these complex AFs exhibit a higher degree of non-linearity, as shown in Figure 1. This makes their use with existing MPC techniques challenging. In this work, we address this limitation by designing an MPC-friendly version of these three complex AFs. We refer interested readers to Appendix D for additional details on other complex AFs used in neural networks that lie outside the scope of this work. **Batch normalization.** Batch normalization (BN) is used to address the _internal covariate shift_ problem in neural networks -- which happens when a layer's input distribution changes abruptly due to its dependency on previous layers (Srivastava et al., 2015). BN lends stability to the training process by reducing dependence on initial parameter selection and requiring a lower learning rate and number of epochs. BN is performed on the outputs of the linear transformations, and normalized outputs are forwarded to non-linear AFs. Thus, non-linear AFs receive normalized inputs. Figure 2 illustrates the BN process for the \(\ell^{\text{th}}\) layer, where the input to the linear operations is \(h^{\ell}\) and the output is \(\mathbf{a}^{\ell}=w^{\ell T}h^{\ell}\). Assume \(\mathbf{a}^{\ell}=\left(a_{1}^{\ell},a_{2}^{\ell},\cdots,a_{d}^{\ell}\right)\) is \(d\)-dimensional. If the population mean and variance are \(\mathbb{E}[\mathbf{a}^{\ell}],\operatorname{Var}[\mathbf{a}^{\ell}]\) respectively, then \(\mathbf{a}^{\ell}\) is normalized to \(\overline{\mathbf{a}}^{\ell}\) using Eq. 1 such that the probability distribution of \(\overline{a}^{\ell}\) follows a normal distribution with mean zero and variance one. \[\overline{\mathbf{a}}^{\ell}_{k}=(\mathbf{a}^{\ell}_{k}-\mathbb{E}[\mathbf{a }^{\ell}_{k}])/\sqrt{\operatorname{Var}[\mathbf{a}^{\ell}_{k}]} \tag{1}\] BN is widely used in state-of-the-art DNN models to calibrate the input to the non-linear AFs during both the training and inference phases. This makes it a good estimator of the input density to complex AFs in DNN models during inference. Our scheme leverages this estimation to improve the generated MPC-friendly approximation. ### Secure Inference for DNN models State-of-the-art MPC techniques enable computation on encrypted data and have been used to address the secure inference problem. Generally, a client encrypts their input and sends the encrypted input to a cloud service. The cloud service performs inference using trained DNN models over the encrypted input. Typically, MPC techniques are optimized for linear transformations (e.g., addition, vector multiplications, etc.). Therefore, computing non-linear operations involved in secure inference (e.g., non-linear AFs) is one of the main challenges.
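To see why the linear part is comparatively cheap, consider a minimal additive secret-sharing sketch: additions can be applied share-wise with no communication, whereas a non-linear AF cannot be evaluated on each share independently. The ring size and number of shares below are illustrative, and real frameworks such as ABY\({}^{3}\) use richer (replicated) sharing schemes and protocols.

```python
# Minimal additive secret-sharing sketch over Z_{2^64} (parameters illustrative;
# frameworks such as ABY3 use replicated sharing and richer protocols). Each
# server can add its shares locally, which is why linear layers are cheap; a
# non-linear AF cannot be applied to the shares independently.
import secrets

MOD = 1 << 64

def share(x: int, n: int = 3) -> list[int]:
    s = [secrets.randbelow(MOD) for _ in range(n - 1)]
    s.append((x - sum(s)) % MOD)                     # shares sum to x modulo 2^64
    return s

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MOD

if __name__ == "__main__":
    a, b = share(20), share(22)
    c = [(sa + sb) % MOD for sa, sb in zip(a, b)]    # share-wise local addition
    assert reconstruct(c) == 42                      # reconstructs 20 + 22
```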
**ReLU-specific secure inference.** Given that ReLU is one of the most popular AFs used in practical deployments of DNNs, recent research has mostly focused on the use of ReLU in MPC paradigms (Rahdan et al., 2017; Rahdan et al., 2017; Rahdan et al., 2017; Rahdan et al., 2017). For example, Rathee et al. propose a novel 2PC protocol for secure comparison and division for efficient evaluation of ReLU in semi-honest settings (Rahdan et al., 2017). Follow-up works extend this protocol to the malicious client threat model (Rahdan et al., 2017; Rahdan et al., 2017). However, the ReLU-specific optimizations proposed in the aforementioned methods do not generalize to other complex AFs. Another set of methods uses Garbled Circuits (GC) for secure evaluation of ReLU (Rahdan et al., 2017; Rahdan et al., 2017; Rahdan et al., 2017; Rahdan et al., 2017). However, communication overhead limits their applicability to shallow DNN models (less than seven layers). It is challenging to generalize these methods to wide DNN models that use complex AFs other than ReLU for secure inference. A different approach for computing non-linear AFs efficiently in the encrypted domain is by restricting the way DNN models are trained. For example, Riazi et al. (Riazi et al., 2018) leverage a GC-based protocol for secure inference on binary neural networks (BNN). However, retraining proprietary models with these restrictions is costly and oftentimes practically infeasible. Imposing such limitations on the training process can also impact the performance of DNNs in practice. Pereteanu et al. (Pereteanu et al., 2017) introduce the notion of partially private DNN models such that the middle part of the model is sent in plaintext to clients to reduce communication overhead. However, in practice, cloud service providers would want to keep their full DNN model secret, as revealing any part of the proprietary model leaks sensitive information, resulting in business consequences. In summary, a plethora of research work has focused on secure inference for ReLU-based DNNs. Our work focuses on novel AFs that have been shown to outperform ReLU and are getting traction in the ML community. We refer the readers to (Wang et al., 2019) for a recent comprehensive survey. **Secure inference for other non-linear AFs.** A common approach for secure inference involving non-linear AFs is by approximating them with low-degree polynomials. These polynomials are easy to compute for MPC frameworks and thus are MPC-friendly. The challenge is not to degrade the inference accuracy, as the approximation error can cause incorrect results. Delphi (Dolphin et al., 2019), for example, runs a planner that balances which AF can be replaced with low-degree polynomials without introducing too many inaccuracies while achieving a significant communication benefit. CryptoNets (Kumar et al., 2019), CryptoDL (Kumar et al., 2019), and MiniONN (Kumar et al., 2019) also use similar ideas for approximating non-linear AFs. However, they are application-specific, and switching to another application degrades accuracy significantly (Kumar et al., 2019) due to small errors propagating and resulting in numeric instability. In addition, MiniONN (Kumar et al., 2019) is heavily focused on the sigmoid AF -- which is essential for logistic regression models. However, as we will show in SS 5.3, using their recipe of piece-wise polynomial approximation drastically decreases the inference accuracy when we use their approach for approximating the complex AFs that we focus on in this work.
Recently, Fan et al. proposed NFGen (Fan et al., 2019), a technique capable of converting popular non-linear functions into MPC-friendly ones. One may also choose to use this approximation-based approach to do the same for complex non-linear AFs. In fact, NFGen is the closest related work to ours as we also follow a similar approach -- generating MPC-friendly approximations of the complex AFs using a number of piece-wise polynomials. However, NFGen is not specifically customized for the widely used complex AFs inside DNN models. The absence of such customized techniques makes NFGen computationally less efficient when we compare it with our scheme through extensive experiments (SS 5.4). ## 3. Problem Overview ### Problem Formulation and Scenario Setup **Problem formulation.** We refer to the server holding the DNN model by \(\mathcal{S}_{\text{owner}}\). The DNN model consists of \(L\) layers, each first applying linear transformations and then a non-linear complex activation function (AF) \(F_{\text{act}}\). In between the linear operations and the non-linear complex AF, batch normalization is also present. We assume a machine learning as a service (MLaaS) (Beng et al., 2019) inspired scenario where the weights \(\mathbf{W}\) of all layers have already been trained, and the trained model is being used to provide cloud-based inference \(z\) over client (\(\mathcal{C}\)) uploaded input \(\mathbf{X}\) using Eq. 2. \[z\coloneqq F_{\text{act}}(\mathbf{W}_{L}\cdot\cdots F_{\text{act}}(\mathbf{W} _{2}\cdot F_{\text{act}}(\mathbf{W}_{1}\cdot\mathbf{X}))) \tag{2}\] The problem secure inference tackles is how to compute Eq. 2 _obliviously_ to satisfy the privacy needs of both \(\mathcal{C}\) and \(\mathcal{S}_{\text{owner}}\). This requires designing a system such that \(\mathcal{C}\) knows nothing about the model weights \(\mathcal{W}=[\mathbf{W}_{1},\mathbf{W}_{2},\cdots,\mathbf{W}_{L}]\) of the DNN model and \(\mathcal{S}_{\text{owner}}\) learns nothing about \(\mathbf{X}\); yet \(\mathcal{C}\) can get the inference result \(z\) from the received SS. Moreover, we need to achieve this privacy requirement without significantly degrading inference accuracy or incurring significant performance overhead. **Scenario Setup.** MPC techniques provide a generic solution to solve this problem using the \(N\) _server scenario_. In this scenario, the DNN model owner \(\mathcal{S}_{\text{owner}}\) and the client \(\mathcal{C}\) use a set of \(N\) non-colluding servers to achieve their privacy requirements. Figure 1. Complex activation functions (AFs) considered in our work \(f(x)\in\{\text{SiLU},\text{GeLU},\text{Mish}\}\) and their second derivatives \(f^{\prime\prime}(x)\). While SiLU, GeLU, Mish can be accurately approximated in domain regions where \(f^{\prime\prime}(x)=0\), accurate approximation is difficult in domain regions close to zero where \(f^{\prime\prime}(x)>0\). This is problematic for DNN models that leverage these AFs since the majority of the normalized input to AFs falls in domain regions that are inherently difficult to accurately approximate (Figure 2). In contrast, the second derivative of ReLU\((x)\) is 0. This makes it easy to approximate the ReLU AF using only two piecewise polynomials \(\{f_{1},f_{2}\}\) -- one when \(x\leq 0\) via \(f_{1}(x)=0\) and another when \(x>0\) via \(f_{2}(x)=x\). Figure 2. The outputs of the linear operations (\(\mathbf{a}^{\ell}\)) are normalized to \(\overline{\mathbf{a}}^{\ell}\) using Eq. 1 before they are forwarded for applying non-linear operations involving complex activation functions (AFs).
\(\mathcal{S}_{\text{owner}}\) generates \(N\) secret shares (SS) of the model weights (i.e., \(\mathcal{W}=\llbracket\mathcal{W}_{1},\mathcal{W}_{2},\cdots\mathcal{W}_{N} \rrbracket\)), and \(\mathcal{C}\) does the same for its private input (i.e., \(\mathbf{X}=\llbracket\mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{N} \rrbracket\)). Then, these \(N\) SS are distributed by \(\mathcal{S}_{\text{owner}}\) and \(\mathcal{C}\) to the \(N\) servers. The servers then perform the secure inference, computing Eq. 2 securely on their shares of the information, and return the results \(\llbracket z_{1},z_{2},\cdots,z_{N}\rrbracket\) to \(\mathcal{C}\), who can combine the shares to construct \(z\). We summarize the power of the adversary \(\mathcal{A}\) for the \(N\) server scenario typically assumed by MPC protocols in SS 3.3. Following other works, we focus on running experiments for the two most common scenarios: 2PC (\(N=2\)) (Gilton et al., 2016; Gilton et al., 2017; Gilton et al., 2018) and 3PC (\(N=3\)) (Gilton et al., 2017; Gilton et al., 2018; Gilton et al., 2018) -- although our scheme does not necessarily depend on the specifics of \(N\). We pictorially present a 3PC scenario in Figure 3. One of the motivating realizations of this scenario can be in the medical domain. In particular, consider a DNN model that has been trained by a trusted organization (e.g., NIH1) leveraging substantial computational resources and exclusive access to users' private health records. To preserve the privacy of the proprietary DNN model, NIH can generate SS of the model and distribute them across \(N\) different non-colluding servers, possibly hosted by different hospitals. When patients submit their private health data, they can generate \(N\) SS and share them with the \(N\) different hospitals. In this way, the patient learns the final result without learning anything about the model or revealing their private information to any hospital. Footnote 1: National Institutes of Health at: [https://www.nih.gov/](https://www.nih.gov/) **Difficulty in computing non-linear AFs.** A major bottleneck is securely computing \(F_{\text{act}}(x)\) shown in Eq. 2. This is because \(F_{\text{act}}(x)\) is non-linear, which consumes most of the communication and latency costs of the overall protocol execution, as illustrated by much prior work (e.g., Rathee et al., c.f., (Rathee et al., 2019) Table 6). Linear operations (i.e., vector multiplication) are relatively less expensive. ### Design Goals While designing Compact, we want to ensure DNN model designers are not restricted to the set of AFs and model architectures that MPC platforms support. We distill four criteria for this and show how prior work on secure inference, mainly from 2019 onwards, fails to satisfy one or more of these design goals in Table 1. **Support complex AF.** We want our scheme to be compatible with the majority of the DNN models used by inference services. Therefore, in this work, we do not use ReLU-specific optimizations. A large amount of prior work is devoted to optimizing ReLU and fails to satisfy this design goal (Gilton et al., 2017; Gilton et al., 2017; Gilton et al., 2018; Gilton et al., 2018). A few works rely on garbled circuits (GC) to evaluate AFs, but experimental evaluations are limited to the ReLU AF (Gilton et al., 2017; Gilton et al., 2018; Gilton et al., 2018; Gilton et al., 2018)2. Therefore, it is unclear if these GC-based protocols can generalize to other complex AFs such as SiLU, GeLU, and Mish.
We marked them as _unclear_ in the first column of Table 1. Footnote 2: Although Delphi uses GC to evaluate non-linear layers, the MPC friendly **Supports large number of hidden layers.** The error introduced due to replacing \(F_{\text{act}}\) with its MPC-friendly approximation \(\widetilde{F}_{\text{act}}\) in Eq. 2 can accumulate and possibly lead to a significant loss in accuracy as the number of hidden layers increases. Unfortunately, few prior works (Gilton et al., 2017; Gilton et al., 2018; Gilton et al., 2018) that support complex AFs show significant accuracy loss for DNN models with a high number of hidden layers. NFGen, however, does not exhibit this accuracy loss as the number of hidden layers increases, but this negligible accuracy loss comes at the cost of paying high-performance overhead. We want our scheme to endure such accuracy loss as the number of hidden layers increases without increasing the performance overhead significantly. **Compatible with MPC libraries.** The secure inference procedure we develop should not only support a wide variety of AFs but \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & \begin{tabular}{c} Supp. \\ cmpl. AF \\ \end{tabular} & \begin{tabular}{c} Supp. \\ many HL. \\ \end{tabular} & \begin{tabular}{c} Comp. \\ w/ MPC. libs \\ \end{tabular} & \begin{tabular}{c} Supp. any \\ training proc. \\ \end{tabular} \\ \hline \hline CryptFlow2 (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ SIMC (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ Cheeta (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ Delphi (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ XONN (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ SHENN (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ MNE (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ SecureNN (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ FALCON (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ NFGen (Gilton et al., 2017) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ \hline \hline \end{tabular} [FOOTNOTE:2]Footnote 2: Refer to § 3.2 for more details about the design goals indicated by columns \(\ also should be easy to implement. Implementations that require new cryptographic primitives for secure inference will be hard and slow to deploy. Therefore, in this work, we aim to design a scheme that is generic to MPC library currently in use. To realize our solution, the MPC library only needs to support addition, multiplication, and comparison operations. This would also allow seamless transitioning from inference service using ReLU based DNN models to complex AF-based DNNs. Except NFGen, other prior works do not satisfy this design goal. **No restriction on training.** To handle accuracy loss with an increasing number of hidden layers, a few prior works change how DNN models are traditionally trained. For example, XONN requires restricting the weights of the DNN model to binary values (i.e., \(\pm 1\)); similarly, Delphi replaces certain AFs (i.e., ReLU) with a square function during training. 
We believe this type of restriction poses additional constraints as already trained DNN models are most likely trained traditionally without these restrictions, and further attempts to adjust the weights of the already trained DNN models (e.g., fine-tuning the trained DNN model by applying these restrictions) to comply with these protocols would be expensive. Therefore, we aim to design Compact without any restriction on the training process of the DNN models. In summary, recent proposals in the secure inference literature are making great strides toward realizing secure inference; but they do not focus on satisfying the above-mentioned generalizability, deployability, and scalability aspects important for realizing secure inference in the real world. We aim to bridge this gap via our designed scheme Compact. ### Threat Model We assume a general setup of an MPC scheme and henceforth inherit its security requirements. More specifically, in MPC schemes, the adversary \(\mathcal{A}\) is parameterized by four dimensions (Zhou et al., 2017). They are i) corruption strategy (static, adaptive, proactive), ii) type of \(\mathcal{A}\) in terms of how they follow the protocol (semi-honest, malicious, covert), iii) corruption ability \(\mathcal{A}\) has (honest, dishonest majority), and iv) power of \(\mathcal{A}\) (information-theoretically or computationally secure). Our techniques do not apply any restriction on how the adversary \(\mathcal{A}\) is modeled by the MPC scheme. In our experiments, we use MPC schemes that are secure against honest-but-curious adversaries. ## 4. Compact: Overview and Design In this section, we first give an overview of our scheme in SS 4.1. Then, we detail our scheme gradually in SS 4.2 and SS 4.3. We sketch our scheme in Figure 4 with a summary of notations used in Table 2. ### Overview of Compact **Piece-wise polynomial approximation approach.** Our scheme Compact follows the idea of approximating a complex activation function (AF) using a number of piece-wise polynomials. First, we observe that complex AFs can be approximated easily using linear functions outside a certain range. (Fan et al. made similar observations for sigmoid (Fan et al., 2016).) Therefore, we only need to focus on a small range of \(x\) values, say \([s,e]\), which requires approximation. We will approximate \(F_{\text{act}}\) using a piece-wise polynomial function, with m pieces \([f_{1},f_{2},\cdots,f_{m}]\) defined as follows: \[\widehat{F}_{\text{act}}(x)=\sum_{i=0}^{m+1}l_{i}(x)\cdot f_{i}(x) \tag{3}\] where \(l_{i}(x)=1\) if \(x\in(x_{i-1},x_{i}]\), and \(0\) otherwise, for all \(i\in\{0,1,\ldots,m\}\), \(x_{-1}=-\infty\), \(x_{0}=s\), \(x_{m}=e\), and \(l_{m+1}(x)=1\) if \(x>e\), and \(0\) otherwise. The functions \(l_{i}\) define the pieces, and the functions \(f_{i}\) define the polynomials. We impose an additional constraint that all polynomials must be of degree k or less, \(\forall\)i Deg(\(f_{i}\)) \(\leq\) k, as shown in Eq. 4 \[f_{i}(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots+a_{k}x^{k} \tag{4}\] The above-mentioned approach is not specific towards ReLU and can approximate complex AFs (e.g., SiLU, GeLU, Mish) -- satisfying the _support complex AF_ design goal described in SS 3.2. Furthermore, Eqs. 3 and 4 comprise three math operations, ADD, MUL, and COMP, and the majority of MPC libraries support these three operations -- making Compact generic to the MPC library being used and thus satisfying the _compatible with MPC libraries_ design goal. However, generally, approximation-based approaches tend to be inaccurate (Zhou et al., 2017); we return to this concern right after the short evaluation sketch below.
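As a reference point, the sketch below (with placeholder breakpoints and coefficients, not a Compact-generated approximation) shows how an approximation of the form of Eqs. 3 and 4 is evaluated using only additions, multiplications, and comparisons.

```python
# Sketch of evaluating Eq. 3 with pieces of the form of Eq. 4, using only the
# three operations the MPC back end must support: ADD, MUL, and COMP. The
# breakpoints and coefficients below are placeholders, not a real
# Compact-generated approximation.
def eval_piecewise(x: float, breakpoints: list[float], coeffs: list[list[float]]) -> float:
    total = 0.0
    for i, poly in enumerate(coeffs):
        lo, hi = breakpoints[i], breakpoints[i + 1]
        indicator = 1.0 if lo < x <= hi else 0.0   # l_i(x), computed via comparisons
        acc = 0.0
        for a in reversed(poly):                   # Horner evaluation of f_i(x)
            acc = acc * x + a
        total += indicator * acc                   # Eq. 3: sum of masked pieces
    return total

if __name__ == "__main__":
    # Toy two-piece "approximation" of ReLU: f_1(x) = 0 on (-inf, 0], f_2(x) = x on (0, inf).
    bps = [float("-inf"), 0.0, float("inf")]
    polys = [[0.0], [0.0, 1.0]]                    # coefficients a_0, a_1, ...
    print(eval_piecewise(-2.0, bps, polys), eval_piecewise(3.0, bps, polys))  # 0.0 3.0
```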
Thus, maintaining negligible accuracy loss with increasing hidden layers (the _supports large number of hidden layers_ design goal) and imposing no restriction on training (the _no restriction on training_ design goal) become challenging. We address this challenge by developing the techniques described next. ### Generating Accurate Approximations To approximate \(F_{\text{act}}\) for a given region \([s,e]\) using m piece-wise polynomials with degree at most k while incurring negligible accuracy loss, we use an opportunistic approach GenAccurateApprox as shown in Figure 4. Note that our approach, as mentioned earlier, shares similarities with NFGen (Krishan et al., 2017) on the interpolation aspect only (described in SS 4.2.6 for completeness). Other techniques, described next, are tailored to the complex AFs used in DNN models and unique to Compact. This makes Compact computationally more efficient than NFGen as the number of hidden layers increases (experimentally illustrated in SS 5.4). We now describe these techniques employed by Compact in detail as follows. \begin{table} \begin{tabular}{l l} \hline \hline Symbol & Description of the symbol \\ \hline \(F_{\text{act}}(x)\) & complex activation function. \\ \(\widehat{F}_{\text{act}}(x)\) & MPC-friendly piece-wise polynomial approximation of \(F_{\text{act}}(x)\). \\ \(\mathcal{E}\) & distance metric to estimate the approximation \\ & error between \(F_{\text{act}}(x)\) and \(\widehat{F}_{\text{act}}(x)\) \\ \(\delta\) & maximum threshold for approximation error. \\ m & \# of piece-wise polynomials used for approximation. \\ k & maximum degree of each of the m piece-wise polynomials. \\ \(\mathcal{R}_{(n,d)}\) & ring of size \(n\) used in the MPC library, with the last \(d\) \\ & bits representing the fractional part. \\ \(f_{i}(x)\) & single polynomial approximating when \(x\in[x_{i},x_{i+1}]\) \\ \([\alpha,\beta]\) & continuous closed interval between \(s\) and \(e\) \\ \([s,e]\) & the interval over which we are trying to approximate \\ P(\(x\)) & probability distribution of the input \(x\) to the activation function. \\ \hline \hline \end{tabular} \end{table} Table 2. Common notations used in this paper. #### 4.2.1. Computing \(P(x)\) We aim to design approximate polynomials that are close to accurate on likely values of \(x\), meaning higher probability according to \(P(x)\), while they may have higher error on values of \(x\) that are less likely. A challenge, however, is how to estimate the distribution \(P\). Interestingly, in DNN models, the inputs to an AF are first batch normalized (BN) using Eq. 1, to help the network converge faster during training (discussed earlier in SS 2.1). Therefore, the set of values the AF is computed on is distributed (approximately) as a Gaussian distribution with zero mean and unit standard deviation. The approximating piecewise polynomial, therefore, should ensure low error on highly likely inputs, whereas on low-probability inputs, it may make a higher error. Our key insight is that \(P(x)\) can guide us to focus on approximating those regions more accurately where \(P(x)\) is high. In contrast, we can get away with a less accurate approximation where \(P(x)\) is close to zero without degrading the accuracy significantly. Moreover, due to BN applied on DNN layers prior to applying the AFs, we estimate \(P(x)\) using a standard normal distribution \(\mathcal{N}(0,1)\). However, incorporating \(P(x)\) into the approximation procedure is not straightforward, and we use a customized function to compute the approximation error that takes into account \(P(x)\), as we describe next.
We remark one caveat of this design choice -- Compact becomes reliant on BN (discussed further in SS 6). #### 4.2.2. Designing \(\mathcal{E}\) To incorporate \(P(x)\) to the approximation procedure, we customize an approximation error which we refer to as _weighted mean approximation error_ denoted by \(\mathcal{E}_{\text{Mean}}\). \[\mathcal{E}_{\text{Mean}}(P,\widehat{F}_{\text{act}},\alpha, \beta)=\\ \frac{1}{(\beta-\alpha)}\int\limits_{\alpha}^{\beta}P(x)\cdot|F_{ \text{act}}(x)-\widehat{F}_{\text{act}}(x)|dx \tag{5}\] As shown in Eq. 5, in addition to considering how accurately \(\widehat{F}_{\text{act}}\) estimates \(F_{\text{act}}(x)\) for a given region between \(s\) and \(c\), \(\mathcal{E}_{\text{Mean}}\) also takes \(P(x)\) into account. Prior work NFGen(Krishan et al., 2017) uses _max approximation error_, which we denote by \(\mathcal{E}_{\text{Max}}\) -- as a way to design \(\mathcal{E}\). \[\mathcal{E}_{\text{Max}}(\widehat{F}_{\text{act}},\alpha,\beta)=\max_{x\in[ \alpha,\beta]}|F_{\text{act}}(x)-\widehat{F}_{\text{act}}(x)|\] We choose to use \(\mathcal{E}_{\text{Mean}}\) over \(\mathcal{E}_{\text{Max}}\) as it is easy to guide the approximation process via \(P(x)\) using \(\mathcal{E}_{\text{Mean}}\). #### 4.2.3. Selecting a threshold \(\delta\) for approximation error \(\mathcal{E}_{\text{Mean}}\) A straightforward ad hoc way to ensure the accurateness of the approximation is to set a fixed approximation error threshold (\(\delta\)) and Figure 4. (Right-top) FindBestPiecePoly procedure to find an MPC-friendly approximation \(\widehat{F}_{\text{act}}\) of the complex activation function (AF) \(F_{\text{act}}\). The procedure balances the trade-off between inference accuracy loss and performance overhead using an application-specific optimization approach (simulated annealing). It uses two sub-procedures--GenerateNeighbour to generate a random neighbor \(\theta^{\prime}\) from a given \(\theta\) (shown Right-bottom) and GenAccurateApprox to approximate the region \([s-e]\) accurately using a set of at most \(m\) polynomials (shown Left) with degree \(\leq k\). consider an approximation accurate if approximation error calculated via \(\mathcal{E}\) is \(\leq\delta\). NFGen also follows this ad hoc approach and sets \(\delta=10^{-3}\). Via empirical experimentation, they observed that if \(\mathcal{E}_{\mathrm{MAX}}\leq 10^{-3}\), then the generated approximation, when used in logistic regression and \(\chi^{2}\) testing, does not degrade accuracy without adding much performance overhead. We refrain from setting a fixed \(\delta\) for Compact as the appropriate \(\delta\) may vary well from one DNN model/dataset to another. Also, Compact should systematically find the appropriate \(\delta\), relieving the practitioners of the additional burden of finding an appropriate \(\delta\) on their own. Thus, Compact discovers an appropriate \(\delta\) by performing a binary search (BS) over \(\delta\) and finding the highest \(\delta\) such that the approximation corresponding to \(\delta\) incurs a negligible inference accuracy loss. This is sound due to the monotonic relationship between approximation error and inference accuracy. Lastly, one challenge of this approach is checking if the inference accuracy loss is negligible or not -- at each step of BS. We describe a solution to this challenge next. #### 4.2.4. 
Measuring accuracy loss One of the crucial components of GenAccurateApprox for finding the appropriate \(\delta\) is to tell whether the generated \(\widetilde{F}_{\mathrm{act}}\) renders a negligible accuracy loss. Unfortunately, it is difficult to tackle this challenge analytically. We attempt to handle this challenge empirically by relying on the well-known _closed-world_ assumption used in machine learning -- that is, if something performs well on the training/validation dataset, it will perform equally well on the test data. More specifically, we replace the original \(F_{\mathrm{act}}\) with the generated MPC-friendly approximation \(\widetilde{F}_{\mathrm{act}}\) and calculate the inference accuracy over the training dataset. We call this inference accuracy \(\eta_{2}\) and compare it with the plaintext inference accuracy \(\eta_{1}\), which uses the original \(F_{\mathrm{act}}\) over the same training dataset. If \((\eta_{1}-\eta_{2})/\eta_{1}\leq v\), we consider \(\widetilde{F}_{\mathrm{act}}\) to be accurate enough, where \(v\) is a small value representing the accuracy loss the practitioner can tolerate. #### 4.2.5. Designing \(\widetilde{F}_{\mathrm{act}}^{\mathrm{red}}\) We also added another DNN model-specific optional optimization. Instead of approximating the original \(F_{\mathrm{act}}(x)\), we manually introduce a crude MPC-friendly approximation3 of \(F_{\mathrm{act}}(x)\), which we call \(\widetilde{F}_{\mathrm{act}}^{\mathrm{red}}\). Then, we set out to approximate \(F_{\mathrm{act}}(x)-\widetilde{F}_{\mathrm{act}}^{\mathrm{red}}(x)\) using GenAccurateApprox instead. The final approximation of an AF would be \(\widetilde{F}_{\mathrm{act}}^{\mathrm{red}}(x)+\widetilde{F}_{\mathrm{act}}(x)\). Note that \(\widetilde{F}_{\mathrm{act}}^{\mathrm{red}}\) is designed to be simple and linear, making it easy to use with standard MPC libraries. We found this approach significantly improves the approximation procedure GenAccurateApprox. Footnote 3: For simplicity this is not shown in Figure 4. For the SiLU AF, since SiLU\((x)=x\cdot\mathrm{sigmoid}(x)\), we can simply borrow the structure of the MPC-friendly approximation for the sigmoid, \[\mathrm{sigmoid}(x)\approx\max(0,\min(x+0.5,1)).\] We tweak it slightly to be more precise and multiply it by \(x\) to get \(\widetilde{F}_{\mathrm{silu}}^{\mathrm{red}}\) as shown in Eq. 6. \[\widetilde{F}_{\mathrm{silu}}^{\mathrm{red}}(x)=x\cdot\max\left(0,\min(6x+0.5,1)\right) \tag{6}\] For the GeLU AF, since GeLU\((x)\approx x\cdot\mathrm{sigmoid}(1.702x)\), we can write a crude MPC-friendly approximation of the GeLU AF in a similar way by leveraging the same structure of the MPC approximation for sigmoid, as shown in Eq. 7. \[\widetilde{F}_{\mathrm{GeLU}}^{\mathrm{red}}(x)=x\cdot\max\left(0,\min(10x,0.5)\right) \tag{7}\] Since Mish cannot be expressed easily in terms of sigmoid, we take ReLU as its crude MPC-friendly approximation, as shown in Eq. 8. \[\widetilde{F}_{\mathrm{Mish}}^{\mathrm{red}}(x)=\max(0,x) \tag{8}\] #### 4.2.6. Performing interpolation We interpolate \(f(x)\) over the range \([\alpha,\beta]\) by a degree-k polynomial \(f\) (Eq. 4) using the InterPolate procedure. To find the best performing \(f(x)\), similar to NFGen, we adopt Chebyshev interpolation (Le and others, 2017) over other alternatives, such as cubic spline or uniform polynomial interpolation.
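As a rough illustration of SS 4.2.5 and SS 4.2.6 (not Compact's actual pipeline), the sketch below takes the crude SiLU baseline of Eq. 6 with the coefficients as printed above, forms the residual \(\text{SiLU}(x)-\widetilde{F}_{\mathrm{silu}}^{\mathrm{red}}(x)\), and fits a single degree-k Chebyshev piece to it with NumPy; the piece boundaries and degree are arbitrary examples, and the last two lines estimate the error in the spirit of Eq. 5.

```python
# Illustrative sketch of Secs. 4.2.5-4.2.6 (not Compact's pipeline): form the
# residual between SiLU and the crude baseline of Eq. 6 (coefficients copied as
# printed above) and fit a single degree-k Chebyshev piece to it with NumPy.
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def silu(x):
    return x / (1.0 + np.exp(-x))

def silu_crude(x):                                    # Eq. 6, as given in the text
    return x * np.clip(6 * x + 0.5, 0.0, 1.0)

def residual(x):
    return silu(x) - silu_crude(x)

alpha, beta, k = -1.0, 1.0, 5                         # one illustrative piece and degree
piece = Chebyshev.interpolate(residual, deg=k, domain=[alpha, beta])

xs = np.linspace(alpha, beta, 201)
approx = silu_crude(xs) + piece(xs)                   # crude baseline + fitted residual
print(float(np.max(np.abs(silu(xs) - approx))))       # max error on this piece
# Weighted error in the spirit of Eq. 5, with P(x) taken as the N(0, 1) density:
pdf = np.exp(-0.5 * xs**2) / np.sqrt(2.0 * np.pi)
print(float(np.mean(pdf * np.abs(silu(xs) - approx))))
```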
This is due to an established fact in the area of function approximation theory that Chebyshev polynomial interpolation generally has superior performance to cubic spline or uniform polynomials interpolation when \(f(x)\) is smooth and monotonic (c.f., (Peters and others, 2017) Table 5.1) as it is in our case for complex AF used in DNN models (Figure 1). **GenAccurateApprox procedure.** Now we can piece together the above-mentioned techniques and describe the procedure to approximate \(F_{\mathrm{act}}\) within region \([\alpha,\beta]\) using a number of piece-wise polynomials in detail (as shown in GenAccurateApprox Figure 4). First, we set a step size \(\Delta\), and at each step, we increase the pointer \(\beta\) by \(\Delta\). Before moving \(\beta\), we check if the adjusted approximation error \(\delta^{\prime}\) in the region \([\alpha,\beta]\) is more than the expected approximation error \(\delta/\mathrm{m}\). If this is the case, we approximate the region \([\alpha,\beta]\) using a polynomial determined using the Chebyshev interpolation algorithm, add that polynomial piece to \(\widetilde{F}_{\mathrm{act}}\), and update \(\alpha\) to \(\beta\). Next, we update \(\beta\) by \(\Delta\) and perform the above-mentioned check again until we have approximated the whole region \([s,e]\). ### Finding Computationally Efficient Approximation Now that we can generate MPC-friendly approximations \(\widetilde{F}_{\mathrm{act}}\) using GenAccurateApprox procedure that have negligible accuracy loss, we can search over all possible values of \(\langle\mathsf{m},\mathsf{k},\mathcal{R}\rangle\) and select the \(\widetilde{F}_{\mathrm{act}}\) that is computationally efficient. We abuse the notation slightly and use \(\theta\) to represent \(\langle\mathsf{m},\mathsf{k},\mathcal{R}\rangle\). Unfortunately, because of the systemic approach we take to find the appropriate \(\delta\), GenAccurateApprox becomes time consuming. This is because determining if the accuracy loss is negligible at each step of binary search with reasonable confidence requires performing inference over the large training dataset (as explained in SS 4.2.4), and it makes exhaustively iterating over all possible \(\theta\) infeasible. Instead, we devise an application-specific searching technique based on simulated annealing (SA) (Stein et al., 2017). One advantage of sketching SA-based searching for optimal \(\theta\) is that it is gradient-free -- suiting our needs, overcoming the difficulty to underpin an analytical formula of \(\nabla_{\theta=\langle\mathsf{m},\mathsf{k},\mathcal{R}\rangle}\)GenAccurateApprox(.). That being said, other gradient-free searching techniques may also work as well (Grover et al., 2019), and we detail additional discussions in Appendix E. One important characteristic of SA -- we need to model for this case -- is how to avoid being trapped in a local suboptimal solution. To this extent, we follow suggestions from prior work (Grover et al., 2019; Grover et al., 2019), and probabilistically move towards a new solution \(\theta_{i}\) even if is computationally less efficient approximation than the current best solution (\(\theta_{\text{cur}}\)). 
More precisely, if at the \(i\)-th iteration we denote the MPC-friendly approximation obtained from \(\theta_{i}\) as \(\widetilde{F}_{\text{act}}^{i}\), then we always update our current best solution \(\theta_{\text{cur}}\) to \(\theta_{i}\) if \(\widetilde{F}_{\text{act}}^{i}\) is computationally more efficient than \(\widetilde{F}_{\text{act}}^{\text{cur}}\) (i.e., Time(\(\widetilde{F}_{\text{act}}^{\text{cur}}\)) \(>\) Time(\(\widetilde{F}_{\text{act}}^{i}\))). Otherwise, we update \(\theta_{\text{cur}}\) to \(\theta_{i}\) with a certain acceptance probability. This probability depends on two factors. First, the temperature at the \(i\)-th iteration, \(T_{i}\), which is initially high, meaning we have a high tendency to accept a less computationally efficient solution; as the iterations proceed, \(T_{i}\) decreases, and so does our tendency to accept a computationally less efficient solution. Second, the amount by which \(\widetilde{F}_{\text{act}}^{i}\) is computationally less efficient than \(\widetilde{F}_{\text{act}}^{\text{cur}}\). In other words, we accept \(\theta_{i}\) using the following equation.

\[\theta_{\text{cur}}=\begin{cases}\theta_{i}&\text{if Time}(\widetilde{F}_{\text{act}}^{\text{cur}})>\text{Time}(\widetilde{F}_{\text{act}}^{i})\text{ or }\exp\left((\text{Time}(\widetilde{F}_{\text{act}}^{\text{cur}})-\text{Time}(\widetilde{F}_{\text{act}}^{i}))/T_{i}\right)>r,\quad r\sim U_{[0,1]}\\ \theta_{\text{cur}}&\text{otherwise}\end{cases}\]

Here Time(\(\cdot\)) is the procedure that returns the average time it takes to complete secure inference using the corresponding approximation, which can be logged simultaneously when we run the secure inference using \(\widetilde{F}_{\text{act}}\) inside GenAccurateApprox. We have to design two more parameters carefully. One is the neighborhood generation heuristic for \(\theta\), and the other is the cooling schedule for the temperature \(T_{i}\). Without careful handling of these two parameters, SA may lead to undesired approximations (Beng et al., 2017).

**Neighbour generation heuristic.** At iteration \(i\), we generate a new neighbor \(\theta_{i}=\langle\text{m}^{\prime},\text{k}^{\prime},\mathcal{R}^{\prime}\rangle\) from \(\theta=\langle\text{m},\text{k},\mathcal{R}\rangle\) in the following way: for \(\text{m}^{\prime},\text{k}^{\prime}\) we randomly sample two integers \(z_{1},z_{2}\in\mathbb{Z}\) such that \(P(X=z)=\frac{1}{3\cdot 2^{|z|}}\) and set \(\text{m}^{\prime}\leftarrow\text{m}+z_{1}\) and \(\text{k}^{\prime}\leftarrow\text{k}+z_{2}\). This means the chance of moving further away from the current values of m and k decreases exponentially. Handling \(\mathcal{R}_{\langle n,d\rangle}\) requires a bit more consideration. Note that to specify an \(\mathcal{R}\), we need two numbers: i) \(n\), the size of the ring used in the MPC library, and ii) \(d\), the number of low-order bits used to represent the fractional part. Typically, MPC libraries use \(\mathcal{R}\) sizes of \(\{128,84,64,32\}\). We sample a ring size \(n\) i.i.d. from this set, and we set \(d\leftarrow\lfloor n/\gamma_{2}\rfloor\), where \(\gamma_{2}\) is sampled i.i.d. from \(\{3/2,2,5/2,3,7/2,4\}\).

**Cooling schedule.** As for the cooling schedule, we use the classical logarithmic series \(T_{i}\leftarrow\chi_{0}/\text{log}(i+1)\) at iteration \(i\) from Hajek et al. (Hajek et al., 2019).
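The following is a minimal sketch (ours, not the paper's code) of this SA search loop. `gen_approx` and `time_fn` stand in for the GenAccurateApprox and Time(.) procedures and must be supplied by the caller; the step-sampling helper only approximates the exponentially decaying distribution described above.

```python
import math
import random

RING_SIZES = [128, 84, 64, 32]          # ring sizes mentioned above
GAMMA2 = [3/2, 2, 5/2, 3, 7/2, 4]

def sample_step():
    """Signed integer step whose magnitude follows a geometric law,
    so larger jumps are exponentially less likely (close to P(z) ~ 2^{-|z|})."""
    z, sign = 0, random.choice([-1, 1])
    while random.random() < 0.5:
        z += 1
    return sign * z

def neighbour(theta):
    m, k, (n, d) = theta
    n2 = random.choice(RING_SIZES)
    d2 = int(n2 // random.choice(GAMMA2))
    return (max(1, m + sample_step()), max(1, k + sample_step()), (n2, d2))

def find_best_piece_poly(theta0, gen_approx, time_fn, iters=10, chi0=0.2):
    """gen_approx(theta) -> MPC-friendly approximation; time_fn(approx) -> avg secure-inference time."""
    cur = theta0
    t_cur = time_fn(gen_approx(cur))
    for i in range(1, iters + 1):
        cand = neighbour(cur)
        t_cand = time_fn(gen_approx(cand))
        temp = chi0 / math.log10(i + 1)   # base-10 log matches the T_1 ~ 0.67 reported below
        # Always accept a faster candidate; accept a slower one with Metropolis probability.
        if t_cur > t_cand or math.exp((t_cur - t_cand) / temp) > random.random():
            cur, t_cur = cand, t_cand
    return cur
```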
This choice ensures that \(T_{i}\) is initially high, thereby increasing the chances of accepting a computationally less efficient approximation during the early iterations. As the number of iterations increases, \(T_{i}\) progressively decreases, lowering this chance. We simply set \(\chi_{0}=0.2\) for all of our experiments, yielding \(T_{1}\approx 0.67\) and \(T_{10}\approx 0.2\). We show the pseudocode for finding a computationally efficient approximation, FindBestPiecePoly, and the procedure for generating neighbors at each iteration, GenerateNeighbour, in Figure 4.

## 5. Experimental Evaluation

We conduct experiments to address the following questions:

1. _Model Accuracy_ (§ 5.3): What is the impact on model inference accuracy of using MPC-friendly activation functions \(\widetilde{F}_{\text{act}}(x)\) generated using our scheme Compact and other existing approaches (Hajek et al., 2019; Duchi et al., 2019; Duchi et al., 2020; Duchi et al., 2020)?
2. _Inference Time_ (§ 5.4): What is the inference time overhead of Compact compared to NFGen (Hajek et al., 2019) as the number of hidden layers increases, without incurring any significant loss in inference accuracy?

### Implementation Details

**Our scheme.** We implement our scheme using Python 3.8 in \(\approx 1,200\) LoC. We approximate the region \(x\in[-10,10]\) for all activation functions (AFs), as beyond that region they can be easily approximated using polynomials. We also use the SymPy (Duchi et al., 2019) library for the majority of mathematical operations, including calculating the approximation error of a polynomial over a given region using Eq. 5 and performing Chebyshev interpolation as mentioned in § 4.2.6. Our scheme requires testing whether the generated approximation has negligible accuracy loss by checking \((\eta_{1}-\eta_{2})/\eta_{1}\leq v\) (as described in § 4.2.4). We also configure FindBestPiecePoly with ten iterations (\(l_{max}=10\)) to find a computationally efficient approximation and set \(\chi_{0}=0.2\). For the initial solution \(\theta_{0}=\langle\text{m}_{0},\text{k}_{0},\mathcal{R}_{0}\rangle\), we set \(\text{m}_{0}=10^{4}\) and \(\text{k}_{0}=10\), the default parameters taken from NFGen (Hajek et al., 2019). For \(\mathcal{R}_{\langle n_{0},d_{0}\rangle}\), we used \(\langle n_{0},d_{0}\rangle=\langle 128,64\rangle\) -- a popular choice of ring size for many MPC libraries. With this configuration, FindBestPiecePoly took less than 25 minutes on commodity hardware to finish the four tasks and three complex AFs we detail in § 5.2. Table 6 shows the appropriate \(\text{m},\text{k},\mathcal{R}\) we find via FindBestPiecePoly for all tasks and complex AFs. We note that Compact can also be implemented by adding our DNN-specific optimizations (as discussed in § 4) on top of the NFGen code base proposed by Fan et al. (Hajek et al., 2019), and we would expect similar efficacy over NFGen in that case. However, when NFGen was publicly released4, we were already towards the end of the development cycle of our code base, and hence we did not reuse their code base for implementing the techniques employed by Compact. Footnote 4: [https://github.com/Fannxry/NFGen](https://github.com/Fannxry/NFGen)

**Other approaches.** We consider four state-of-the-art approaches for comparison: NFGen (Hajek et al., 2019), MiniONN (Duchi et al., 2020), MPCFormer (Duchi et al., 2020) and SIRNN (Duchi et al., 2020).
Additionally, we consider a rudimentary baseline approach: replacing the complex AF with the popular MPC-friendly AF ReLU. We include this baseline because ReLU is relatively MPC-friendly: it can be computed using only two polynomial pieces. For NFGen, we add a wrapper class to the authors' open-source implementation to measure the inference accuracy and computational overhead for the four tasks. Besides that, we keep their implementation unchanged -- using \(\mathcal{E}_{\text{Max}}\) (§ 4.2.2) to measure the approximation error, and setting \(\delta=10^{-3}\), k = 10, and m = \(10^{4}\). In (Duchi et al., 2020), Liu et al. describe an approach called MiniONN for generating MPC-friendly approximations of the sigmoid AF. Since there is no publicly available implementation of MiniONN, we implement it ourselves to the best of our ability and extend the approach to generate MPC-friendly versions of the complex AFs \(F_{\text{act}}\in\) {SiLU, Mish, GeLU} (see Appendix B for further details of their approach). MPCFormer (Marcus et al., 2017) approximates GeLU using the polynomial \(\text{GeLU}(x)\approx 0.125x^{2}+0.25x+0.5\). This approximation was motivated by the need to perform secure inference for transformer-based DNN models, where the GeLU activation is used extensively. Since Li et al. (Li et al., 2018) did not provide any recipe that generalizes directly to other AFs, we only compare the accuracy and computational overhead for GeLU. Lastly, Rathee et al. present a library called SIRNN (Rathee et al., 2017) that computes complex mathematical operations (e.g., \(e^{x}\), \(\ln(x)\), \(\frac{1}{x}\)) securely using a combination of lookup tables and numerical methods (e.g., Goldschmidt's iterations). Thus, complex AFs can be computed sequentially by performing the aforementioned operations and combining the intermediate results using ADD, MUL, COMP operators to evaluate \(F_{\text{act}}\). Recently, Hao et al. (Hao et al., 2019) extended their approach to compute the GeLU activation function more efficiently by removing one additional network call. Nevertheless, this work uses the open-source C++ implementation of SIRNN (Rathee et al., 2019).

### Experimental Setup

**Task details.** To demonstrate that the inference accuracy loss and performance overhead are negligible for secure inference using our scheme, we consider four state-of-the-art image classification tasks, as shown in Table 3, and three complex activation functions (AFs) \(F_{\text{act}}\in\{\text{SiLU},\text{GeLU},\text{Mish}\}\). We train the four task models for each of the three complex AFs. While training these models, we preserve the widely used parameters proposed in the literature for all models (e.g., the overall architecture of the model, # of epochs, learning rate, optimizer, etc.) -- including a batch normalization layer before inputs are fed to the complex AFs of each hidden layer, as illustrated in Figure 2. Below, we provide brief details about these four classification tasks; further details are in Appendix A.

**Four classification tasks.** For the first task, we consider a simple classification task on the MNIST dataset (Krizhevsky et al., 2015) using a three-layer deep fully connected network (FCN) with one input, one output, and one hidden layer. The MNIST dataset contains 70 K 28x28 greyscale images of handwritten digits, and the three-layer deep FCN achieves close to 0.99 training accuracy for the three complex AFs. We refer to this task as DigitRecognition in the paper.
Next, we move to a more complex classification task on the CIFAR-10 dataset (Zheng et al., 2017) -- which we refer to as CIFAR10Classification. CIFAR-10 consists of 60 K 32x32 color images with 6 K images per class across 10 classes. For this dataset, we use a convolutional neural network (ConvNet) (Zheng et al., 2017) with five hidden layers and train it over the 50 K training images of CIFAR-10 using the three different complex AFs. For the third task, we consider classification on the ImageNet-1K dataset, which has been one of the most challenging benchmark datasets in image classification (Krizhevsky et al., 2015). The dataset contains around 1 million annotated images, with 50 K validation images and 100 K test images. We train a deep residual neural network (ResNet9) (Rathee et al., 2017) model having eight hidden layers over the training images for 50 epochs for the three complex AFs and achieve a validation accuracy of around 0.74. We refer to this task as ImageNet1KClassification in this paper. Lastly, we perform experiments to detect spoofed images in the CelebA-Spoof (Zheng et al., 2017) dataset. We refer to this task as SpoofFaceDetection. This is a large-scale face anti-spoofing dataset used to train anti-spoofing DNN models. CelebA-Spoof contains 625 K facial images from \(>\)10 K subjects, with each image having 43 attributes; 40 of them indicate facial components of real images and three of them correspond to attributes of spoofed facial images. For training, we perform an 80-20 split of the CelebA-Spoof dataset and adopt the EfficientNetB0 (Zheng et al., 2017) model, which is the state-of-the-art top-performing anti-spoofing detection model and the winner of the CVPR challenge on detecting spoofed face images (Zheng et al., 2017). The EfficientNetB0 model consists of 17 hidden layers, and after training for 25 epochs, it achieves a training accuracy of 0.98.

**Machine specification.** We train the models on a Linux machine with an Intel Core i9 processor, 128 GB RAM, and an Nvidia GTX 1080 GPU. We use the training split of each dataset for training the models, and after training is completed, we save these models. We assume \(\mathcal{S}_{\text{owner}}\) holds these saved models and does not want to reveal them to \(\mathbf{C}\) while performing secure inference. We simulate \(\mathbf{C}\)'s input \(\mathbf{X}\) using the testing split of the corresponding dataset for each task.

### Model Accuracy

We first measure the inference accuracy of the trained models over the testing split of the dataset by using the (non-MPC-friendly) complex AF as is, and refer to it as _plaintext accuracy_ (\(\eta_{1}\)). Then, we replace the complex AF with its MPC-friendly approximation generated by the different approaches and measure the resulting inference accuracy (\(\eta_{2}\)). Thus, \((\eta_{1}-\eta_{2})/\eta_{1}\) gives the inference accuracy loss introduced by MPC-friendly approximations. Table 3 shows the inference accuracy loss (in percentage) for the MPC-friendly approximations generated by Compact and by other state-of-the-art approaches (Krizhevsky et al., 2015; Rathee et al., 2017; Rathee et al., 2017) -- for each task across the three complex AFs SiLU, GeLU, and Mish. Now we discuss the inference accuracy loss for the different approaches; throughout the discussion, we conservatively consider an accuracy loss negligible if \((\eta_{1}-\eta_{2})/\eta_{1}<10^{-2}\).
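As a small worked illustration of this criterion (ours; the numbers are chosen only to roughly resemble the CIFAR10Classification/SiLU row of Table 3):

```python
def accuracy_loss(eta1, eta2):
    """Relative accuracy loss (eta1 - eta2) / eta1, reported as a percentage in Table 3."""
    return (eta1 - eta2) / eta1

loss = accuracy_loss(86.53, 86.10)      # hypothetical plaintext vs. MPC-friendly accuracy
negligible = loss < 1e-2                # the 10^-2 threshold used in the discussion
print(f"{100 * loss:.2f}% loss, negligible: {negligible}")   # ~0.50% loss, negligible: True
```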
**ReLU-based rudimentary approach.** We observe that although for the first task, DigitRecognition, the inference accuracy loss is within 1.54%-2.68%, for the last three tasks the accuracy loss is much higher -- at least 45.66% -- making this approach unsatisfactory.

**SIRNN (Rathee et al., 2017).** For SIRNN, we observe a less significant accuracy loss (0.95%-2.37%) for the DigitRecognition task. Furthermore, for SpoofFaceDetection the accuracy does not degrade too much -- by 0.48%-1.78%. However, for the CIFAR10Classification and ImageNet1KClassification tasks the accuracy degradation is evidently not negligible -- suffering from an accuracy loss of 2.58%-16.31%. We hypothesize such accuracy degradation is primarily due to two reasons: 1) intermediate steps overflow in the fixed-point representation, and 2) the error introduced while computing one complex math operation propagates and accumulates when that result is used as input for another complex math operation. This further motivates the need for a piece-wise polynomial approximation-based approach for designing MPC-friendly approximations of complex AFs when state-of-the-art DNN models are used, confirming findings from prior work (Kumar et al., 2018).

**MPCFormer (Kumar et al., 2018).** For MPCFormer, we observe a negligible accuracy loss for the DigitRecognition and SpoofFaceDetection tasks of 0.18% and 0.09%, respectively. However, similar to SIRNN, it exhibits a non-negligible accuracy loss of 7.07% and 9.43% for the CIFAR10Classification and ImageNet1KClassification tasks, respectively. We suspect this is because the GeLU activation approximation by MPCFormer relies on _knowledge distillation_ (KD) (Kumar et al., 2018) -- which is essentially fine-tuning the sequence-to-sequence-based pre-trained model for efficiency. In the absence of KD, a simple plug-and-play replacement with the polynomial approximation of the GeLU activation proposed by MPCFormer does not work well.

**MiniONN (Kumar et al., 2018).** For MiniONN, we observe that the inference accuracy loss becomes significant when we use their recipe to generate MPC-friendly approximations of the complex AFs SiLU, GeLU, and Mish. The accuracy loss becomes catastrophically high especially for ImageNet1KClassification (27.12%-39.89%). This shows that although the recipe proposed by MiniONN does not show accuracy degradation for the sigmoid AF on simplistic logistic regression models, there is a generalization gap when such recipes are used for DNN models trained on diverse datasets involving complex AFs.

**Compact and NFGen (Kumar et al., 2018).** We observe that for all tasks, in general, the MPC-friendly approximations generated by Compact and NFGen have a negligible accuracy loss of \(<1\%\). For one instance, though -- the ImageNet1KClassification task involving the SiLU AF -- NFGen has an accuracy loss of 1.36%, marginally higher than the aforementioned threshold. When comparing the two approaches, the Compact-generated approximation generally has lower accuracy loss, except for two instances showing a slight deviation: the DigitRecognition task involving GeLU (0.37% vs 0.23%) and the SpoofFaceDetection task involving the Mish AF (0.66% vs 0.53%).

**Results summary.** We conclude from these experiments that NFGen and Compact are resistant to significant accuracy loss -- when we use their generated MPC-friendly approximations instead of the original complex AFs -- compared to the other approaches we consider.
Keeping that in mind, we can now investigate the next important aspect of secure inference: measuring performance overhead. We narrow down our experiments to NFGen and Compact, excluding the other approaches, as their accuracy loss is significantly higher.

### Inference Time

We benchmark the inference time of NFGen and Compact to measure the performance overhead. While benchmarking, we instantiate each party in the protocol on machines running commodity-type hardware -- an Intel Core i7 processor with 64 GB RAM -- connected over a 252 Mbits/sec network link. We report the average inference time for a single image calculated over the testing split of the datasets and include both computational and communication costs in the results. We consider two state-of-the-art MPC libraries (Shen et al., 2017; Wang et al., 2018) designed for secure inference -- one for a 2PC scenario and the other for a 3PC scenario (both scenarios are described earlier in § 3.1).

**3PC results.** First, for the 3PC scenario, we consider ABY\({}^{3}\) (Shen et al., 2017), which uses a replicated secret sharing (SS) based secure inference protocol. Table 4 compares the performance overhead of Compact and NFGen for the 3PC setting using the ABY\({}^{3}\) library. We observe that Compact outperforms NFGen by 2\(\times\)-5\(\times\) for the last three classification tasks, which involve a high number of layers. However, Compact's performance for the first task, DigitRecognition, is comparable to NFGen -- exhibiting similar inference time.

\begin{table} \begin{tabular}{l c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Task Name} & \multirow{2}{*}{Model \({}^{\dagger}\)} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{\(F_{\text{act}}\)} & \multirow{2}{*}{Plaintext accuracy} & \multicolumn{6}{c}{Accuracy loss \(\%^{3}\)} \\ & & & & & ReLU & NFGen & MiniONN & MPCFormer & SIRNN & Compact (Ours) \\ \hline \multirow{3}{*}{DigitRecognition} & \multirow{3}{*}{FCN} & \multirow{3}{*}{MNIST} & SiLU & 98.73 & 2.31 & 0.43 & 20.88 & ✗ & 2.37 & 0.17 \\ & & & GeLU & 98.45 & 1.54 & 0.23 & 42.31 & 0.18 & 1.32 & 0.97 \\ & & & Mish & 99.07 & 2.68 & 0.19 & 30.41 & ✗ & 0.95 & 0.06 \\ \hline \multirow{3}{*}{CIFAR10Classification} & \multirow{3}{*}{ConvNet} & \multirow{3}{*}{CIFAR-10} & SiLU & 86.53 & 49.80 & 0.51 & 18.50 & ✗ & 2.58 & 0.49 \\ & & & GeLU & 87.11 & 45.66 & 0.64 & 30.04 & 7.07 & 4.01 & 0.25 \\ & & & Mish & 89.30 & 57.07 & 0.27 & 57.07 & ✗ & 13.64 & 0.11 \\ \hline \multirow{3}{*}{ImageNet1KClassification} & \multirow{3}{*}{ResNet9} & \multirow{3}{*}{ImageNet-1K} & SiLU & 72.89 & 98.39 & 1.36 & 27.12 & ✗ & 10.59 & 0.91 \\ & & & GeLU & 75.43 & 77.66 & 0.05 & 36.21 & 9.43 & 6.68 & 0.03 \\ & & & Mish & 75.78 & 98.97 & 0.61 & 39.89 & ✗ & 16.31 & 0.55 \\ \hline \multirow{3}{*}{SpoofFaceDetection} & \multirow{3}{*}{EfficientNetB0} & \multirow{3}{*}{CelebA-Spoof} & SiLU & 90.87 & 71.72 & 0.14 & 4.27 & ✗ & 1.75 & 0.08 \\ & & & GeLU & 92.19 & 75.94 & 0.20 & 9.75 & 0.09 & 0.48 & 0.77 \\ \cline{1-1} & & & Mish & 92.23 & 77.71 & 0.53 & 1.32 & ✗ & 1.78 & 0.66 \\ \hline \hline \end{tabular} ✗ Denotes that the corresponding approach does not propose an MPC-friendly version of \(F_{\text{act}}(x)\). \({}^{3}\) Accuracy loss is reported by comparing the inference accuracies \(\eta_{1}\) and \(\eta_{2}\) obtained using AF \(F_{\text{act}}\) and \(\widetilde{F}_{\text{act}}\), respectively.
Accuracy loss = \((\eta_{1}-\eta_{2})/\eta_{1}\), reported in percentage (%). Accuracy losses of \(<10^{-2}\) or \(<1\%\) are highlighted in gray. \({}^{\dagger}\) For all models, batch normalization is used before each activation layer. \end{table} Table 3. **Inference accuracy of MPC-friendly approximations of three complex activation functions (AFs) for four different tasks using state-of-the-art approaches. Except for NFGen and our approach, other DNN-specific approaches show a significant drop in inference accuracy if we use their generated MPC-friendly version of the complex AF. We compare the performance overhead of our approach with NFGen in § 5.4 and show results in Table 4.**

We hypothesize this is because the number of hidden layers is only one for the DNN model used in this first task. In contrast, the number of hidden layers for the other classification tasks is 5, 8, and 17, respectively. Because of this, there is a higher chance of the approximation errors introduced in one hidden layer propagating to the next hidden layers. Our DNN-specific techniques discussed in § 4 can effectively keep this approximation error from propagating to the next hidden layers without incurring much performance overhead compared to NFGen. Thus, Compact's superior performance becomes more pronounced as the number of hidden layers grows.

**2PC results.** For the 2PC scenario, we consider CryptFlow2 (Wang et al., 2019) -- another state-of-the-art library for secure inference, based on a novel protocol for the _millionaires' problem_ (Zhu et al., 2019) and division over fixed-point arithmetic. We experiment with the oblivious transfer (OT) based construction of CryptFlow2 but believe the performance results would be similar for the homomorphic encryption (HE) based construction. We observe a performance efficiency trend of Compact relative to NFGen similar to the 3PC scenario. We present detailed results in Appendix C.

## 6. Discussion and Future Work

Compact enables fast secure inference for DNN models that use complex activation functions (AFs) while protecting the secrecy of the client's inputs as well as the proprietary DNN model. Deploying Compact is straightforward as it is compatible with standard MPC libraries. Here we discuss other deployment considerations for practitioners using Compact.

**Accelerating secure inference time using GPUs.** In the plaintext setting, impressive ML inference times have been achieved by harnessing GPUs, which support highly parallelizable workloads. This boost in inference speed also extends to secure scenarios. Indeed, recent works have shown how to run MPC operations inside GPUs and gain a significant speedup in machine learning training and inference (Wang et al., 2019; Wang et al., 2019). In our experiments, we did not use GPUs for inference, and thus the inference times reported in Table 4 can be further improved by porting these techniques to ABY\({}^{3}\) and CryptFlow2. We note that Compact is compatible with any MPC protocol that supports the three basic operations, and both ABY\({}^{3}\) and CryptFlow2 support them. We acknowledge that this endeavor of making ABY\({}^{3}\) and CryptFlow2 GPU-friendly requires considerable development time, and thus we leave it for future exploration. Nevertheless, we believe the relative performance gain will remain the same, as the operations (ADD, MUL, COMP) used in our technique are very similar to those used in prior work.
**Easy adoption of Compact in practice.** Our approach focuses on complex AFs, and we believe complex AFs take precedence over ReLU for two key reasons. Firstly, complex AFs in many cases result in more robust, noise-resistant, better-performing DNN models (e.g., Mish achieves higher accuracy in computer vision and object detection tasks than ReLU (Wang et al., 2019), DNN models trained with SiLU are more noise-resistant than those trained with ReLU (Wang et al., 2019), etc.). Secondly, in many cases the DNN models have already been trained with a complex AF. Retraining/fine-tuning the DNN model with the ReLU AF, to make it compatible with a ReLU-specific secure inference protocol for enabling privacy-preserving inference, is difficult for practitioners (e.g., the state-of-the-art face anti-spoofing detection model EfficientNetB0 is by default pre-trained with the SiLU AF (Wang et al., 2019)).

Compact is more useful than NFGen, the state-of-the-art generic MPC-friendly approximation approach for non-linear functions, when the number of hidden layers is more than one. This phenomenon is illustrated by Compact's similar performance to NFGen for DigitRecognition, where the number of hidden layers is one. However, as the number of hidden layers increases for the other three tasks, Compact outperforms NFGen by \(2\times\)-\(5\times\). It is also worth noting that our work is experimentally evaluated on the three most popular complex AFs used in the ML community, and one can easily use our approach to approximate other, less widely used complex AFs (e.g., tanh, Smish, etc.).

**Dependence on batch normalization.** Our piece-wise approximation depends on the presence of batch normalization (BN). Generally, BN is employed before AFs by the majority of state-of-the-art DNN models in computer vision. Compact leverages the phenomenon that BN shifts the input distribution to have zero mean and unit variance (similar to the standard normal distribution \(\mathcal{N}(0,1)\)). We believe another type of normalization typically used in natural language processing, called layer normalization, can also be leveraged to design an approach similar to ours in the future. For now, it is clear that if one has a DNN model with a large number of hidden layers that is already trained using a complex AF and batch normalization, Compact would be useful for doing secure inference on that model without sacrificing accuracy or requiring any retraining. This significantly simplifies deployability. Moreover, compatibility with existing MPC libraries makes it easy for practitioners to switch to Compact if they are already using secure inference.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Task Name & \# HLs & \(F_{\text{act}}\) & NFGen & Ours & **Speedup** \\ \hline \multirow{3}{*}{DigitRecognition} & \multirow{3}{*}{1} & SiLU & 40 & 43 & 0.93\(\times\) \\ & & GeLU & 35 & 32 & 1.09\(\times\) \\ & & Mish & 52 & 49 & 1.06\(\times\) \\ \hline \multirow{3}{*}{CIFAR10Classification} & \multirow{3}{*}{5} & SiLU & 114 & 58 & 1.96\(\times\) \\ & & GeLU & 194 & 94 & 2.05\(\times\) \\ & & Mish & 117 & 62 & 1.89\(\times\) \\ \hline \multirow{3}{*}{ImageNet1KClassification} & \multirow{3}{*}{8} & SiLU & 359 & 102 & 3.52\(\times\) \\ & & GeLU & 446 & 106 & 4.17\(\times\) \\ & & Mish & 473 & 104 & 4.52\(\times\) \\ \hline \multirow{3}{*}{SpoofFaceDetection} & \multirow{3}{*}{17} & SiLU & 204 & 47 & 4.34\(\times\) \\ & & GeLU & 221 & 45 & 4.91\(\times\) \\ \cline{1-1} & & Mish & 195 & 41 & 4.75\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of inference time (ms) of three activation functions (\(F_{\text{act}}\)) over four different classification tasks for \(N=3\) servers using the ABY\({}^{3}\) MPC library. Since the DNN model used in the DigitRecognition task has only one hidden layer (# HLs = 1), the performance of NFGen is similar to Compact. However, for complex DNN models with a large number of hidden layers used in the three other tasks, Compact outperforms NFGen -- exhibiting a \(2\times\)-\(5\times\) speedup compared to NFGen.

## 7. Conclusion

To protect the privacy of clients' private inputs sent to a proprietary DNN model hosted on a cloud-based inference service, we present a scheme to generate MPC-friendly piece-wise polynomial approximations of complex activation functions (AFs). The generated MPC-friendly approximations can directly replace existing complex AFs used in DNNs in a plug-and-play manner and obviate the need to retrain DNN models. Our extensive experiments show that the proposed approach achieves lower performance overhead for computer vision models trained on challenging datasets while maintaining similar model accuracy.
2309.13318
Spanish Resource Grammar version 2023
We present the latest version of the Spanish Resource Grammar (SRG), a grammar of Spanish implemented in the HPSG formalism. Such grammars encode a complex set of hypotheses about syntax making them a resource for empirical testing of linguistic theory. They also encode a strict notion of grammaticality which makes them a resource for natural language processing applications in computer-assisted language learning. This version of the SRG uses the recent version of the Freeling morphological analyzer and is released along with an automatically created, manually verified treebank of 2,291 sentences. We explain the treebanking process, emphasizing how it is different from treebanking with manual annotation and how it contributes to empirically-driven development of syntactic theory. The treebanks' high level of consistency and detail makes them a resource for training high-quality semantic parsers and generally systems that benefit from precise and detailed semantics. Finally, we present the grammar's coverage and overgeneration on 100 sentences from a learner corpus, a new research line related to developing methodologies for robust empirical evaluation of hypotheses in second language acquisition.
Olga Zamaraeva, Lorena S. Allegue, Carlos Gómez-Rodríguez
2023-09-23T09:24:05Z
http://arxiv.org/abs/2309.13318v2
# Spanish Resource Grammar version 2023 ###### Abstract We present the latest version of the Spanish Resource Grammar (SRG). The new SRG uses the recent version of Freeling morphological analyzer and tagger and is accompanied by a manually verified treebank and a list of documented issues. We also present the grammar's coverage and overgeneration on a small portion of a learner corpus, an entirely new research line with respect to the SRG. The grammar can be used for linguistic research, such as for empirically driven development of syntactic theory, and in natural language processing applications such as computer-assisted language learning. Finally, as the treebanks grow, they can be used for training high-quality semantic parsers and other systems which may benefit from precise and detailed semantics. ## 1 Introduction Among the various approaches to computational linguistics, formal grammars are a link between linguistic theory and natural language processing (NLP). By formal grammars we mean fully explicit linguistic formalisms developed by linguistic theorists independently of specific NLP needs or tasks. In other words, we put Minimalism, Lexical Functional Grammar, Head-driven Phrase Structure Grammar, and similar into this category while we would put neither the Penn-Treebank convention nor Universal Dependencies there. Grammars take a long time to develop and the structures produced by them are harder to use than annotation schemes developed specifically for NLP --for example, parsing can be much slower and the software stack generally needs to be more complicated --but they remain one of the few clear and long-term links between linguistics and NLP. In recent practice, grammars of the kind we are talking about here have been used for computer assisted language learning (CALL) applications (Flickinger and Yu, 2013; da Costa et al., 2016; Morgado da Costa et al., 2020) as well as to create high quality collections of semantic structures to train semantic parsers (Buys and Blunsom, 2017; Chen et al., 2018; Lin et al., 2022) and perform tasks like text generation (Hajdik et al., 2019). Lin et al. (2022) in particular report a 35% error reduction and 14% absolute accuracy gain compared to purely neural models, due to their use of the grammar-generated precise semantic representations. In this paper, we present the latest version of a grammar which can be used to create such high quality training data for Spanish. The Spanish Resource Grammar (Marimon, 2010a; Marimon et al., 2014) is the second biggest grammar of its type (see SS2.1). The latest version that we present here does not depend on non-fully open-source parsers (unlike the previous version) and uses a newer version of morphophonological analyzer. In addition, here we report the SRG's accuracy on a portion of the TIBIDABO corpus (Marimon, 2010b) for the first time. In this paper, we present work that is unusual in the sense that we are breathing new life into a valuable resource which remained dormant for at least 10 years. Unlike other software, grammars do not become obsolete inasmuch as they encode robust linguistic theories. For that reason, we are convinced that the SRG should be reintegrated into the computational linguistics landscape, providing the community with a resource similar to the English grammar. Like other software, grammars become obsolete inasmuch as they depend on tools which may become outdated, and fixing such dependencies can be expensive. 
We present here a year of work that went into enabling the SRG to work with a better parser and into establishing its accuracy on 2K sentences -- a time consuming process which has to be done once before automatic tools can be leveraged to quickly compare new iterations. Building upon this foundation, the grammar itself can be expanded such that its coverage and accuracy improve. The paper is organized as follows. In SS2, we explain briefly which formalism is behind the grammar implementation and what treebanking means in the context of grammar engineering. We also dedicate a section to the previous version of the grammar. Section 3 describes what we did to bring the SRG up to date with the SOTA grammar engineering tools and gives an overview of a set of phenomena that are covered, as revealed by a specially constructed test suite which is used to check the updated grammar in Section 4. Section 5 presents the results of parsing with the grammar 2000 sentences from a Spanish news corpus and 100 sentences from a Spanish learner corpus. The latter experiment is a pointer in the direction of using the grammar in CALL applications, while the former is a stepping stone towards creating high quality training data for a wide range of applications. ## 2 Background ### Grammar engineering and DELPH-IN consortium Grammar engineering is a discipline and a methodology of implementing syntactic theory on the computer. Parsers and generators then can take such grammar implementations as input. As already mentioned in the introduction, the theories underlying the formalisms used in grammar engineering are complex and are motivated by linguistic research rather than NLP tasks. This makes them a resource for bridging NLP with other fields. There are several grammar engineering initiatives, couched in various formalisms Collins and Stabler (2016); Butt and King (2002); Muller (2015). DELPH-IN (DEep Linguistic Processing with Hpsg INitiative)1 stands out as one with active international collaborations and emphasis on practical applications. The English Resource Grammar (ERG) Flickinger (2000, 2011) is the largest engineered grammar we are aware of (including outside of DELPH-IN), and it empowered the creation of a large high-quality treebank originally published as Oepen et al. (2004) with regular updates with each ERG release.2 Another unique initiative within DELPH-IN is the Grammar Matrix Bender et al. (2002, 2010); Zamaraeva et al. (2022), a system for automatic grammar creation based on typological description. The Grammar Matrix outputs grammar fragments which can then be developed for wider coverage. Other notable DELPH-IN projects include grammars of Japanese, Chinese, Singaporean English, Hausa, German, Indonesian, Norwegian, Portuguese, Bulgarian, and more.3 Footnote 1: [https://delph-in.github.io/docs/home/](https://delph-in.github.io/docs/home/) Footnote 2: [http://svn.delph-in.net/erg/tags/2023](http://svn.delph-in.net/erg/tags/2023) ### HPSG and MRS Our grammar engineering work is couched within the HPSG and the MRS formalisms.4 Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag (1994) is a constraint-based unification theory of syntax. The formalism is fully explicit and serves as the foundation for multiple grammar engineering initiatives. HPSG describes syntactic structures as a hierarchy of phrasal and lexical types which can be instantiated as graphs containing feature-value pairs. 
Values belong to the type hierarchy which determines which values are compatible (can unify) and which are not. During unification-based parsing, first lexical analysis is performed and then a parse chart is built bottom-up, attempting to account for the entire input string with one feature structure that is compatible with the _root conditions_ (a set of constraints defining a full sentence). If something in a candidate feature structure cannot unify, such structure is discarded. Two sample HPSG structures are presented in Figure 1. They come from a real example from a learner corpus (1). Footnote 4: For a more detailed overview of the relationship between the HPSG theory and computational linguistics, see Bender and Emerson (2021). 1. Mis abuelos my.3pl grandparent.masc.pl son personas be.3pl.pres.ind person.fem.3pl famosos. famous.masc.pl 2. Intended: 'My grandparents are famous people.' [spa; Yamada et al. (2020)] These structures (though greatly simplified for presentation) illustrate how two words from the input are parsed to have incompatible agreement values. The values such as _3pl_ and _fem_ come from the type hierarchy while the orthographies come from the lexicon, in this case paired with the morphological analyzer. While Figure 1 shows a very simple example of gender agreement, HPSG allows us to model syntactic complexity in full detail. This is particularly useful when semantic nuances accessible through the syntax-semantics interface matter. HPSG has been used in particular to solve issues related to negation scope [22, 2] and semantic compositionality generally [15]. Semantics in DELPH-IN is modeled via the Minimal Recursion Semantics formalism (MRS; Copestake et al., 2005). An MRS is a bag of predications which include information about various semantic properties of the structure, including quantifier scope, negation and modification scope, tense and aspect of events, person, number, and gender of entities, information structure, and so on. The MRS for the corrected sentence from example (1) (the word _famosos_ was corrected to _famosas_ to fix the agreement error) is given in Figure 2. Where the level of detail provided by MRS is not needed or desired, it can be automatically converted to a dependency MRS (Figure 3).5 Footnote 5: Both figures were generated by the DELPH-IN online demo: [http://delph-in.github.io/delphin-viz/demo](http://delph-in.github.io/delphin-viz/demo). ### The Spanish Resource Grammar, old version The Spanish Resource Grammar (SRG) [13, 14, 15] is the second biggest DELPH-IN grammar. It has 226 phrase structure types, 504 lexical rule types, and 543 lexical types, and a lexicon of 54,510 lemmas. The morphophonological analysis is done externally by Freeling [13, 14].6 An input sentence is first run through Freeling which outputs a sequence of tags (one or more possible tag for each word, with probabilities). Then the token lattice along with the tags is put in a special format which is then passed to the parser. The parser is a separate tool which takes any DELPH-IN grammar as input along with the input sentence. The parser should be able to map the provided tags to the corresponding lexical rules that are part of the grammar. In this case, the lexical rules are not concerned with the input token orthography (that is the job of the external analyzer) but they do make sure the terminal structure is unified with the appropriate feature values such as specific values for gender, number, etc. 
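To illustrate this division of labour, below is a small hypothetical sketch (ours, not the SRG's actual interface code) of mapping external morphological tags onto grammar-internal lexical rules. The tag strings loosely follow Freeling's EAGLES-style conventions and are abbreviated; the rule names are invented purely for illustration.

```python
# Hypothetical mapping from Freeling-style tags to grammar lexical rules.
# The real SRG interface covers the full tagset; these entries are illustrative only.
TAG_TO_LEXRULE = {
    "NCMP000": "n_common-masc-pl_lrule",    # common noun, masculine, plural
    "NCFP000": "n_common-fem-pl_lrule",     # common noun, feminine, plural
    "VSIP3P0": "v_ser-pres-ind-3pl_lrule",  # 'ser', present indicative, 3pl
    "AQ0MP00": "adj-masc-pl_lrule",         # qualifying adjective, masculine, plural
}

def lexical_rules(tagged_tokens):
    """tagged_tokens: list of (surface_form, tag) pairs from the external analyzer."""
    return [(form, TAG_TO_LEXRULE.get(tag, "unknown_lrule")) for form, tag in tagged_tokens]

# A few tokens of 'Mis abuelos son personas famosos' after morphological analysis.
print(lexical_rules([("abuelos", "NCMP000"), ("son", "VSIP3P0"), ("famosos", "AQ0MP00")]))
```

The parser then only has to unify the feature values contributed by these lexical rules with the rest of the grammar, exactly as described above.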
When the SRG was first developed, the parser used was the PET parser [12]. It has since stopped being supported, and using it today would not make sense because a much faster parser exists (see §3). Footnote 6: [https://nlp.lsi.upc.edu/freeling/](https://nlp.lsi.upc.edu/freeling/) In the initial stage of its development [13, 14, 15], its coverage and accuracy were never published (as far as we can tell). The true coverage and, more importantly, accuracy7 of the SRG remained unknown due to at least two issues: (1) parser limitations, and (2) cost of treebanking. This paper aims to improve the situation. We have run a state-of-the-art HPSG parser on the data and have manually verified the accuracy of the results on sentences up to and including length 10. Length 10 still corresponds to relatively short sentences in Spanish but, in any case, the accuracy figures are reported for the first time. Footnote 7: Coverage: how many grammatical sentences are assigned some (any) structure by the grammar; accuracy: how many grammatical sentences get assigned the desired semantic structure.

Figure 1: Two abbreviated feature structures produced by the SRG. Note the incompatible gender agreement values.

### DELPH-IN Treebanking

In the context of DELPH-IN grammars, treebanking is in a sense the opposite of treebanking in settings such as UD. Treebanks like PTB or UD are created manually with the goal to then train statistical tools on them. Conversely, DELPH-IN treebanks are created fully automatically by the manually built grammar. This entails that these treebanks have to be verified manually for accuracy. This process is quite time consuming but it should be faster than creating a treebank manually from scratch.8 Crucially, the entire treebank can be regenerated automatically at any time, if there is any (even small) change in the grammar. Then what needs to be done manually is akin to regression testing: the treebanking tool (such as FFTB; Packard, 2015) or [incr tsdb()] (Oepen, 1999) shows the differences between the previous and the current version of the treebank, and the differences can be assessed as regressions or improvements and addressed accordingly. Another difference between DELPH-IN treebanks and annotation schemes like UD is that the structures produced by DELPH-IN grammars are fully-fledged HPSG structures motivated by syntactic theory and not by NLP desiderata such as ease of use or simplicity. This means, on the one hand, that DELPH-IN treebanks are harder to work with and, on the other, that they are independently motivated and provide a link between NLP and linguistic theory. Footnote 8: Of course, the reverse is true about the grammar: it takes a lot of time to build, compared to a statistically trained resource. Manual treebanking in DELPH-IN is the necessary step to train parse-selection models, which is required in most realistic applications. Human language syntax is hugely ambiguous, and the desired semantic structure is often determined only by pragmatics. That is outside of the scope of a syntactic theory, meaning an HPSG grammar will dutifully produce all the structures that it considers _syntactically_ possible, and statistically trained tools are required to be able to choose the _pragmatically_ best one. At the moment, DELPH-IN grammars such as the ERG and the SRG use HMM-based parse selection models (Toutanova and Manning, 2002). In the future neural models should be trained, but for the SRG, this means bigger treebanks should be verified first.
## 3 SRG-2023

In this iteration of the SRG development, we had four main objectives: (1) have it work with the current best HPSG parser, ACE (Crysmann and Packard, 2012), which has been continuously improved;9 (2) have it use the most recent version of the Freeling morphophonological analyzer, with better named entity recognition; (3) establish the current coverage and accuracy of the grammar on (at least a portion of) the TIBIDABO corpus; (4) use the grammar on a learner corpus, as a step towards using it in CALL applications. In this section, we report on all four objectives. Note that actually adding analyses to the grammar in order for it to support more phenomena is future work; first we needed to establish where it is now.

Figure 2: MRS for the sentence _My grandparents are famous people_.

Figure 3: Dependency MRS for the sentence _My grandparents are famous people_.

The version of the SRG that we present here is a complete overhaul of the older version's interface with the external morphophonological analysis; the old version of the SRG relied on an inflexible setup which could only be used with older, slower parsers, one of which was not fully open source. We have (1) revised the portion of the grammar responsible for the inflectional lexical rules to match the latest version of the Freeling morphophonological analyzer; (2) implemented a new, Python interface between Freeling, the grammar, and the state-of-the-art ACE parser; (3) obtained the previously parsed data from the previous grammar developer, Montserrat Marimon; (4) re-parsed the data with the ACE parser, which only became possible with the newly written Freeling interface; (5) with the help of a student intern, manually verified the accuracy of the grammar on the sentences up to and including length 10; (6) explored the current grammar coverage and accuracy and documented them in the form of GitHub issues;10 (7) with the help of a student intern, prepared a new dataset based on an existing learner corpus, allowing us to explore the grammar's overgeneration at a new level. In summary, we present a version of the grammar which is ready to use in the same settings as the ERG (something that was not possible before) and regarding which it is much clearer now what it covers and what its limitations are. The new version of the SRG can be found on GitHub under Releases11. The latest release includes the manually verified treebanks. Footnote 10: [https://github.com/delph-in/srg/issues](https://github.com/delph-in/srg/issues) Footnote 11: [https://github.com/delph-in/srg/releases/tag/v0.3.3](https://github.com/delph-in/srg/releases/tag/v0.3.3) Footnote 12: [https://github.com/delph-in/docs/wiki/MatrixMrsTestSuite](https://github.com/delph-in/docs/wiki/MatrixMrsTestSuite)

## 4 Exploring the grammar with the MRS test suite

The MRS test suite is a collection of sentences illustrating semantic phenomena which are accessible through syntax.12 The name 'MRS' refers to minimal recursion semantics structures and implies that, across languages, the MRS structures for the listed sentences will be similar if the grammar adequately covers the phenomena in question. In other words, the MRS test suite is a quick way to assess a grammar's quality with respect to a wide range of linguistic phenomena.
It was first compiled for English in the context of the ERG development, and the English suite consists of 107 sentences.13 The phenomena include different kinds of dependencies (arguments of the verb), scope of negation, scope of adjective (modifiers), implicit arguments (ellipsis), interrogatives, imperatives, and so on. The expectation is that we can compile a similar test suite for any language, as we expect to find such phenomena in most languages of the world. We also expect some differences because languages vary in to what degree certain semantic phenomena are exposed through syntax. As a simple example, consider the fact that some languages (e.g. Spanish) have relatively free word order compared to e.g. English, which means the dependencies between the verb and its arguments will have to be illustrated by more sentences in Spanish where only one English sentence suffices. Footnote 13: [https://github.com/delph-in/docs/wiki/MatrixMrsTestSuiteEn](https://github.com/delph-in/docs/wiki/MatrixMrsTestSuiteEn) We had at our disposal the original MRS test suite for Spanish, compiled along with the original release. We have edited the test suite to better reflect the facts of the Spanish language such as flexible word order, focus constructions, and on the other hand, corrected some mistakes where a Spanish sentence was identified as an equivalent to an English one whereas in reality the sentence had different semantics. After adding some examples which seemed missing and removing some examples which seemed redundant, our updated test suite consists of 106 sentences.14 Footnote 14: [https://github.com/delph-in/src/wiki/MatrixMrsTestSuiteEn](https://github.com/delph-in/src/wiki/MatrixMrsTestSuiteEn) Running the grammar on the test suite revealed that the SRG currently has 81% accuracy over it. Examining the items for which the grammar did not yield a correct analysis has allowed us to document some issues which point to areas where the grammar should be improved. In particular, we have opened 11 new issues in the SRG GitHub repository including: Missing analysis of imperfective and perfective aspect distinction in some cases; missing possessive relations in some cases; missing interrogative semantics in many cases (underexplication between a question and a proposition, which is expected in Spanish yes-no questions but not in e.g. _wh_-questions); broken dependencies in some complex clauses including relative clauses and subordinate clauses, again, in some cases; insufficient implementation of the semantics associated with object clitics and the clitic _se_ (a structure similar to the correct structure is yielded by the grammar but the dependency between the subject and the clitic is broken). All of these issues are major but it is expected that the grammar does not yet handle all of them perfectly because it is still a relatively young grammar in terms of the time that went into its development so far. The point of the MRS test suite is to provide a good estimate of where the grammar is now and where to go next. ## 5 Treebanks ### Tibidabo The TIBIDABO treebank [14] is a subcorpus of the AnCora, the major Spanish news corpus [13]. For the purposes of exploring the SRG, it was sorted by sentence length, since the limitations of HPSG parsers were severe in terms of handling longer sentences. Some sentences may take several minutes to parse, while others will result in such big parse charts that the parser will give up or will run out of RAM. 
While a publication on the TIBIDABO treebank exists [14], it does not report the number of sentences in the treebank; we assume that the intention was that TIBIDABO contains the same sentences as AnCora (which has 17K sentences) but parsed with HPSG. Marimon (2010b) reports coverage (but not accuracy) on sentences up to length 40, of which only some were processed (meaning most of the longer sentences were not parsed, either due to the grammar or the parser limitations). We were able to recover 5894 sentences representing sentence lengths 1-19, which is 33% of the AnCora corpus. The more 'normal' sentences, showing typical sentence structure, start at about length 7, the shorter ones mainly being titles, short dialog replies, greetings, etc. The rest of the TIBIDABO treebank appears to have been lost. We intend to rebuild it in time, with better tools now available. For the 5894 sentences we have recovered, we had parse forests which were partially verified. But since Freeling 4.0 updates resulted in some incompatibility with the previous version, we had to look at each and every tree again, even though the process was facilitated by FFTB to a degree.15 For the latest release, we have managed to examine the parse forests and verify the presence of the correct tree for sentences up to and including length 10, 2,291 sentences in total. This process was time consuming and, together with the morphological analyzer interface overhaul, constitutes the main contribution of this paper (with the changes to the SRG syntactic analyses forthcoming).16 Footnote 15: This does raise questions about the long-term desirability of the Freeling dependency; it may be possible to instead model the morphology directly in the grammar. Footnote 16: We thank Lorena Suarez Allegue for undertaking some of the manual treebanking as part of her internship. Table 1 shows the results we have so far on the TIBIDABO corpus. The coverage seems stable at around 92%, which is a good result for an engineered grammar that has not been in development for as long as the ERG (though we expect the coverage figures to suffer on longer sentences until the parser performance issues are fixed, as there will be a much bigger percentage of RAM limit hits). The accuracy, which is the percentage of semantically desired analyses, goes down as length goes up, which is also to be expected, as it is harder to analyze a more complex sentence correctly. We have documented many of the issues which we have seen while assessing the accuracy. Apart from the ones already documented in relation to the MRS test suite, they include: (i) some remaining issues with the Freeling interface; (ii) agreement semantics lost in some constructions with adverbs, suggesting such examples should be added to the MRS test suite (probably not only for Spanish); (iii) issues related to multiword expressions -- this is not an easily solved problem because there is no universal treatment of MWE (in computational linguistics generally) that would not involve serious trade-offs. Note once more that fixing the issues in the treebanks does not involve manipulating the treebanks; instead, the process is as follows: as issues are identified, appropriate improvements to the grammar (including the Freeling interface) can be made; then the grammar is run on all the sentences again, and the new treebanks are compared automatically with the old ones. Barring any major differences in the Freeling interface, the process is fast.
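To make the derivation of the coverage and accuracy figures in Table 1 concrete, here is a small illustrative sketch (ours, not part of the SRG or DELPH-IN tooling) of how such per-length figures can be aggregated from verified parse records; the record format is hypothetical.

```python
from collections import defaultdict

# Each hypothetical record: (sentence_length, has_some_parse, has_correct_parse),
# where has_correct_parse means the verified, intended analysis is among the parses.
records = [
    (3, True, True),
    (3, True, False),   # parsed, but no tree carries the desired semantics
    (4, False, False),  # no parse at all (coverage failure)
]

buckets = defaultdict(lambda: [0, 0, 0])   # length -> [total, parsed, correct]
for length, parsed, correct in records:
    b = buckets[length]
    b[0] += 1
    b[1] += parsed
    b[2] += correct

for length in sorted(buckets):
    total, parsed, correct = buckets[length]
    print(length, f"coverage={parsed / total:.2f}", f"accuracy={correct / total:.2f}")
```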
### COWSLH2

COWSLH2 is a corpus of written Spanish learner language developed at UC Davis [15]. The corpus contains over 100K sentences in the form of essays written by college students. Some sentences are annotated for gender and number agreement learner usages as well as for differential object marking learner usages. We have extracted a subcorpus of 1,168 reconstructed target sentences where two annotators agreed on the annotation (after fixing some issues, such as a wrong annotation format, for some of the items). Of these, for the purposes of this paper, we semi-randomly selected 100 sentences of length up to 8, of which 36 are considered "ungrammatical" in the sense that they showcase some learner usage not characteristic of proficient Spanish speakers.17 The remaining 64 are grammatical sentences. We ran the SRG on the sentences. Ideally, we would like the SRG to parse only the 64 grammatical ones. As for the ungrammatical ones, a grammar can reject them, or, if a grammar was adapted for learner usage, it might parse them with a special learner construction. Such learner constructions are not yet part of the SRG, and so ideally we would expect it to reject the ungrammatical sentences. Of course we know the SRG is not yet perfect, so the purpose of this exercise is to see where the room for improvement is. Footnote 17: The second author, whose first language is Spanish, verified the grammaticality of the sentences.

Table 2 shows the results of running the SRG on the 100 short sentences from the learner corpus. The coverage is 100%, meaning all of the 64 grammatical sentences were assigned some HPSG structure. However, that does not mean the corresponding semantics is the desired one; in that sense, we should look at accuracy, and we observe an accuracy of 87%, which matches the average accuracy of the grammar over the short sentences in TIBIDABO. But the learner corpus gives us a rare opportunity to assess the grammar's _overgeneration_: how many ungrammatical sentences get a parse?18 We see that on this sample dataset, the SRG shows a large overgeneration of 61%. This is not at all unexpected: controlling for overgeneration requires regularly testing the grammar with ungrammatical sentences, which is done routinely in e.g. the Grammar Matrix project (Bender et al., 2010), but since larger grammars typically prioritize coverage over large corpora, overgeneration can grow. Footnote 18: The alternative is to construct ungrammatical sentences by hand, which people seldom dedicate time to.

Consider one example of how a learner corpus helps us quickly find bugs in the grammar. The familiar sentence (1), repeated here as (2), is actually parsed by the SRG.

(2) Mis abuelos son personas famosos.
my.3pl grandparent.masc.pl be.3pl.pres.ind person.fem.3pl famous.masc.pl
Intended: 'My grandparents are famous people.' [spa; Yamada et al. 2020]
Examining the assigned structure, we see that the adjective _famosos_ is attached high in the tree, somewhat unexpectedly associating itself with the verb phrase rather than \begin{table} \begin{tabular}{l l l l l} sentence length & number of sentences & coverage & accuracy & times hit RAM limit \\ 1 & 65 & 1.0 & 1.0 & 0 \\ 2 & 177 & 0.94 & 0.94 & 0 \\ 3 & 181 & 0.91 & 0.89 & 0 \\ 4 & 219 & 0.91 & 0.86 & 0 \\ 5 & 229 & 0.92 & 0.87 & 0 \\ 6 & 211 & 0.91 & 0.83 & 0 \\ 7 & 246 & 0.91 & 0.76 & 0 \\ 8 & 278 & 0.93 & 0.82 & 0 \\ 9 & 326 & 0.92 & 0.78 & 5 \\ 10 & 359 & 0.91 & 0.76 & 3 \\ \hline all & 2291 & 0.92 & 0.82 & 8 \\ \end{tabular} \end{table} Table 1: SRG accuracy on the first 10 portions of the TIBIDABO treebank \begin{table} \begin{tabular}{l l} \hline coverage & accuracy & overgeneration \\ \hline 100\% & 87\% & 61\% \\ \hline \end{tabular} \end{table} Table 2: SRG accuracy and overgeneration on 100 learner sentences the noun (Figure 4). The semantics of such a structure is nonsensical. It appears as if either the phrase structure rule responsible for the apposition of the verb phrase and the "prepositional" phrase (more generally a modifier) is losing the agreement information (easily fixable by adding the agreement constraints), or, the unary rule that turns the adjectival phrase into a prepositional phrase applies too freely (which normally can also be blocked, although may involve less conventional constraints). In future development, such cases can be addressed, and then the accuracy of the modified grammar should be assessed on all of the treebanks verified so far (automatically, with only the differences requiring manual attention). In any case, using a learner corpus provides a great data driven approach to finding bugs in the grammar. We thus argue that learner corpora should be leveraged to control for overgeneration more systematically. That being said, some of the seemingly ungrammatical sentences are in fact syntactically possible but pragmatically implausible constructions. A good parse ranking model can help filter such examples out. ## 6 Conclusion and future work We presented the latest version of the Spanish Resource Grammar and have published its accuracy over a portion of the TIBIDABO treebank for the first time. The grammar's wide coverage can serve linguistic research (such as testing formal syntactic hypotheses) and NLP applications such as grammar coaching. The treebank, especially as it grows in the future, can serve training high-quality semantic parsers for Spanish. There are two main avenues for future work apart from general grammar development towards higher coverage and accuracy: reducing overgeneration and improving parsing speed. Addressing overgeneration is a grammar engineering objective that will naturally be targeted in the next iterations of the SRG development. Slow parsing remains a serious problem which requires applying new methods. Recent experiments with training supertaggers for the English Resource Grammar are very promising [23] however, due to a lot less training data, such a supertagger for the SRG will probably require a more complex method and/or bigger resources, such as a bigger and/or multilingual language model. ## 7 Limitations The main limitation of this work is the time cost of grammar engineering and treebanking. 
Due to the time costs involved, what we present here is work in progress, in the sense that the grammar does not yet cover some syntactic phenomena and some of its existing analyses can be improved: the overgeneration and the ambiguity should be reduced, for example. The results we present are only for sentences up to length 10, and some sentences cannot currently be parsed due to the parser limitations.
2303.18216
Hydrodynamical constraints on bubble wall velocity
The terminal velocity reached by bubble walls in first order phase transitions is an important parameter determining both the primordial gravitational-wave spectrum and the production of baryon asymmetry in models of electroweak baryogenesis. We developed a numerical code to study the real-time evolution of expanding bubbles and investigate how their walls reach stationary states. Our results agree with profiles obtained within the so-called bag model with very good accuracy; however, not all such solutions are stable and realised in dynamical systems. Depending on the exact shape of the potential, there is always a range of wall velocities where no steady state solutions exist. This behaviour in deflagrations was explained by a hydrodynamical obstruction, whereby solutions that would heat the plasma outside the wall above the critical temperature and cause local symmetry restoration are forbidden. For the even more affected hybrid solutions the causes are less straightforward; however, we provide a simple numerical fit allowing one to verify whether a solution with a given velocity is allowed, simply by computing the ratio of the nucleation temperature to the critical one for the potential in question.
Tomasz Krajewski, Marek Lewicki, Mateusz Zych
2023-03-31T17:16:40Z
http://arxiv.org/abs/2303.18216v2
# Hydrodynamical constraints on bubble wall velocity ###### Abstract Terminal velocity reached by bubble walls in first order phase transitions is an important parameter determining both primordial gravitational-wave spectrum and production of baryon asymmetry in models of electroweak baryogenesis. We developed a numerical code to study the real-time evolution of expanding bubbles and investigate how their walls reach stationary states. Our results agree with profiles obtained within the so-called bag model with very good accuracy, however, not all such solutions are stable and realised in dynamical systems. Depending on the exact shape of the potential there is always a range of wall velocities where no steady state solutions exist. This behaviour in deflagrations was explained by hydrodynamical obstruction where solutions that would heat the plasma outside the wall above the critical temperature and cause local symmetry restoration are forbidden. For even more affected hybrid solutions causes are less straight forward, however, we provide a simple numerical fit allowing one to verify if a solution with a given velocity is allowed simply by computing the ratio of the nucleation temperature to the critical one for the potential in question. ## 1 Introduction Phase transitions are a common feature of particle physics models. If they are first order they can open a path to numerous phenomena such as generation of the baryon asymmetry [1; 2; 3; 4] and production of a stochastic background of GWs [5; 6; 7; 8]. Significant progress has been made recently in understanding fine details of the dynamics of such transitions necessary to describe the intricate relation between these possibilities [9; 10; 11; 12; 13; 14; 15; 16; 17]. Despite that evaluation, the bubble-wall velocity in the stationary state remains to be one of the more problematic issues. Given its impact both on the amplitude of the gravitational-wave signal as well as the production of the baryon asymmetry this has to be solved to finally pinpoint the interplay between the two signals. Contrary to nucleation temperature or transition strength, the wall velocity is not a straightforward consequence of the shape of the effective potential. The standard WKB method of computing the velocity involves solving a set of Boltzmann equations in the vicinity of the bubble wall in order to find the friction the plasma will enact on the expanding wall. However, the result still crucially relies on the hydrodynamical solution for the plasma profile [11; 14; 18]. It is a standard practice to use the plasma behaviour obtained in the bag model in these studies. The obvious drawback of this approach is that the bag equation of state (EOS) inherently neglects all knowledge of the potential except the energy difference between its minima [19]. In this work, we investigate the impact of detailed features of the potential on hydrodynamical solutions for the plasma. To this end, we perform lattice simulations tracking the real-time evolution of the bubble-wall profiles. We focus on the hydrodynamical solutions for a single expanding bubble using novel methods that allow us to resolve shocks properly and prevent the appearance of unphysical artefacts. Our method involves algebraic flux-corrected transport (FCT), described in [20; 21; 22; 23] with an improved version of Zalesak's limiter [24]. This allows us to study in detail how the system reaches a stationary state and compare these late-time profiles with analytical approximations. 
We take a closer look at the problem of hydrodynamical obstruction and find a large class of unstable solutions that constitute a forbidden range for bubble-wall velocities below the Jouguet velocity. The paper is structured as follows. In section 2 we discuss the details of the set-up including the exact model and form of equations of motions that govern its evolution as well as analytical approximations for the solution. Sec. 3 is devoted to the results of our simulations including the dependence of the solutions on the temperature and vacuum expectation value (vev) of the field. Here we also give a simple fit allowing one to approximate the forbidden region in any potential. We conclude in section 4. Appendix A contains finer details of our numerical set-up. ## 2 Modeling In this section, we discuss details of the model we will work with as well as its analytically solvable simplification. We derive equations of motion for the scalar field coupled to the perfect fluid and discuss nucleation conditions which we later use to initialize the evolution. Finally, we briefly discuss the steady state description known as the bag model. ### Scalar field coupled to perfect fluid In this work, we investigate a well-known system consisting of the scalar field \(\phi\) coupled to the perfect fluid described by its temperature \(T\) and local flow four-velocity \(u\)[25; 26; 27; 28; 29]. The equation of state is given by \[\epsilon(\phi,T) =3aT^{4}+V(\phi,T)-T\frac{\partial V}{\partial T}\,, \tag{1}\] \[p(\phi,T) =aT^{4}-V(\phi,T)\,, \tag{2}\] where \(a=(\pi^{2}/90)g_{*}\) and \(w\equiv\epsilon+p\). For the effective potential \(V(\phi,T)\) we use a simple polynomial potential augmented with high-temperature corrections parameterized as \[V(\phi,T)=\frac{1}{2}\gamma(T^{2}-T_{0}^{2})\phi^{2}-\frac{1}{3}\delta T\phi^ {3}+\frac{1}{4}\lambda\phi^{4}\,. \tag{3}\] The energy-momentum tensor of the system is a sum of energy-momentum tensors for the field and the fluid: \[T_{\rm field}^{\mu\nu} =\partial^{\mu}\phi\partial^{\nu}\phi-g^{\mu\nu}\left(\frac{1}{2} \partial_{\alpha}\phi\partial^{\alpha}\phi\right)\,, \tag{4}\] \[T_{\rm fluid}^{\mu\nu} =wu^{\mu}u^{\nu}+g^{\mu\nu}p\,, \tag{5}\] where \(p\) is the pressure of the perfect fluid. We use spherical coordinates in space as they capture the symmetry of a single growing bubble that we intend to simulate 1. The line element \(ds\) for the flat space-time takes the following form in these coordinates: Footnote 1: We neglect the possible instabilities that have been suggested in planar wall propagation [30]. \[ds^{2}=-dt^{2}+dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right)\,. \tag{6}\] The energy-momentum tensor of the system is conserved (\(\nabla_{\mu}T^{\mu\nu}=0\)), however, both contributions are not conserved separately due to extra coupling term parameterised by the effective coupling of the fluid and scalar: \[\nabla_{\mu}T^{\mu\nu}_{\rm field} =\frac{\partial V}{\partial\phi}\partial^{\nu}\phi+\eta u^{\mu} \partial_{\mu}\phi\partial^{\nu}\phi\,, \tag{7}\] \[\nabla_{\mu}T^{\mu\nu}_{\rm fluid} =-\frac{\partial V}{\partial\phi}\partial^{\nu}\phi-\eta u^{\mu} \partial_{\mu}\phi\partial^{\nu}\phi\,, \tag{8}\] where \(\eta\) is a constant parametrizing strength of this interaction. 
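As a quick numerical sanity check on the parametrization (1)-(3), the short sketch below evaluates the potential and the corresponding fluid quantities and verifies that, for the \(M_{1}\) benchmark listed later in Table 1, the broken minimum of \(V(\phi,T)\) becomes degenerate with \(\phi=0\) at the quoted critical temperature \(T_{c}=100\) GeV. This is only an illustrative script (plain Python, not our lattice code), and \(g_{*}=106.75\) is an assumed value used solely to make the radiation constant \(a\) concrete.

```python
import numpy as np

# Benchmark M1 of Table 1 (GeV units where dimensionful).
gamma, delta, lam = 1/18, np.sqrt(10)/72, 10/648
T0, Tc = 100/np.sqrt(2), 100.0
gstar = 106.75                        # assumed; only fixes the radiation constant a
a_rad = np.pi**2 * gstar / 90

def V(phi, T):                        # effective potential, eq. (3)
    return 0.5*gamma*(T**2 - T0**2)*phi**2 - delta*T*phi**3/3 + lam*phi**4/4

def dV_dT(phi, T):
    return gamma*T*phi**2 - delta*phi**3/3

def pressure(phi, T):                 # eq. (2)
    return a_rad*T**4 - V(phi, T)

def energy(phi, T):                   # eq. (1)
    return 3*a_rad*T**4 + V(phi, T) - T*dV_dT(phi, T)

# At T = Tc the broken minimum should have V = 0, i.e. be degenerate with phi = 0.
phi = np.linspace(50, 400, 200001)    # scan away from the symmetric minimum
i = np.argmin(V(phi, Tc))
print("broken minimum near phi =", round(phi[i], 1),
      " V(min) ~", V(phi[i], Tc))     # ~0 up to grid resolution
print("enthalpy w = e + p in the symmetric phase at Tc:",
      energy(0.0, Tc) + pressure(0.0, Tc))
```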
The left-hand side of the equation (7) contains wave equation in spherical coordinates and leads to the equation of motion \[-\partial_{t}^{2}\phi+\frac{1}{r^{2}}\partial_{r}(r^{2}\partial_{r}\phi)- \frac{\partial V}{\partial\phi}=\eta\gamma(\partial_{t}\phi+v\partial_{r} \phi)\,. \tag{9}\] Due to the spherical symmetry of our problem, the four-velocity of the perfect fluid takes the form \(u=(\gamma,\gamma v,0,0)^{T}\) with \(\gamma:=(1-v^{2})^{-1}\). We will determine the equations governing the evolution of two parameters \(v\) and \(p\) considering time (\(\nu=0\)) and radial component (\(\nu=1\)) of eq. (8): \[\nabla_{\mu}T^{\mu 0}_{\rm fluid} =\nabla_{\mu}\left(wu^{\mu}u^{0}+g^{\mu 0}p\right)=\partial_{t} (w\gamma^{2}-p)+\frac{1}{r^{2}}\partial_{r}(r^{2}w\gamma^{2}v)\,, \tag{10a}\] \[\nabla_{\mu}T^{\mu 1}_{\rm fluid} =\nabla_{\mu}\left(wu^{\mu}u^{1}+g^{\mu 1}p\right)=\partial_{t} (w\gamma^{2}v)+\frac{1}{r^{2}}\partial_{r}\left(r^{2}w\gamma^{2}v^{2}\right)+ \partial_{r}p\,. \tag{10b}\] Introducing new variables \(Z:=w\gamma^{2}v\) and \(\tau:=w\gamma^{2}-p\) we get \[\partial_{t}\tau+\frac{1}{r^{2}}\partial_{r}(r^{2}(\tau+p)v) =\frac{\partial V}{\partial\phi}\partial_{t}\phi+\eta\gamma( \partial_{t}\phi+v\partial_{r}\phi)\partial_{t}\phi\,, \tag{11a}\] \[\partial_{t}Z+\frac{1}{r^{2}}\partial_{r}\left(r^{2}Zv\right)+ \partial_{r}p =-\frac{\partial V}{\partial\phi}\partial_{r}\phi-\eta\gamma( \partial_{t}\phi+v\partial_{r}\phi)\partial_{r}\phi\,. \tag{11b}\] The final step needed to evolve the system numerically is to discretize and solve system of equations presented above. All technical details of this procedure are described in Appendix A. ### Nucleation of the bubbles The system we are interested in consists of a single bubble growing in the fluid background. Each of our simulations is initialized with a recently nucleated bubble of the scalar field. robability of tunneling at temperature \(T\) is computed from the bubble nucleation rate [31; 32; 33; 34] \[\Gamma(T)=A(T)\mathrm{e}^{-S}\,. \tag{12}\] For tunneling in finite temperatures the Euclidean action \(S=\frac{S_{3}}{T}\) and \(A(T)=T^{4}\left(\frac{S_{3}}{2\pi T}\right)^{\frac{3}{2}}\). In order to obtain the critical bubble, one needs to find the nucleation temperature \(T_{n}\) at which the probability of a true vacuum bubble forming within a horizon radius becomes significant [35] \[N(T_{n})=\int_{T_{n}}^{T_{c}}\frac{dT}{T}\frac{\Gamma(T)}{H(T)^{4}}\approx 1, \tag{13}\] where \(T_{c}\) denotes the critical temperature in which both minima are degenerate. Assuming \(H(t)\approx\) const, this condition reduces to \[\frac{S_{3}}{T_{n}}\approx 4\log\left(\frac{T_{n}}{H}\right), \tag{14}\] which for temperatures around the electroweak scale gives \(S_{3}/T_{n}\approx 140\)[6]. In the case of polynomial potentials, the critical action can be easily evaluated using the semi-analytical approximation described in Appendix B. An important parameter characterizing transition strength is the amount of the vacuum energy released in the transition \(\alpha\), normalized to the energy of the radiation bath \(\rho_{r}\). In the fluid approximation, it can be defined as \[\alpha_{\theta}=\frac{\theta_{s}-\theta_{b}}{\rho_{r}}\Big{|}_{T=T_{n}}\,, \tag{15}\] where \(\theta\) is the trace anomaly in the symmetric (s) and broken (b) phase, given by the expression \[\theta=\frac{1}{4}(\epsilon-3p). 
\tag{16}\] Note, that such a definition of the trace anomaly applied to the equation of state (1)-(2) corresponds to a frequently used definition of \(\alpha=\frac{1}{\rho_{r}}\left(\Delta V-\frac{T}{4}\Delta\frac{\partial V}{ \partial T}\right)\)[36; 37]. ### Analytical approximation: bag model A simple model explaining analytically many important features of the late-time evolution is the bag model [19]. It assumes that the cosmic plasma coexists in two phases: * Symmetric phase outside the bubble * Broken phase inside the bubble. However, it does not include the scalar field explicitly. The equation of state in the bag model reads \[\epsilon_{s} =3a_{s}T_{s}^{4}+\theta_{s} \epsilon_{b} =3a_{b}T_{b}^{4}+\theta_{b} \tag{17}\] \[p_{s} =a_{s}T_{s}^{4}-\theta_{s} p_{b} =a_{b}T_{b}^{4}-\theta_{b}, \tag{18}\] where \(\theta_{s}\) and \(\theta_{b}\) are constants and usually one assumes \(\theta_{b}=0\). Therefore the strength of the transition can be consistently defined with the equation (15). Assuming that the plasma is locally in equilibrium, the energy-momentum tensor can be parameterized for the perfect fluid as: \[T^{\mu\nu}=wu^{\mu}u^{\nu}-g^{\mu\nu}p. \tag{19}\] Conservation of \(T^{\mu\nu}\) along the flow and its projection perpendicular to the flow respectively give \[\partial_{\mu}(u^{\mu}w)-u_{\mu}\partial^{\mu}p =0, \tag{20}\] \[\bar{u}^{\nu}u^{\mu}w\partial_{\mu}u_{\nu}-\bar{u}^{\nu}\partial_ {\mu}p =0, \tag{21}\] with \(\bar{u}_{\mu}u^{\mu}=0\) and \(\bar{u}^{2}=-1\). As there is no characteristic distance scale in the problem, the solution should depend only on the self-similar variable \(\xi=r/t\), where \(r\) denotes the distance from the center of the bubble and \(t\) is the time since nucleation. Changing the variables, equations (20) and (21) take the form \[(\xi-v)\frac{\partial_{\xi}\epsilon}{w} =2\frac{v}{\xi}+[1-\gamma^{2}v(\xi-v)]\partial_{\xi}v, \tag{22}\] \[(1-v\xi)\frac{\partial_{\xi}p}{w} =\gamma^{2}(\xi-v)\partial_{\xi}v \tag{23}\] and using the definition of the speed of sound in the plasma \(c_{s}\equiv\frac{\mathrm{d}p}{dT}/\frac{\mathrm{d}c}{dT}\) can be combined into the single equation describing the plasma velocity profile \(v(\xi)\) in the frame of the bubble center \[2\frac{v}{\xi}=\gamma^{2}(1-v\xi)\left[\frac{\mu^{2}}{c_{s}^{2}}-1\right] \partial_{\xi}v, \tag{24}\] with \(\mu=\frac{\xi-v}{1-\xi v}\) denoting the Lorentz-transformed fluid velocity. Solutions of the equation (24) in general depend only on the transition strength \(\alpha\) and bubble-wall velocity in the stationary state \(\xi_{w}\). In a similar way, analytical profiles for the enthalpy \(w\), temperature \(T\) and other thermodynamical quantities can be obtained. Later we will refer to them to compare the results of our simulations with the analytical solutions. Detailed derivations are described in [14; 19; 38; 39]. In general, there exist three types of the bubble-wall profiles: 1. **Deflagrations** are the solutions with subsonic bubble-wall velocity \(\xi_{w}\). In such a case, expanding bubble pushes the plasma in front of it, while behind the bubble wall plasma remains at rest. Typically value of \(v\) decreases with \(\xi\) in the range \([\xi_{w},c_{s}]\) and vanishes for \(\xi>c_{s}\). Therefore a shock front at \(\xi=c_{s}\) may appear if the transition is strong enough. 2. **Detonations** are supersonic solutions, for which bubble-wall velocity exceeds Jouget velocity. In this type of profile, the wall hits plasma which remains at rest in front of the bubble. 
As fluid enters the broken phase, it slows down smoothly and reaches zero at \(\xi=c_{s}\). 3. **Hybrids** are combinations of the two types mentioned above. They are realised for \(\xi_{w}\in[c_{s},c_{J}]\) and possess features of deflagrations (shock front in front of the wall) and detonations (non-zero plasma velocity behind the wall known as a rarefaction wave). All three types of solutions are schematically depicted in Fig. 1. The Jouget velocity \(c_{J}\) at which the shell around the bubble disappears and the solution shifts from hybrid to detonation is given by Chapman-Jouguet condition [14; 40] \[c_{J}=\frac{1}{\sqrt{3}}\frac{1+\sqrt{1+3\alpha^{2}+2\alpha}}{1+\alpha}. \tag{25}\] ## 3 Results from numerical simulations In this section, we will discuss the results of our numerical simulations. We start with the validation of our method on two benchmark points already studied in the past. Next, we move on to our main results on existence of the gap in the allowed fluid solutions impacting the realisation of hybrids. We discuss the role of key parameters that is the temperature of the transition and the vev of the field. Every simulation is performed on the lattice with \(\delta r=0.01\) GeV\({}^{-1}\) and \(\delta t=0.001\) GeV\({}^{-1}\). The time duration of the evolution is large enough to asymptotically achieve stationary states and is set to \(t_{max}=120\) GeV\({}^{-1}\). Similarly, the physical size of the lattice is fixed as \(R=ct_{max}\) which is large enough to prevent reaching the boundaries by the bubbles, since they expand subluminally. We initialize each simulation with the recently nucleated bubble, fixing the field configuration to the critical profile and setting \(T=T_{n}\) and \(v=0\) everywhere. The value of the friction parameter depends on the field content of the model and as a result, we keep it as a free parameter. We logarithmically vary it in the range \(\eta/T_{c}\in[0.01,1]\), independently checking around 75 values for every scalar potential which is enough to map all the allowed classes of solutions in each case. Figure 1: _Schematic representation of three different types of expanding bubbles. Colour saturation denotes the value of the plasma velocity, while black circles represent the position of the bubble wall._ ### Benchmark points In order to evaluate the performance of our code, we initialized runs with two benchmark points that were already studied in similar a context [28]. All important parameters characterizing these models are summarized in Table 1. For both of them, we perform a scan with respect to the friction parameter \(\eta\), which is the only free parameter. In accordance with previous studies, steady-state wall velocity \(\xi_{w}\) grows as the friction becomes smaller [27]. The general shape of this correspondence was also confirmed, however, the exact form of the curve depends on the choice of the potential parameter and will be discussed later in this paper. For the stronger transition (\(M_{2}\)) we managed to obtain all three types of solutions, while for the weaker one (\(M_{1}\)) no hybrids were realised. We, therefore, confirm the presence of the velocity gap in the region where one expects hybrid profiles. 
Such a \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Model & \(T_{c}\) [GeV] & \(T_{0}\) [GeV] & \(\gamma\) & \(\delta\) & \(\lambda\) & \(T_{n}\) [GeV] & \(\alpha_{\theta}\) \\ \hline \(M_{1}\) & 100 & \(\frac{100}{\sqrt{2}}\) & \(\frac{1}{18}\) & \(\frac{\sqrt{10}}{72}\) & \(\frac{10}{648}\) & 86 & 0.005 \\ \(M_{2}\) & 100 & \(\frac{100}{\sqrt{2}}\) & \(\frac{2}{18}\) & \(\frac{\sqrt{10}}{72}\) & \(\frac{5}{648}\) & 80 & 0.05 \\ \hline \end{tabular} \end{table} Table 1: Model parameters for the benchmark points Figure 2: _Relation between the friction parameter \(\eta\) and bubble-wall velocity in the stationary state \(\xi_{w}\) (left panels). Overview of the corresponding velocity profiles for each point (right panels). The first row corresponds to the weaker transition (\(M_{1}\)), while the second to the stronger one (\(M_{2}\))._ gap appears in both cases, covering the whole \(\xi_{w}\in[c_{s},c_{J}]\) range for \(M_{1}\) and allowing to continue deflagration branch towards solutions with supersonic wall velocity for \(M_{2}\). The details of this phenomenon were not well understood so far and constitute the focus of our interest in the next section. Results of the scan are presented in Fig 2. The shapes of Figure 3: _Time evolution of the plasma shell profiles for plasma velocity \(v\) and enthalpy \(w\) as functions of the self-similar variable \(\xi=r/t\). In general, all three types of solutions were found: deflagrations (first row), hybrids (second row) and detonations (third row). Different shades of blue correspond with the time flow. Profiles evolve towards stationary states represented by darker colours. Red, dashed lines denote analytical profiles obtained within the bag model._ stationary profiles are in very good agreement with the predictions of the bag model 2. The comparison of the results of our simulations and the analytical profiles for three representative examples is shown in Fig. 3. As one can see, we managed to resolve the shocks and reproduce the form of hybrids with very good accuracy, which typically was challenging in previously existing results involving dynamical codes. Footnote 2: Recent \(N-\)body simulations [41] foregoing perfect fluid and treating plasma as individual particles also found qualitatively the same fluid solutions. ### Dependence on the vacuum expectation value As shown in the previous section, the exact form of the relation between the friction \(\eta\) and the terminal wall velocity \(\xi_{w}\), in general, depends on the parameters of the potential. In order to check the dependence on the vacuum expectation value of the scalar field, besides the transition strength \(\alpha_{\theta}\) we fixed also nucleation temperature \(T_{n}/T_{c}\) and check different realisations of such transitions. Fig. 4 shows that as \(\eta\sim\phi^{-1}\), field value in the true vacuum \(v_{0}\) fully determines the position of the gap in terms of friction parameter \(\eta\). Therefore we conclude that the shape of the \(\xi_{w}(\eta)\) relation can be completely explained within terms of \(\alpha_{\theta}\) and \(T_{n}/T_{c}\) exclusively. Moreover, this dependence should be universal for a much wider class of models involving polynomial potentials, as it does not explicitly involve any model-dependent couplings. 
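For orientation, the kinematic classification underlying these scans is easy to reproduce. The sketch below evaluates the Chapman-Jouguet velocity and sorts a given steady-state wall velocity into the deflagration/hybrid/detonation regimes described above; note that eq. (25) as transcribed appears to carry a stray "1+" under the square root, so we use the standard bag-model radicand \(3\alpha^{2}+2\alpha\), which correctly reduces \(c_{J}\) to \(c_{s}=1/\sqrt{3}\) as \(\alpha\to 0\). The numbers are purely illustrative.

```python
import math

cs = 1 / math.sqrt(3)                  # speed of sound in the bag model

def c_jouguet(alpha):
    """Chapman-Jouguet velocity, cf. eq. (25) with radicand 3*alpha**2 + 2*alpha."""
    return (1 + math.sqrt(3*alpha**2 + 2*alpha)) / (math.sqrt(3) * (1 + alpha))

def profile_type(xi_w, alpha):
    """Deflagration / hybrid / detonation classification of a steady-state wall."""
    if xi_w < cs:
        return "deflagration"
    return "hybrid" if xi_w < c_jouguet(alpha) else "detonation"

for alpha in (0.005, 0.05, 0.1):       # the M1/M2 benchmarks have alpha = 0.005 and 0.05
    print(f"alpha = {alpha}:  c_J = {c_jouguet(alpha):.3f}")

print(profile_type(0.45, 0.05), profile_type(0.62, 0.05), profile_type(0.85, 0.05))
```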
### Dependence on the temperature As we have already shown the exact form of the \(\xi_{w}(\eta)\) relation is not unique and depends not only on the strength of the transition \(\alpha\), but also on other parameters defining the scalar potential. To illustrate this, we study a set of different potentials for which \(\alpha_{\theta}\) and \(T_{c}\) are fixed, however, the parameters in the potential are chosen such that they predict different nucleation temperatures \(T_{n}\). Relations between \(\eta\) and \(\xi_{w}\) for \(\alpha_{\theta}\in\{0.05,0.1\}\) and a range of \(T_{n}\) are shown in Fig 5 (left panels). We can see that higher nucleation Figure 4: _Relation between the friction parameter \(\eta\) and bubble-wall velocity in the stationary state \(\xi_{w}\) for the fixed nucleation temperature \(T_{n}=76\) GeV (left panel) and \(T_{n}=92\) GeV (right panel). Different colours represent different positions of the true vacuum._ temperatures lead to a wider velocity gap, while for lower temperatures, almost the entire range of wall velocities can be covered. Note that nucleation temperatures very close to the critical temperature limit the bubble wall velocity for both deflagrations and hybrids so the speed of sound is never reached. This dependence is made clearer in the right column, where we show values of temperature at the peak of the bubble-wall profile for different nucleation temperatures and values of \(\alpha_{\theta}\). As we see in general it is not possible to find a stationary state if the temperature profile significantly exceeds the critical temperature. This is an important condition for the part of the velocity gap below \(c_{s}\) and indeed this hydrodynamical obstruction was already proposed in the small velocity limit [42], where the authors derived an approximation of the maximal subsonic wall-velocity. Our results agree roughly with those limits when the nucleation temperature is very close to the critical one. However, we found that a similar behaviour continues for much lower temperatures and eventually also supersonic solutions are affected. The mechanism itself in those cases becomes less straightforward, though, as the temperature reached within the shells is significantly below the critical one when the instability sets in. Fig. 6 shows the maximal wall velocity reached by the deflagration/hybrid solutions as a function of the nucleation temperature for different transition strengths. Given that in this limit of relatively large Figure 5: _Relation between the friction parameter \(\eta\) and bubble-wall velocity in the stationary state \(\xi_{w}\) (left panels) and the maximum of the plasma temperature along the profile (right panels). Potentials are chosen such that the strength of the transition is fixed to \(\alpha_{\theta}=0.05\) (upper row) and \(\alpha_{\theta}=0.1\) (lower row). The value of the nucleation temperature \(T_{n}\) is encoded with the colour._ strength the result for hybrids does not depend significantly on \(\alpha\) we found a simple fit \[\xi_{w}^{max}=\left(1-\frac{T_{n}}{T_{c}}\right)^{k}\quad\text{with}\quad k=0.276 8\pm 0.0055, \tag{10}\] which can be used as a rough approximation for the upper bound for wall velocities. It has a similar origin as the relation proposed in [42], but is augmented with additional suppression that we observed for low temperatures, described by the power \(k\). ## 4 Summary We investigate the fluid solutions realised in the presence of growing bubbles in cosmological first order phase transitions. 
We use numerical lattice simulations using spherical symmetry of the system and compare results to the well known analytical solutions. We found good agreement between the analytical profiles and our numerical results whenever the latter exist. Our key result, however, is that the hydrodynamical obstruction preventing the realisation of fast hybrids is very generic. In fact, we always find some solutions to be excluded and the gap in solutions becomes wider as the temperature at which bubbles nucleate predicted by the potential is closer to the critical temperature at which the minima in the potential are degenerate. In extreme cases where the temperatures are very close, no hybrid solutions are realised and as the friction drops the allowed solutions jump from subsonic deflagrations straight to detonations. The mechanism behind the obstruction is well understood in the case of deflagrations where the temperature profiles in the gap that are not realised would simply heat the plasma above the critical temperature and Figure 6: _Left panel: Coloured regions indicate the forbidden regions of wall velocities. Coloured dashed lines indicate the Jouguet velocities for the two values of \(\alpha\) (see Eq. (25)) while the dashed grey line labels the speed of sound. At high temperatures, Hydrodynamic obstruction limits the velocities of both detonation and deflagration solutions. At lower temperatures detonations are realised above the Jouguet velocity as expected, however, we find the obstruction limiting the maximal velocity of hybrids persists resulting in a range of hydrodynamical solutions that are not realised. High-temperature part of the limit on detonations relies on extrapolation and should be treated as a qualitative trend. Right panel: Our data for the maximal wall velocity on the deflagration/hybrid branch as a function of the nucleation temperature \(T_{n}\) together with the fit from Eq. (10) (solid line) and its variation within \(3\sigma\) of the best-fit parameters (dashed lines)._ reverse the transition. In the case of hybrids the mechanism is more complicated and even solutions that do not reheat to such dangerous levels are not realised. While the effect is yet to be confirmed directly in particular models we expect it to be general. Our calculations were performed for a simple toy potential, however, we express them in terms of general characteristics shared by all models predicting a first order transition. The existence of the velocity gap will have a crucial impact on predictions of models realising electroweak baryogenesis. This is due to the fact that the fastest walls that did not accelerate enough to become detonations are the ones most likely to be affected and the effect would persist even in low temperatures. Such solutions were recently shown to predict the largest baryon yields. Thus our results are likely to exclude parts of the parameter space of models most promising for electroweak baryogenesis and impact their viability as solutions to the problem of baryon asymmetry. ## Acknowledgements The authors would like to thank Jose Miguel No for fruitful discussions. This work was supported by the Polish National Agency for Academic Exchange within Polish Returns Programme under agreement PPN/PPO/2020/1/00013/U/00001 and the Polish National Science Center grant 2018/31/D/ST2/02048. T.K. was supported by grant 2019/32/C/ST2/00248 from the Polish National Science Centre. During the completion of this work, T.K. 
was supported by grant 2019/33/B/ST9/01564 from the Polish National Science Centre. T.K. acknowledges the hospitality of Rudolf Peierls Centre for Theoretical Physics at Oxford University, where parts of this work have been done. Discretization of the equations of motion In order to obtain a numerical approximation of solutions of eqs. (9) and (11) we use the finite elements method, both in time and space. To discretise in space we used the discontinuous Galerkin method. Our elements are just intervals of length \(\delta r\) in the computational domain \([0,R]\). We used values of \(R\) large enough to guarantee that the wall of the bubble is far from \(r=R\) during the whole simulation, thus the choice of the boundary condition at this point does not influence the results. Wave equation (9) describing the evolution of the scalar field is treated with a mixed scheme, i.e. we introduced auxiliary variable \(\psi=\partial_{r}\phi\) which is interpolated using discontinuous piece-wise linear interpolation functions. Using the generalised trapezoid rule as numerical quadrature we obtained a second order scheme which is a generalization of the central finite difference scheme of the second order in Cartesian coordinates. In the center of the bubble (\(r=0\)) we assumed the Neumann boundary condition for field \(\phi\) which in the mixed formulation is just \(\left.\psi\right|_{r=0}=0\). At the far edge of the computational lattice (\(r=R\)) we assumed the Dirichlet boundary condition setting the field value to the location of the false minimum. In order to discretize the equation of motion of the field in time we used the discontinuous Galerkin method [45; 46; 47; 48; 49; 50; 44; 45]. The discontinuous piece-wise linear interpolation functions for \(\phi\) and right-discontinuous linear interpolation for time derivative \(\dot{\phi}\) result in a scheme mimicking the well-known position version of Stromer-Verlet scheme. Deriving a numerical scheme for equations describing the evolution of plasma is somewhat more involved. We base our method on algebraic flux-corrected transport (FCT) proposed in [20; 21; 22; 23]. Since the fluxes in eq. (11) are determined in terms of both conserved and so-called primitive variables \(v\), \(T\) (and derived from them \(p\)) one has to determine primitive ones from \(\phi\), \(\tau\) and \(Z\) which are evolved in the code. In order to do so we combine (1) and (2) to find \[\tau+p(\phi,T)-\frac{1}{2}\left(w(\phi,T)+\sqrt{w(\phi,T)^{2}+4Z^{2}}\right)=0 \tag{12}\] which we solve using the Raphson-Newton method to find the value of the temperature \(T\). Then, \(w\) and \(p\) can be directly computed and the velocity \(v\) can be simply computed by inverting the definition of \(Z\). In order to derive the high-order (in our case second order) scheme for FCT procedure we used the local discontinuous Galerkin method with piece-wise constant interpolation functions for conserved quantities \(Z,\tau\), thus our scheme is similar to a finite volume method. To discretize the high-order scheme in time we use the midpoint method which can be derived as the discontinuous Galerkin method in time [45; 46; 47; 48; 49; 50; 44; 45]. Our low-order scheme is obtained by the algebraic up-winding of the high-order scheme as described in [20; 21; 22; 23]. The result is a scheme similar to the well-known Godunov scheme. 
To integrate the low-order scheme in time we used the backward Euler method since the forward Euler method turned out to be unstable for certain cases in the neighbourhood of center of the bubble \(r=0\) and we exchanged the speed of the simulations in favour of the robustness of results. The sacrifice is not very severe, since up-winded advection matrix is band-limited with non-zero terms above diagonal only for nodes with negative velocity \(v\) of plasma flow, so the implicit scheme can be efficiently implemented using Thomas' algorithm. The problem that we wanted to solve turned out to be demanding and we had to develop a new limiting procedure that will work properly during the whole simulation. Our attempt is based on well-known Zalesak's limiter [24] in its peak-preserving version [51] corrected by the idea inspired by [52] to restrict distances from which the conserved quantity values should be considered. The main observation is that the time step \(\delta t\) used to integrate in time equations needs to satisfy the Courant-Friedrichs-Lewy condition, i.e. \[\delta t\leq C\min\bigg{(}\frac{\delta r}{|v|}\bigg{)}, \tag{10}\] where \(C\) is the so-called Courant number which depends on the used discretization scheme and \(|v|\) is the maximal speed of propagation. For explicit time integration typically \(C\lesssim 1\), thus the distance from which conserved quantity can be transported in a time step to a node which is bounded by the product \(\delta t|v|\) must be smaller than the lattice spacing \(\delta r\). As a result, it is more consistent to use in the limiter the values of conserved quantity in a distance of \(\delta t|v|\) only, and not the values from the next node. Even though we conservatively assumed that the maximal speed is the speed of light, this correction significantly improved the robustness of our scheme. Finally, the right-hand side of equations (9) and (11) can be consistently discretized using the Galerkin method with interpolation functions introduced above. Even though, interpolation functions in time were chosen in such a way to obtain explicit schemes even when \(\eta\) dependent terms are included, the implicit term for \(\dot{\phi}\) arises from the right-hand side of (9). Fortunately, the dependence on \(\dot{\phi}\) and the implicit equation can be solved exactly. ## Appendix B General results for polynomial potentials The simplest polynomial renormalizable potential takes the form \[V(\phi)=m^{2}\phi^{2}-a\phi^{3}+\lambda\phi^{4}, \tag{11}\] where \(m^{2}\), \(a\) and \(\lambda\) may depend on the temperature. For such potential, there exists an accurate semi-analytical approximation of the critical action[39; 53]: \[\frac{S_{3}}{T}=\frac{a}{T\lambda^{3/2}}\frac{8\pi\sqrt{\beta_{1}\delta+\beta _{2}\delta^{2}+\beta_{3}\delta^{3}}}{81(2-\delta)^{2}}, \tag{12}\] where \(\delta=\frac{8\lambda m^{2}}{a^{2}}\) and \(\beta_{1}=8.2938\), \(\beta_{2}=-5.5330\), \(\beta_{3}=0.8180\). 
Therefore the nucleation rate for the potential (2.3) may be estimated as \[\frac{\Gamma}{H^{4}}=\frac{T^{4}\exp\left(-\frac{S_{3}}{T}\right)}{\rho_{r}+ \rho_{V}}=\frac{\exp\left(-\frac{8\pi\alpha(\frac{4\xi}{\lambda})^{3/2}(\beta_ {1}+\beta_{2}\delta+\beta_{3}\delta^{2})}{243(\delta-2)^{2}}\right)}{\left( \frac{T_{0}^{2}}{1-\frac{g^{2}a^{2}}{9\pi^{2}\gamma\lambda}\delta}\right)^{2} \left(\frac{1}{3M_{pl}^{2}}\right)^{2}\left(\frac{\pi^{2}q_{s}}{30}+(4\pi \alpha)^{4}\frac{(\sqrt{9-4\delta}+3)^{2}(\sqrt{9-4\delta}+3-2\delta)}{2\cdot 2 4^{4}\pi^{4}\lambda^{3}}\right)^{2}}.\] (B.3) This expression depends on the temperature only through dimensionless parameter \(\delta\), which varies from \(\delta=0\) at \(T_{0}\) to \(\delta=2\) at \(T_{c}\). Using this significantly simplifies the calculations, as the value of \(\delta\) for which \(\frac{\Gamma}{H^{4}}=1\) can be easily translated into \(T_{n}\).
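The appendix formulas translate directly into a small estimator: given the couplings of the potential (3), one can evaluate the semi-analytical action (B.2), root-find the temperature at which \(S_{3}/T\approx 140\) (the crude criterion (14)), and then bound the attainable deflagration/hybrid wall velocities with the fit \(\xi_{w}^{max}=(1-T_{n}/T_{c})^{k}\) quoted in the section on temperature dependence. The sketch below does this for the \(M_{1}\) benchmark; since the \(T_{n}\) values quoted in Table 1 follow from the full nucleation condition (13)/(B.3) rather than from the flat \(S_{3}/T\approx 140\) criterion, the output should only be expected to agree with them approximately.

```python
import math

# Potential (3), benchmark M1 of Table 1.
gamma_c, delta_c, lam_c = 1/18, math.sqrt(10)/72, 10/648
T0, Tc = 100/math.sqrt(2), 100.0
beta1, beta2, beta3 = 8.2938, -5.5330, 0.8180       # coefficients of eq. (B.2)

def S3_over_T(T):
    # Map (3) onto the form (B.1): V = m2*phi**2 - a*phi**3 + lam*phi**4.
    m2 = 0.5 * gamma_c * (T**2 - T0**2)
    a = delta_c * T / 3
    lam = lam_c / 4
    d = 8 * lam * m2 / a**2                          # dimensionless delta of (B.2)
    return (a / (T * lam**1.5)) * 8 * math.pi * math.sqrt(
        beta1*d + beta2*d**2 + beta3*d**3) / (81 * (2 - d)**2)

# Bisection for S3/T = 140 (eq. (14)); S3/T grows from ~0 near T0 and diverges at Tc.
lo, hi = T0 + 1e-3, Tc - 1e-3
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if S3_over_T(mid) < 140 else (lo, mid)
Tn = 0.5 * (lo + hi)

k = 0.2768                                           # best-fit exponent quoted in the text
print(f"T_n ~ {Tn:.1f} GeV,  T_n/T_c = {Tn/Tc:.3f},  xi_w^max ~ {(1 - Tn/Tc)**k:.3f}")
```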
2309.12760
Complex crystallographic reflection groups and Seiberg-Witten integrable systems: rank 1 case
We consider generalisations of the elliptic Calogero--Moser systems associated to complex crystallographic groups in accordance to \cite{EFMV11ecm}. In our previous work \cite{Argyres:2021iws}, we proposed these systems as candidates for Seiberg--Witten integrable systems of certain SCFTs. Here we examine that proposal for complex crystallographic groups of rank one. Geometrically, this means considering elliptic curves $T^2$ with $\Z_m$-symmetries, $m=2,3,4,6$, and Poisson deformations of the orbifolds $(T^2\times\mathbb{C})/\Z_m$. The $m=2$ case was studied in \cite{Argyres:2021iws}, while $m=3,4,6$ correspond to Seiberg--Witten integrable systems for the rank 1 Minahan--Nemeshansky SCFTs of type $E_{6,7,8}$. This allows us to describe the corresponding elliptic fibrations and the Seiberg--Witten differential in a compact elegant form. This approach also produces quantum spectral curves for these SCFTs, which are given by Fuchsian ODEs with special properties.
Philip C. Argyres, Oleg Chalykh, Yongchao Lü
2023-09-22T10:03:45Z
http://arxiv.org/abs/2309.12760v1
# Complex crystallographic reflection groups and Seiberg-Witten integrable systems: rank 1 case ###### Abstract We consider generalisations of the elliptic Calogero-Moser systems associated to complex crystallographic groups in accordance to [1]. In our previous work [2], we proposed these systems as candidates for Seiberg-Witten integrable systems of certain SCFTs. Here we examine that proposal for complex crystallographic groups of rank one. Geometrically, this means considering elliptic curves \(T^{2}\) with \(\mathbb{Z}_{m}\)-symmetries, \(m=2,3,4,6\), and Poisson deformations of the orbifolds \((T^{2}\times\mathbb{C})/\mathbb{Z}_{m}\). The \(m=2\) case was studied in [2], while \(m=3,4,6\) correspond to Seiberg-Witten integrable systems for the rank 1 Minahan-Nemeshansky SCFTs of type \(E_{6,7,8}\). This allows us to describe the corresponding elliptic fibrations and the Seiberg-Witten differential in a compact elegant form. This approach also produces quantum spectral curves for these SCFTs, which are given by Fuchsian ODEs with special properties. ###### Contents * 1 Introduction * 2 Complex crystallographic groups and Cherednik algebras in rank one * 2.1 Elliptic curves with symmetries * 2.2 Rational Cherednik algebra for \(W=\mathbb{Z}_{m}\) * 2.3 Elliptic Cherednik algebra of rank one * 2.4 Case of a punctured elliptic curve * 2.5 Hamiltonians * 3 Classical dynamics, Dunkl operator, Lax matrix, and the spectral curve * 3.1 Classical dynamics * 3.2 Elliptic Dunkl operators * 3.3 Lax matrix * 3.4 Spectral curves * 3.5 Elliptic fibration, integrability, and Seiberg-Witten differential * 4 Spectral curves and elliptic pencils * 5 Quantum curves * 6 Further connections * 6.1 Hitchin systems * 6.2 Local systems, star-shaped quivers, and generalised DAHAs * 6.3 Quantum curves and opers * 6.4 5d theories * A Elliptic functions and duality * B Quantum Hamiltonians * C Quantum curves as Fuchsian equations * C.1 Rational form * C.2 Polynomial form * D Classical spectral curves * E Algebraic integrability ## 1 Introduction The study of supersymmetry with eight supercharges has proven to be a fruitful area of research, providing valuable insights into the strong coupling dynamics of quantum field theory. Among the earliest examples of such theories are the Minahan-Nemeschansky theories [3, 4], which have inspired decades of flourishing development. Despite their lack of conventional Lagrangian descriptions, many interesting observables have been calculated, and their further study is expected to provide new insights. One of the most promising avenues for such study is through the Seiberg-Witten solutions [5, 6], which exhibit a close relationship to integrable systems [7]. In this paper, we provide new integrable systems and associated tools for understanding the Minahan-Nemeschansky theory. It has been long recognised that there is an integrable structure in the Seiberg-Witten geometry. Namely, there is a holomorphic symplectic structure on the fibration of associated Abelian varieties over the Coulomb branch which turn out to be Liouville integrable. Seiberg-Witten integrability, as it has become known, has provided profound insight into the strong coupling dynamics of field theories. However, there is no systematic method to recognise a Seiberg-Witten integrable system behind a given quantum field theory. Moreover, sometimes there is more than one possibility, typically due to the proposed integrable models being geometrically equivalent despite their rather different origins. 
In our previous paper [2], we proposed to study the so-called _crystallographic elliptic Calogero-Moser systems_ as a potential source of Seiberg-Witten integrable systems. Recall that, according to [1], each complex crystallographic reflection group has an associated family of elliptic Calogero-Moser systems constructed using the theory of _elliptic Cherednik algebras_. We expect that many of these systems can be viewed as Seiberg-Witten integrable systems for certain superconformal field theories (SCFTs). In [2], this was partly confirmed for the Inozemtsev system, a \(BC_{n}\)-version of the elliptic Calogero-Moser system associated to the group \(W=\mathbb{Z}_{2}\wr S_{n}\). In the present paper we turn our attention to groups (and SCFTs) of _rank one_. Geometrically, this means considering elliptic curves \(\mathcal{E}=T^{2}\) with \(\mathbb{Z}_{m}\)-symmetries, \(m=2,3,4,6\), and Poisson deformations of the orbifolds \(T^{*}\mathcal{E}/\mathbb{Z}_{m}=(T^{2}\times\mathbb{C})/\mathbb{Z}_{m}\). The classification of rank-1 4D \(\mathcal{N}=2\) SCFTs was previously addressed in [8, 9, 10, 11], and our study focuses primarily on a subset of this classification. Specifically, the \(m=2\) case corresponds to the SU(2) superconformal gauge theory with 4 flavors which possesses a \(D_{4}\) flavor symmetry. Meanwhile, the \(m=3,4,6\) cases correspond to Minahan-Nemeschansky theories [3, 4]. They possess the exceptional flavor symmetry algebras \(E_{6}\), \(E_{7}\), and \(E_{8}\) respectively, and notably lack conventional Lagrangian descriptions. Throughout the paper, we will refer to these theories as the \(D_{4}\), \(E_{6}\), \(E_{7}\), and \(E_{8}\) theories. This new perspective allows us to obtain the corresponding elliptic fibrations and the Seiberg-Witten differential in a systematic fashion. As a result, the mass parameters of those rank 1 SCFTs receive a transparent geometric interpretation, at the same time being directly linked to the deformation parameters of the corresponding elliptic Cherednik algebra. Further guided by the theory of those algebras, we find a natural quantisation of the spectral curves of the Minahan-Nemeshansky SCFTs of rank one. These _quantum curves_ are given by Fuchsian ODEs with special properties. Note that the elliptic fibration in terms of the Weierstrass model has already been established for these theories [6, 3, 4], but it seems less suitable for quantisation. Last but not least, our results pave the way to constructing spectral curves of higher-rank Minahan-Nemeshansky theories, which will be done in a subsequent paper [12]. The organisation of the paper is as follows. In Section 2 we consider Cherednik algebras for elliptic curves with symmetries, and describe the hamiltonians of the relevant integrable systems. Section 3 describes the classical dynamics of these integrable systems in geometric terms, using their Lax form. We observe a peculiar _duality_ of the Lax matrix, which leads to a compact formula for its spectral curve. This produces an elliptic fibration and a Seiberg-Witten differential for the appropriate SCFTs. In Section 4, we interpret these fibrations in terms of suitable elliptic pencils. In Section 5 we introduce quantum spectral curves by passing from the classical to the quantum hamiltonian. We further characterise the resulting families of Fuchsian ODEs. Finally, in Section 6 we discuss some other contexts in which the related structures appeared. 
The paper finishes with five appendices giving further details, explicit formulas, and additional properties of the classical and quantum spectral curves obtained in Sections 4 and 5. Complex crystallographic groups and Cherednik algebras in rank one If \(X\) is a complex manifold with an action of a finite group \(W\), that action naturally extends to the sheaf \({\cal D}[X]\) of regular differential operators on \(X\). Therefore, one may consider \({\cal D}[X]\rtimes W\) and \({\cal D}[X]^{W}\) as sheaves of algebras over \(X/W\). In such situation, Etingof constructs in [13] the global Cherednik algebra \(H_{c}(X,W)\) and its spherical subalgebra \(B_{c}(X,W)\) as certain (in fact, universal) deformations of \({\cal D}[X]\rtimes W\) and \({\cal D}[X]^{W}\), respectively. A special case of interest is when \(X={\mathbb{C}}^{n}/\Gamma\) is a complex torus, and \(W\subset{\rm GL}_{n}({\mathbb{C}})\) is a complex reflection group preserving the lattice \(\Gamma\), thus acting on \(X\). The semi-direct product \(G=\Gamma\rtimes W\) is an example of a _complex crystallographic group_; all such groups are classified in [14]1. In this case \(H_{c}(X,W)\) is referred to as the _elliptic Cherednik algebra_. The significance of \(X\) being a complex torus is that in that case, according to [1], the spherical subalgebra \(B_{c}(X,W)\) has a commutative subalgebra of dimension \(n\). This defines a family of integrable systems on \(X\), called _crystallographic elliptic Calogero-Moser systems_. In this paper we look into the simplest cases, namely, complex crystallographic groups and elliptic Cherednik algebras of rank 1, corresponding to elliptic curves with symmetries. Footnote 1: In general, \(G\) may not be a semidirect product of \(W\) and \(\Gamma\). Also, one may consider a more general situation than in [14] so that \(W\) is generated by reflections but \(G\) may not be (that is how, for example, extended Weyl groups arise). ### Elliptic curves with symmetries Let \({\cal E}={\mathbb{C}}/\Gamma\) with \(\Gamma=2\omega_{1}{\mathbb{Z}}+2\omega_{2}{\mathbb{Z}}\) be an elliptic curve. We use a \(q\in{\mathbb{C}}\) to represent a point on both \({\mathbb{C}}\) and \({\cal E}\). We follow the standard convention assuming \({\rm Im}\,(\omega_{2}/\omega_{1})>0\). In general, the only holomorphic automorphisms (symmetries) of \({\cal E}\) are: (1) translations, or (2) translations followed by the \({\mathbb{Z}}_{2}\)-symmetry \(q\mapsto-q\). Elliptic curves with larger automorphism groups arise when \(\Gamma\) has a rotational symmetry of order \(m>2\). As is well known, the only possibilities are \(m=3,4,6\), and the groups \(G=\Gamma\rtimes{\mathbb{Z}}_{m}\) with \(m=2,3,4,6\) (plus the trivial case \(G=\Gamma\)) exhaust all complex crystallographic groups of rank one. Let us choose \(\omega_{1,2}\) so that \[\omega_{2}/\omega_{1}=e^{\pi i/3}\quad({\rm when}\ m=3,6)\quad{\rm or}\quad \omega_{2}/\omega_{1}=e^{\pi i/2}\quad({\rm when}\ m=4)\,. \tag{2.1}\] The first case is known as _equianharmonic_ (with the hexagonal lattice \(\Gamma\)); the \(m=4\) case is called _lemniscatic_ (with the square lattice \(\Gamma\)). In each case, we think of \({\cal E}\) as having an extra symmetry \[s:q\mapsto\omega q\,,\qquad\omega=e^{2\pi i/m}\,, \tag{2.2}\] and write \({\mathbb{Z}}_{m}:=\{1,s,\ldots s^{m-1}\}\) for the multiplicative group of the \(m\)-th roots of unity, acting on \({\cal E}\). The generic \({\cal E}\) corresponds to \(m=2\). 
The point \(q=0\) is always fixed by \({\mathbb{Z}}_{m}\); other fixed points and their stabiliser groups are given in the table 1. These are also shown in figure 1. Let \(\sigma(q)=\sigma(q|2\omega_{1},2\omega_{2})\), \(\zeta(q)=\zeta(q|2\omega_{1},2\omega_{2})\), \(\wp(q)=\wp(q|2\omega_{1},2\omega_{2})\) be the Weierstrass \(\sigma\), \(\zeta\) and \(\wp\) functions associated with \(\Gamma\) and \(\mathcal{E}\). Since \(\omega\Gamma=\Gamma\), we have \[\sigma(\omega q)=\omega\sigma(q)\,,\quad\zeta(\omega q)=\omega^{-1}\zeta(q)\,, \quad\wp(\omega q)=\omega^{-2}\wp(q)\,. \tag{2.3}\] The general Weierstrass form of \(\mathcal{E}\), \(\wp^{\prime 2}=4\wp^{3}-g_{2}\wp-g_{3}\) (for \(m=2\)), specialises to \[\wp^{\prime 2} =4\wp^{3}-g_{3}\qquad(m=3,\,6)\,, \tag{2.4}\] \[\wp^{\prime 2} =4\wp^{3}-g_{2}\wp\qquad(m=4)\,. \tag{2.5}\] The quotient \(\mathcal{E}/\mathbb{Z}_{m}\) is isomorphic to \(\mathbb{P}^{1}\), which allows us to view \(\mathcal{E}\) as an \(m\)-fold branched covering of \(\mathbb{P}^{1}\). Namely, \[v^{m}=P_{m}(u)\,, \tag{2.6}\] where the elliptic functions \(u,v\) and the corresponding polynomial \(P_{m}\) are summarised in table 2. Thus, with appropriate \(e_{1},e_{2},e_{3}\), \[v^{2} =(u-e_{1})(u-e_{2})(u-e_{3}) (m=2)\,, \tag{2.7}\] \[v^{3} =(u-e_{1})(u-e_{2}) (m=3)\,,\] (2.8) \[v^{4} =(u-e_{1})(u-e_{2})^{2} (m=4)\,,\] (2.9) \[v^{6} =(u-e_{1})^{2}(u-e_{2})^{3} (m=6)\,. \tag{2.10}\] \begin{table} \begin{tabular}{c|c|c} \hline \(m\) & fixed points \(\neq 0\) & stabilisers \\ \hline 2 & \(\omega_{1,2,3}\) & \(\mathbb{Z}_{2}\) \\ \hline 3 & \(\eta_{1,2}\) & \(\mathbb{Z}_{3}\) \\ \hline 4 & \(\omega_{1,2}\) & \(\mathbb{Z}_{2}\) \\ & \(\omega_{3}\) & \(\mathbb{Z}_{4}\) \\ \hline 6 & \(\omega_{1,2,3}\) & \(\mathbb{Z}_{2}\) \\ & \(\eta_{1,2}\) & \(\mathbb{Z}_{3}\) \\ \hline \end{tabular} \end{table} Table 1: Non-zero fixed points and their stabiliser groups. Here we use the notation \(\omega_{3}=\omega_{1}+\omega_{2}\), \(\eta_{1}=2\omega_{3}/3\), \(\eta_{2}=2\eta_{1}\). Figure 1: Fundamental domain, group action, and fixed points. These curves have three or four branch points (one of these being \(e_{0}=\infty\)), and they can be recognised as the only _genus-one cyclic coverings_ of \(\mathbb{P}^{1}\). The action (2.2) naturally extends to a symplectic \(\mathbb{Z}_{m}\)-action on \(T^{*}\mathcal{E}\), so we may consider the orbifold \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\). A deformation of this orbifold can be constructed using _elliptic Cherednik algebras_; this deformation plays the central role in our work. ### Rational Cherednik algebra for \(W=\mathbb{Z}_{m}\) Write \(\mathcal{D}\) for the ring of differential operators in \(q\in\mathbb{C}\), with meromorphic coefficients. The group \(\mathbb{Z}_{m}=\{1,s,\dots,s^{m-1}\}\) acts by \[s:\,q\mapsto\omega q\,,\quad\omega=e^{2\pi i/m}\,, \tag{2.11}\] and this action naturally extends to \(\mathcal{D}\). We then have the following relations in the crossed product \(\mathcal{D}\rtimes\mathbb{Z}_{m}\): \[sq=\omega qs\,,\quad s\frac{d}{dq}=\omega^{-1}\frac{d}{dq}s\,,\quad\frac{d}{ dq}q=q\frac{d}{dq}+1\,. \tag{2.12}\] We may think of the elements in \(\mathcal{D}\rtimes\mathbb{Z}_{m}\) as acting on functions of \(q\). 
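Returning briefly to the curves of Section 2.1, the scaling relations (2.3) and the reduced Weierstrass forms (2.4)-(2.5) can be made concrete through the Laurent expansion of \(\wp\) at the origin. The sketch below (plain Python; the recursion for the Laurent coefficients is the standard one implied by \(\wp^{\prime\prime}=6\wp^{2}-\tfrac{1}{2}g_{2}\) and is not spelled out in the paper) checks that for \(g_{2}=0\) only the powers compatible with \(\wp(\omega q)=\omega^{-2}\wp(q)\), \(\omega\) of order 6, survive, and similarly for \(g_{3}=0\) with \(\omega=i\).

```python
from fractions import Fraction

def wp_laurent(g2, g3, N=12):
    """Coefficients c_k of wp(z) = z**(-2) + sum_{k>=2} c_k z**(2k-2),
    from the standard recursion implied by wp'' = 6*wp**2 - g2/2."""
    c = {2: Fraction(g2, 20), 3: Fraction(g3, 28)}
    for k in range(4, N + 1):
        c[k] = Fraction(3, (2*k + 1)*(k - 3)) * sum(c[m]*c[k - m] for m in range(2, k - 1))
    return c

# Equianharmonic lattice, g2 = 0 (relevant for m = 3, 6): the surviving exponents are
# congruent to -2 mod 6, consistent with wp(w q) = w**(-2) wp(q) for w = exp(i*pi/3).
c = wp_laurent(0, 1)
print(sorted(2*k - 2 for k, v in c.items() if v != 0))    # [4, 10, 16, 22]

# Lemniscatic lattice, g3 = 0 (m = 4): exponents congruent to -2 mod 4, i.e. wp(i q) = -wp(q).
c = wp_laurent(1, 0)
print(sorted(2*k - 2 for k, v in c.items() if v != 0))    # [2, 6, 10, 14, 18, 22]
```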
For \(c:=(c_{1},\dots,c_{m-1})\in\mathbb{C}^{m-1}\) and \(\hbar\neq 0\), define the Cherednik algebra \(H_{\hbar,c}=H_{\hbar,c}(\mathbb{Z}_{m})\) as the subalgebra in \(\mathcal{D}\rtimes\mathbb{Z}_{m}\) generated by \(\mathbb{C}[q]\rtimes\mathbb{Z}_{m}\) and the _Dunkl operator_, \[y=\hbar\frac{d}{dq}-\sum_{l=1}^{m-1}\frac{c_{l}}{q}\,s^{l}\,. \tag{2.13}\] Alternatively, \(H_{\hbar,c}\) can be described as the associative algebra generated by \(q,y,s\) subject to \[s^{m}=1\,,\quad sq=\omega qs\,,\quad sy=\omega^{-1}ys\,,\quad yq-qy=\hbar+\sum _{l=1}^{m-1}(1-\omega^{l})c_{l}s^{l}\,. \tag{2.14}\] The _spherical subalgebra_ of \(H_{\hbar,c}\) is defined as \(e\,H_{\hbar,c}\,e\), where \[e=\frac{1}{m}\sum_{j=0}^{m-1}s^{j}\,. \tag{2.15}\] \begin{table} \begin{tabular}{c|c c c c} \hline \(m\) & 2 & 3 & 4 & 6 \\ \hline \(\omega_{2}/\omega_{1}\) & any & \(e^{\pi i/3}\) & \(i\) & \(e^{\pi i/3}\) \\ \hline \(u\) & \(\wp(q)\) & \(\frac{1}{2}\wp^{\prime}(q)\) & \(\wp^{2}(q)\) & \(\wp^{3}(q)\) \\ \hline \(v\) & \(\frac{1}{2}\wp^{\prime}(q)\) & \(\wp(q)\) & \(\frac{1}{2}\wp^{\prime}(q)\) & \(\frac{1}{2}\wp(q)\wp^{\prime}(q)\) \\ \hline \(P_{m}(u)\) & \(u^{3}-\frac{1}{4}g_{2}u-\frac{1}{4}g_{3}\) & \(u^{2}+\frac{1}{4}g_{3}\) & \(u(u-\frac{1}{4}g_{2})^{2}\) & \(u^{2}(u-\frac{1}{4}g_{3})^{3}\) \\ \hline \end{tabular} \end{table} Table 2: The elliptic functions \(u,v\) and the corresponding polynomial \(P_{m}\). Each element of the spherical subalgebra, when acting on \(\mathbb{Z}_{m}\)-invariant polynomials \(\mathbb{C}[q^{m}]\), reduces to a \(\mathbb{Z}_{m}\)-invariant differential operator; this defines a faithful representation \[\theta\,:\;e\,H_{h,c}\,e\;\longrightarrow\;\mathcal{D}^{\mathbb{Z}_{m}}\,, \tag{2.16}\] called the _Dunkl representation_. Introduce \[\mu_{i}=\sum_{l=1}^{m-1}\omega^{-il}c_{l}\,,\quad i=0,\dots,m-1\,, \tag{2.17}\] \[u=q^{m}\,,\quad v=\hbar q\frac{d}{dq}\,,\quad w=\left(\hbar\frac{d}{dq}-\frac{ \mu_{m-1}}{q}\right)\dots\left(\hbar\frac{d}{dq}-\frac{\mu_{0}}{q}\right)\,. \tag{2.18}\] Then one finds that under \(\theta\), the elements \(e\,q^{m}\,e\), \(e\,y^{m}\,e\), and \(e\,qy\,e\) are mapped to \(u,w\), and \(v-\mu_{0}\), respectively. We denote by \(B_{h,c}\) the spherical subalgebra in the Dunkl representation: \[B_{\hbar,c}=\theta(e\,H_{h,c}\,e)\,,\qquad B_{h,c}\subset\mathcal{D}^{ \mathbb{Z}_{m}}\,. \tag{2.19}\] It is easy to show that, as an abstract algebra, \(B_{h,c}\) is generated by \(u,v,w\) subject to the relations \[[v,u]=\hbar mu\,,\quad[w,v]=\hbar mw\,,\quad uw=P_{\hbar}(v)\,,\quad wu=P_{ \hbar}(v+\hbar m)\,, \tag{2.20}\] where \(P_{\hbar}(t)=\prod_{j=0}^{m-1}\left(t-\mu_{j}-j\hbar\right)\). Setting \(\hbar=0\) in (2.14) and (2.20), we obtain the _classical analogues_, \(H_{0,c}\) and \(B_{0,c}\). The algebra \(B_{0,c}\) can therefore be described abstractly as the quotient \[B_{0,c}=\mathbb{C}[\bar{u},\bar{v},\bar{w}]/\{\bar{u}\bar{w}=P(\bar{v})\}\,, \qquad P(t)=\prod_{i=0}^{m-1}(t-\mu_{i})\,. \tag{2.21}\] When all \(\mu_{i}=0\), this is the algebra of functions on the _cyclic singularity_\(\mathbb{C}^{2}/\mathbb{Z}_{m}\). Therefore, the family \(B_{0,c}\) describes a Poisson deformation of the cyclic singularity, with the Poisson bracket induced from (2.20), \[\{\bar{v},\bar{u}\}=\bar{u}\,,\quad\{\bar{w},\bar{v}\}=\bar{w}\,,\quad\{\bar{ w},\bar{u}\}=P^{\prime}(\bar{v})\,. 
\tag{2.22}\] We remark that the algebras \(H_{0,c}\) and \(B_{0,c}\) can also be constructed similarly to \(H_{h,c}\) and \(B_{h,c}\), replacing the ring \(\mathcal{D}\) by the commutative ring \(\mathbb{C}(q)[p]\), where \(p\) replaces \(\hat{p}:=\hbar\frac{d}{dq}\). The classical analogues of the Dunkl operator (2.13) and of the generators (2.18) are: \[y^{c}=p-\sum_{l=1}^{m-1}\frac{c_{l}}{q}\,s^{l}\,,\qquad\bar{u}=q^{m}\,,\quad \bar{v}=qp\,,\quad\bar{w}=\left(p-\frac{\mu_{m-1}}{q}\right)\dots\left(p-\frac {\mu_{0}}{q}\right)\,. \tag{2.23}\] ### Elliptic Cherednik algebra of rank one We now proceed to define the elliptic version of \(H_{h,c}\), following the general framework of [13]. Let \(\mathcal{E}=\mathbb{C}/\Gamma\) be an elliptic curve with the symmetry group \(\mathbb{Z}_{m}\), \(m\in\{2,3,4,6\}\). The elliptic Cherednik algebra \(H_{h,c}(\mathcal{E})\) depends on a set of parameters chosen as follows. To every \(x_{i}\in\mathcal{E}\) and \(l=1,\dots,m-1\) such that \(s^{l}(x_{i})=x_{i}\), we assign a parameter \(c_{l}(x_{i})\), with an additional requirement that \(c_{l}(x_{i})=c_{l}(x_{j})\) whenever \(x_{j}=wx_{i}\) for some \(w\in W\). From Fig. 2 we observe that this amounts to 4 parameters in the \(m=2\) case, and 6, 7 or 8 parameters when \(m=3\), 4 or 6, respectively. We write \(c=(c_{l}(x_{i}))\) for the set of parameters. It will be convenient to extend the set \(c\) by setting \(c_{l}(x_{i})=0\) if \(s^{l}(x_{i})\neq x_{i}\). Later it will also be convenient to use the following combinations: \[\mu_{j}(x_{i})=\sum_{l=1}^{m-1}\omega^{-jl}c_{l}(x_{i})\,,\quad j=0,\ldots,m-1\,. \tag{2.24}\] Note that for any fixed point \(x_{i}\), the sum of \(\mu_{j}(x_{i})\) is zero, and if the stabiliser of \(x_{i}\) is a proper subgroup \(\mathbb{Z}_{m_{i}}\subset\mathbb{Z}_{m}\) then the set of \(\mu_{j}(x_{i})\) has repetitions. Let \(\mathbb{C}(\mathcal{E})\) denote the field of meromorphic functions on \(\mathcal{E}\) (i.e. elliptic functions in \(q\)), and \(\mathcal{D}_{\mathcal{E}}\) the ring of differential operators on \(\mathcal{E}\), with elliptic coefficients. Similarly to the rational case, we form the cross-product \(\mathcal{D}_{\mathcal{E}}\rtimes\mathbb{Z}_{m}\). To define \(H_{h,c}(\mathcal{E})\) as a sheaf of algebras over \(\mathcal{E}/\mathbb{Z}_{m}\), we need to describe its sections over an arbitrary \(\mathbb{Z}_{m}\)-invariant open chart \(U\subset\mathcal{E}\). Write \(\mathcal{O}_{U}\subset\mathbb{C}(\mathcal{E})\) for the ring of functions regular on \(U\). We define the _algebra of sections_ of \(H_{h,c}(\mathcal{E})\) over \(U\) to be the subalgebra \(H_{h,c}(\mathcal{E},U)\subset\mathcal{D}_{\mathcal{E}}\rtimes\mathbb{Z}_{m}\) generated by \(\mathcal{O}_{U}\rtimes\mathbb{Z}_{m}\) and an element \(y\) of the form \[y=\hbar\frac{d}{dq}-\sum_{l=1}^{m-1}b_{l}(q)s^{l}\,, \tag{2.25}\] where \(b_{l}(q)\) may have simple poles at the fixed points \(x_{i}\in U\) and are regular elsewhere, with \[\operatorname{res}_{q=x_{i}}b_{l}=c_{l}(x_{i})\quad\text{for all }x_{i}\in U. \tag{2.26}\] Informally, these conditions mean that near \(q=x_{i}\) such \(y\) should look like the rational Dunkl operator (2.13). Note that while there may be several such elements \(y\), the difference of any two of them belongs to \(\mathcal{O}_{U}\rtimes\mathbb{Z}_{m}\), so the definition of \(H_{h,c}(\mathcal{E},U)\) is unambiguous. This defines the sheaf \(H_{h,c}(\mathcal{E})\) of _elliptic Cherednik algebras_ on \(\mathcal{E}\). 
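Before passing to the spherical subalgebras of the elliptic sheaf, it is worth sanity-checking the rational Dunkl representation (2.18) and the relations (2.20). The following sympy sketch (an illustration with our own variable names) applies \(u,v,w\) to a monomial \(q^{n}\), on which \(v\) acts with eigenvalue \(\hbar n\), and confirms \(uw=P_{\hbar}(v)\) and \(wu=P_{\hbar}(v+\hbar m)\) for \(m=3\):

```python
import sympy as sp

m = 3
q, n, hbar = sp.symbols('q n hbar', positive=True)
mu = sp.symbols('mu0:3')                     # mu_0, mu_1, mu_2

u = lambda g: q**m * g
v = lambda g: hbar*q*sp.diff(g, q)
def w(g):
    for j in range(m):                       # rightmost Dunkl factor (j = 0) acts first
        g = hbar*sp.diff(g, q) - mu[j]/q*g
    return g

P = lambda t: sp.prod([t - mu[j] - j*hbar for j in range(m)])

f = q**n
assert sp.simplify(v(f) - hbar*n*f) == 0                     # v q^n = hbar*n q^n
assert sp.simplify(u(w(f)) - P(hbar*n)*f) == 0               # u w = P_hbar(v)
assert sp.simplify(w(u(f)) - P(hbar*n + hbar*m)*f) == 0      # w u = P_hbar(v + hbar*m)
print("relations (2.20) hold on monomials q**n")
```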
The sheaf of _spherical subalgebras_\(eH_{h,c}(\mathcal{E})e\) is obtained by replacing local sections \(a\in\mathcal{D}_{\mathcal{E}}\rtimes\mathbb{Z}_{m}\) by \(eae\). Again, these local sections can be realised as differential operators using the map (2.16). This produces a sheaf of algebras \(B_{h,c}(\mathcal{E}):=\theta\left(eH_{h,c}(\mathcal{E})e\right)\subset \mathcal{D}_{\mathcal{E}}^{\mathbb{Z}_{m}}\). The _classical version_\(H_{0,c}(\mathcal{E})\) of the elliptic Cherednik algebra is obtained in a similar fashion. Namely, the classical counterpart of the ring of differential operators is the commutative ring \(\mathbb{C}(\mathcal{E})[p]:=\mathbb{C}(\mathcal{E})\otimes\mathbb{C}[p]\), with the \(\mathbb{Z}_{m}\)-action extended by \(sp=\omega^{-1}ps\). The algebra of sections \(H_{0,c}(U)\) over any open \(\mathbb{Z}_{m}\)-invariant chart \(U\subset\mathcal{E}\) is defined as the subalgebra of \(\mathbb{C}(\mathcal{E})[p]\rtimes\mathbb{Z}_{m}\), generated by \(\mathcal{O}_{U}\rtimes\mathbb{Z}_{m}\) and an element \(y^{c}\) of the form \[y^{c}=p-\sum_{l=1}^{m-1}b_{l}(q)s^{l}\,,\] where \(b_{l}(q)\) satisfy the same residue conditions (2.26). The sheaves of classical spherical subalgebras \(eH_{0,c}(\mathcal{E})e\) and \(B_{0,c}(\mathcal{E})\) are defined in a similar way. _Remark 2.1_.: When \(c=0\), \(H_{\hbar,0}=\mathcal{D}[\mathcal{E}]\rtimes\mathbb{Z}_{m}\) and \(B_{\hbar,0}=\mathcal{D}[\mathcal{E}]^{\mathbb{Z}_{m}}\), where \(\mathcal{D}[\mathcal{E}]\) denotes the sheaf of regular differential operators on \(\mathcal{E}\). The classical analogue of \(\mathcal{D}[\mathcal{E}]\) is the sheaf \(\mathcal{O}(T^{*}\mathcal{E})\), so we get \(H_{0,0}=\mathcal{O}(T^{*}\mathcal{E})\rtimes\mathbb{Z}_{m}\) and \(B_{0,0}=\mathcal{O}(T^{*}\mathcal{E})^{\mathbb{Z}_{m}}\). The latter sheaf can be identified with \(\mathcal{O}(T^{*}\mathcal{E}/\mathbb{Z}_{m})\), hence, the sheaf \(B_{0,c}\) describes a deformation of the orbifold \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\). As we will see below, these deformations can be described geometrically as certain _rational elliptic surfaces_. ### Case of a punctured elliptic curve For \(U=\mathcal{E}_{0}:=\mathcal{E}\setminus\{0\}\) the algebra \(H_{\hbar,c}(\mathcal{E}_{0})\) admits a simple description. Denote by \(f(x,z)\) the following elliptic function: \[f(x,z)=\zeta(x-z)-\zeta(x)+\zeta(z)=\frac{1}{2}\frac{\wp^{\prime}(x)+\wp^{ \prime}(z)}{\wp(x)-\wp(z)}\,. \tag{2.27}\] Note that \(f(\omega x,\omega z)=\omega^{-1}f(x,z)\) for \(\omega=e^{2\pi i/m}\). Write \(\mathcal{S}\subset\{\omega_{1,2,3},\eta_{1,2}\}\) for the set of nonzero fixed points for \(\mathbb{Z}_{m}\), and consider \[y=\hat{p}-\sum_{l=1}^{m-1}\sum_{x_{i}\in\mathcal{S}}c_{l}(x_{i})f(q,x_{i})s^{ l}\,,\qquad\hat{p}=\hbar\frac{d}{dq}\,. \tag{2.28}\] It is clear that \(y\) belongs to \(H_{\hbar,c}(\mathcal{E}_{0})\). By the \(\mathbb{Z}_{m}\)-invariance of the couplings \(c_{l}(x_{i})\), \[sy=\omega^{-1}ys\,. \tag{2.29}\] Next, we have \(\mathcal{O}_{\mathcal{E}_{0}}=\mathbb{C}[\wp,\wp^{\prime}]/\{\wp^{\prime 2}=4 \wp^{3}-g_{2}\wp-g_{3}\}\) and the crossed product \(\mathcal{O}_{\mathcal{E}_{0}}\rtimes\mathbb{Z}_{m}\), with the relations \[s\wp=\omega^{-2}\wp s\,,\quad s\wp^{\prime}=\omega^{-3}\wp^{\prime}s\,. \tag{2.30}\] By definition, \(H_{\hbar,c}(\mathcal{E}_{0})\) is generated by \(\mathcal{O}_{\mathcal{E}_{0}}\rtimes\mathbb{Z}_{m}\) and \(y\). 
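The identity (2.27) and the homogeneity \(f(\omega x,\omega z)=\omega^{-1}f(x,z)\) can be spot-checked numerically. The sketch below (plain Python, added as an illustration; the lattice normalisation, cutoff and sample points are arbitrary choices) evaluates \(\wp,\wp^{\prime},\zeta\) by symmetrically truncated lattice sums for the square lattice relevant to \(m=4\). The first printed difference is limited by the truncation of the sums, while the second vanishes to machine precision because the truncated index set is itself invariant under multiplication by \(i\):

```python
# Square lattice Gamma = Z*2w1 + Z*2w2 with 2w1 = 1, 2w2 = i  (the m = 4 case).
N = 60                                               # symmetric truncation of the lattice sums
lat = [n1 + n2*1j for n1 in range(-N, N + 1)
                  for n2 in range(-N, N + 1) if (n1, n2) != (0, 0)]

def wp(z):        # Weierstrass p-function
    return 1/z**2 + sum(1/(z - w)**2 - 1/w**2 for w in lat)

def wp_prime(z):  # its derivative
    return -2/z**3 - 2*sum(1/(z - w)**3 for w in lat)

def zeta(z):      # Weierstrass zeta function
    return 1/z + sum(1/(z - w) + 1/w + z/w**2 for w in lat)

def f(x, z):      # the zeta-form in (2.27)
    return zeta(x - z) - zeta(x) + zeta(z)

x, z = 0.23 + 0.11j, -0.17 + 0.31j
print(abs(f(x, z) - 0.5*(wp_prime(x) + wp_prime(z))/(wp(x) - wp(z))))   # small: truncation error
print(abs(f(1j*x, 1j*z) - f(x, z)/1j))                                  # ~ machine precision
```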
For any \(g\in\mathcal{O}_{\mathcal{E}_{0}}\), \[yg-gy=\hbar\frac{dg}{dq}-\sum_{l=1}^{m-1}\sum_{x_{i}\in\mathcal{S}}c_{l}(x_{i} )f(q,x_{i})(g(\omega^{l}q)-g(q))s^{l}\,. \tag{2.31}\] Note that whenever \(x_{i}\in\mathcal{S}\) is fixed by \(s^{l}\), the function \(g(\omega^{l}q)-g(q)\) vanishes at \(q=x_{i}\), and if \(x_{i}\) is not fixed by \(s^{l}\), then \(c_{l}(x_{i})=0\). Therefore, all the terms in the right-hand side of (2.31) are regular away from zero, hence, belong to \(\mathcal{O}_{\mathcal{E}_{0}}\rtimes\mathbb{Z}_{m}\). It is easy to show that as an abstract algebra, \(H_{\hbar,c}(\mathcal{E}_{0})\) is generated by \(\mathcal{O}_{\mathcal{E}_{0}}\rtimes\mathbb{Z}_{m}\) and \(y\), subject to the relations (2.29), (2.31). The algebra \(H_{\hbar,c}(\mathcal{E}_{0})\) admits a basis formed by the elements \[y^{j}s^{l}\,,\quad\wp^{(i)}(q)y^{j}s^{l}\,,\qquad 0\leq l\leq m-1,\quad i,j \geq 0\,, \tag{2.32}\] among which we have \(\mathbb{Z}_{m}\)-invariant elements \[y^{j}\quad\text{with}\quad j\in m\mathbb{Z}\,,\quad\text{and}\quad\wp^{(i)}(q )y^{j}\quad\text{with}\quad i+j+2\in m\mathbb{Z}\,. \tag{2.33}\] Applying the homomorphism (2.16), we obtain a basis of \(B_{\hbar,c}(\mathcal{E}_{0})\) of the following form: \[w_{j}:=(\hat{p}-f_{j-1})\ldots(\hat{p}-f_{0})\,\qquad j\in m\mathbb{Z}\,, \tag{2.34}\] \[v_{j}^{(i)}:=\wp^{(i)}(q)\,(\hat{p}-f_{j-1})\dots(\hat{p}-f_{0})\,\qquad i+j+2\in m \mathbb{Z}\,, \tag{2.35}\] where the coefficients \(f_{k}=f_{k}(q)\) are given by \[f_{k}=\sum_{l=1}^{m-1}\sum_{x_{i}\in\mathcal{S}}c_{l}(x_{i})f(q,x_{i})\omega^{ -kl}=\sum_{x_{i}\in\mathcal{S}}\mu_{l}(x_{i})f(q,x_{i})\,. \tag{2.36}\] _Remark 2.2_.: The classical algebras \(H_{0,c}(\mathcal{E}_{0})\) and \(B_{0,c}(\mathcal{E}_{0})\) are described similarly, by replacing \(\hat{p}=\hbar\frac{d}{dq}\) with the classical momentum, \(p\) (and by setting \(\hbar=0\) in (2.31)). _Remark 2.3_.: More generally, for any finite \(\mathbb{Z}_{m}\)-invariant set \(\mathcal{Z}\subset\mathcal{E}\), consider \(U=\mathcal{E}\setminus\mathcal{Z}\). In that case, the algebras \(H_{h,c}(U)\) and \(B_{\hbar,c}(U)\) admit a similar description, with the operator \(y\) modified as follows: \[y=\hbar\frac{d}{dq}-\sum_{l=1}^{m-1}\sum_{x_{i}\in U}\sum_{z_{j}\in\mathcal{Z }}\frac{1}{|\mathcal{Z}|}c_{l}(x_{i})f(q-z_{j},x_{i}-z_{j})s^{l}\,. \tag{2.37}\] The previous case corresponds to \(\mathcal{Z}=\{0\}\). ### Hamiltonians According to [1], the hamiltonians \(\widehat{h}_{1},\dots,\widehat{h}_{n}\) of the elliptic crystallographic Calogero-Moser system for a group \(G=\Gamma\rtimes W\) of rank \(n\) have the form \[\widehat{h}_{i}=f_{i}(\hat{p})+\dots\,,\qquad\hat{p}=\left(\hbar\frac{ \partial}{\partial q_{1}}\,,\dots,\hbar\frac{\partial}{\partial q_{n}}\right)\,, \tag{2.38}\] with their leading symbols \(f_{i}\) generating the ring of polynomial \(W\)-invariants, and with the dots representing terms of smaller order in \(\hat{p}\). A similar result holds in the classical case. The connection between these hamiltonians and the elliptic Cherednik algebra is as follows. **Theorem 2.4** ([1]).: _The hamiltonians \(\widehat{h}_{i}\) represent global sections of the sheaf \(B_{h,c}(\mathbb{C}^{n}/\Gamma,W)\) of spherical Cherednik algebras. 
Furthermore, they generate the full algebra of global sections, that is, any global section of \(B_{h,c}(\mathbb{C}^{n}/\Gamma,W)\) is a polynomial in \(\widehat{h}_{1},\dots,\widehat{h}_{n}\)._ The construction of \(\widehat{h}_{i}\) given in [1] is fairly involved, and no explicit expression for \(\widehat{h}_{i}\) is known in general. In the rank one case, however, the situation is simpler. In this case, we have a single hamiltonian of the form \[\widehat{h}=\hat{p}^{m}+\dots\,,\qquad\hat{p}=\hbar\frac{d}{dq}\,, \tag{2.39}\] which can be described as follows. **Proposition 2.5**.: _The algebra of global sections of the sheaf of spherical subalgebras \(B_{\hbar,c}(\mathcal{E})\) is generated by a single element of the form_ \[\widehat{h}=w_{m}+\alpha_{2}v_{m-2}^{(0)}+\alpha_{3}v_{m-3}^{(1)}+\dots+\alpha_{m}v_{0}^{(m-2)}\,. \tag{2.40}\] _Here \(w_{m}\) and \(v_{j}^{(i)}\) are the elements defined in (2.34), (2.35), and \(\alpha_{i}\) are suitable constant coefficients._ To prove this, we notice that any global section \(\widehat{h}\) restricts onto \(\mathcal{E}_{0}=\mathcal{E}\setminus\{0\}\), and so it must be a combination of elements \(w_{i}\), \(v_{j}^{(i)}\). For \(\widehat{h}\) to have degree \(m\) in \(\hat{p}\), it must be obtained from \(w_{m}\) by adding a finite linear combination of \(v_{j}^{(i)}\) with \(0\leq j\leq m-1\). On the other hand, near \(q=0\) each global section must belong to the completion of the local rational spherical Cherednik subalgebra, generated by \(\mathbb{C}[[u]]\), \(v\) and \(w\) as given in (2.18). Since \(u,v\) are regular at \(q=0\), we must have \[\widehat{h}=(\hat{p}-\mu_{m-1}q^{-1})\ldots(\hat{p}-\mu_{0}q^{-1})+\text{ regular terms}\,,\qquad\mu_{j}=\mu_{j}(0). \tag{2.41}\] Furthermore, each of \(w_{j},v_{j}^{(i)}\) near \(q=0\) has the following principal part: \[w_{j} \sim(\hat{p}+\widetilde{\mu}_{j-1}q^{-1})\ldots(\hat{p}+\widetilde{\mu}_{0}q^{-1})\,, \tag{2.42}\] \[v_{j}^{(i)} \sim\,(-1)^{i}(i+1)!q^{-i-2}(\hat{p}+\widetilde{\mu}_{j-1}q^{-1})\ldots(\hat{p}+\widetilde{\mu}_{0}q^{-1})\,,\qquad\widetilde{\mu}_{j}:=\sum_{x_{i}\in\mathcal{S}}\mu_{j}(x_{i})\,. \tag{2.43}\] Comparing this with the previous formula, we conclude that the only allowed terms in \(\widehat{h}\) are \(v_{j}^{(i)}\) with \(j=0,\ldots,m-2\) and \(i+j=m-2\), thus proving (2.40). Now, we may compare the principal parts in (2.40) and (2.41) to get the relation \[(\hat{p}-\mu_{m-1}q^{-1})\ldots(\hat{p}-\mu_{0}q^{-1})=(\hat{p}+\widetilde{\mu}_{m-1}q^{-1})\ldots(\hat{p}+\widetilde{\mu}_{0}q^{-1})\\ +\sum_{i=2}^{m}(-1)^{i}(i-1)!\alpha_{i}q^{-i}(\hat{p}+\widetilde{\mu}_{m-i-1}q^{-1})\ldots(\hat{p}+\widetilde{\mu}_{0}q^{-1})\,. \tag{2.44}\] This completely determines the coefficients \(\alpha_{2},\ldots,\alpha_{m}\) entering (2.40) in terms of \(\mu_{j}=\mu_{j}(0)\) and \(\widetilde{\mu}_{j}=\sum_{x_{i}\in\mathcal{S}}\mu_{j}(x_{i})\), i.e. in terms of the parameters of the elliptic Cherednik algebra. In fact, we can trade the parameters \(\mu_{j}(0)\) for \(\alpha_{j}\), that is, regard (2.40) as depending on \((c_{l}(x_{i}))_{x_{i}\in\mathcal{S}}\) and \(\alpha_{2},\ldots,\alpha_{m}\). The classical hamiltonian is described by the same formulas, with \(\hat{p}=\hbar\frac{d}{dq}\) replaced by the classical momentum \(p\) everywhere. 
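For small \(m\) the relation (2.44) is easy to solve explicitly. The sympy sketch below (added as an illustration; the variable names are ours) applies both sides to a monomial \(q^{n}\), on which each factor acts as \((\hat{p}-\mu q^{-1})\,q^{n}=(\hbar n-\mu)q^{n-1}\), and solves for \(\alpha_{2},\alpha_{3}\) when \(m=3\), using \(\sum_{j}\mu_{j}=\sum_{j}\widetilde{\mu}_{j}=0\):

```python
import sympy as sp

m = 3
q, n, hbar = sp.symbols('q n hbar', positive=True)
mu0, mu1 = sp.symbols('mu0 mu1')                  # mu_j = mu_j(0),  with sum_j mu_j = 0
mt0, mt1 = sp.symbols('mutilde0 mutilde1')        # tilde mu_j,      with sum_j tilde mu_j = 0
mu = [mu0, mu1, -mu0 - mu1]
mt = [mt0, mt1, -mt0 - mt1]
a2, a3 = sp.symbols('alpha2 alpha3')

factor = lambda g, c: hbar*sp.diff(g, q) + c/q*g  # apply (hbar d/dq + c/q)

f = q**n
lhs, rhs = f, f
for j in range(m):                                # rightmost factor (j = 0) acts first
    lhs = factor(lhs, -mu[j])
    rhs = factor(rhs, +mt[j])
rhs += a2/q**2 * factor(f, mt[0])                 # i = 2 term in (2.44)
rhs += -sp.factorial(2) * a3/q**3 * f             # i = 3 term in (2.44)

poly = sp.Poly(sp.expand((lhs - rhs)/q**(n - m)), n)   # q-dependence cancels; linear in n
print(sp.solve(poly.coeffs(), [a2, a3], dict=True)[0])
```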
**Proposition 2.6**.: _The algebra of global sections of the sheaf of spherical subalgebras \(B_{0,c}(\mathcal{E})\) is generated by a single element of the form_ \[h=w_{m}+\alpha_{2}v_{m-2}^{(0)}+\alpha_{3}v_{m-3}^{(1)}+\cdots+\alpha_{m}v_{0} ^{(m-2)}\,. \tag{2.45}\] _Here \(w_{m}\) and \(v_{j}^{(i)}\) are the classical analogues of elements (2.34), (2.35),_ \[w_{j}:=(p-f_{j-1})\ldots(p-f_{0})\,\qquad j\in m\mathbb{Z}\,, \tag{2.46}\] \[v_{j}^{(i)}:=\wp^{(i)}(q)\,(p-f_{j-1})\ldots(p-f_{0})\,\qquad i+j+2 \in m\mathbb{Z}\,, \tag{2.47}\] \(f_{k}\) _are given by (2.36), and \(\alpha_{i}\) are suitable constant coefficients._ The relation for determining \(\alpha_{i}\) is obtained by taking the classical limit of (2.44): \[(p-\mu_{m-1}q^{-1})\ldots(p-\mu_{0}q^{-1})=(p+\widetilde{\mu}_{m -1}q^{-1})\ldots(p+\widetilde{\mu}_{0}q^{-1})\\ +\sum_{i=2}^{m}(-1)^{i}(i-1)!\alpha_{i}q^{-i}(p+\widetilde{\mu}_ {m-i-1}q^{-1})\ldots(p+\widetilde{\mu}_{0}q^{-1})\,. \tag{2.48}\] Classical dynamics, Dunkl operator, Lax matrix, and the spectral curve In this section we look at the dynamics of the hamiltonian (2.45), its Lax presentation, and the geometry of the spectral curves. ### Classical dynamics Let \(h\) be one of the hamiltonians (2.45) for \(m=2,3,4,6\). The corresponding dynamics is described by the Hamilton-Jacobi equations, \[\frac{dp}{dt}=-\frac{\partial h}{\partial q}\,,\qquad\frac{dq}{dt}=\frac{ \partial h}{\partial p}\,. \tag{3.1}\] The motion takes place along the level curves \[\widetilde{\Sigma}=\{(p,q)\,:\,h(p,q)=z\}\,. \tag{3.2}\] Each curve is an \(m\)-sheeted branched covering of the elliptic curve \(\mathcal{E}\). The curves are not compact, due to \(h\) having poles at \(q=x_{i}\), and have a fairly high genus. To interpret (3.1) as a _complex integrable system_, one needs to compactify the curves and take into account the \(\mathbb{Z}_{m}\)-symmetry \[s\,:\ p\to\omega^{-1}p\,,\quad q\to\omega q\,,\qquad\omega=e^{2\pi i/m}\,. \tag{3.3}\] This is summarised in the next proposition. **Proposition 3.1**.: (1) _Suppose \(x_{i}\in\mathcal{E}\) is a fixed point for \(W=\mathbb{Z}_{m}\), with the stabiliser \(\mathbb{Z}_{m_{i}}\). The \(m\)-sheeted branched covering \(\widetilde{\Sigma}\to\mathcal{E}\), \((p,q)\mapsto q\) near \(q=x_{i}\) has the form_ \[\prod_{j=0}^{m-1}\left(p-\frac{\mu_{j}}{q-x_{i}}+O\left((q-x_{i})^{m_{i}-1} \right)\right)=0\,. \tag{3.4}\] _Here \(\mu_{j}=\mu_{j}(x_{i})\) are the "linear masses" (2.24)._ (2) _A compactification of \(\widetilde{\Sigma}\) is obtained by adding \(m\) distinct points over each fixed point \(x_{i}\), one point for each of the sheets (3.4). For generic couplings \(c\), the compactified curve is smooth, of genus \(g=m^{2}+1\). The \(\mathbb{Z}_{m}\)-action (3.3) is free on \(\widetilde{\Sigma}\) and has the stabiliser \(\mathbb{Z}_{m_{i}}\) for the points above \(q=x_{i}\)._ (3) _The (compactified) quotient curves \(\Sigma:=\widetilde{\Sigma}/\mathbb{Z}_{m}\) have genus one. The differential_ \[dt=\frac{dp}{-\frac{\partial h}{\partial q}}=\frac{dq}{\frac{\partial h}{ \partial p}} \tag{3.5}\] _defines a non-vanishing holomorphic \(1\)-form on \(\Sigma\). Hence, the \(\mathbb{Z}_{m}\)-quotient of the fibration (3.2) defines an elliptic fibration on \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\), and the dynamics (3.1) becomes linear along its fibers._ It is possible to prove this proposition by analysing the formula (2.45). However, we will use an alternative method and derive it from a _Lax presentation_ for the system (3.1). 
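Before constructing the Lax pair, here is a quick arithmetic cross-check (plain Python, added as an illustration) of the genus count in Proposition 3.1: combining the claimed genus \(g=m^{2}+1\) with the stabiliser data of table 1, the Riemann-Hurwitz formula for the quotient map \(\widetilde{\Sigma}\to\widetilde{\Sigma}/\mathbb{Z}_{m}\) returns genus one in all four cases.

```python
# Stabiliser orders m_i, one entry per Z_m-orbit of fixed points on E (including q = 0), cf. table 1.
cases = {2: [2, 2, 2, 2], 3: [3, 3, 3], 4: [4, 4, 2], 6: [6, 3, 2]}

for m, stabs in cases.items():
    g_cover = m**2 + 1                       # claimed genus of the compactified curve
    # an orbit with stabiliser Z_{m_i} consists of m/m_i fixed points, with m points of the
    # curve above each; every such point contributes (m_i - 1) to the ramification divisor
    ram = sum(m*(m//mi)*(mi - 1) for mi in stabs)
    # Riemann-Hurwitz for the degree-m quotient map:  2*g_cover - 2 = m*(2*g_quot - 2) + ram
    g_quot = ((2*g_cover - 2 - ram)//m + 2)//2
    print(m, ram == 2*m**2, g_quot)          # ramification 2*m^2 and quotient genus 1 in each case
```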
Such a Lax presentation can be constructed following [15] by using Dunkl operators, which we need to introduce first. ### Elliptic Dunkl operators Elliptic analogues of Dunkl operators go back to [16], see also [17]. For general complex crystallographic groups, they were introduced in [18]. Like their rational counterparts, elliptic Dunkl operators form a commutative family, but their symmetry properties are more complicated due to the presence of auxiliary "spectral" variables. Below we discuss the case of rank 1 only, in which case we need just one Dunkl operator. We follow the general framework of [18, 1], but since we are dealing with a special case, everything will be made very concrete. Let us fix some notation. For an abelian variety \(X\) with an action of a finite group \(W\), the Dunkl operators [18] depend on an auxiliary variable \(\alpha\in X^{\vee}=\operatorname{Pic}_{0}(X)\). We will be dealing with the case \(X=\mathcal{E}=\mathbb{C}/\Gamma\) and \(W=\mathbb{Z}_{m}\), in which case we identify \(X^{\vee}\simeq X\) so that \(\alpha\in\mathcal{E}\). Below \(\sigma\), \(\zeta\), \(\wp\) stand for the Weierstrass functions associated to the lattice \(\Gamma=2\omega_{1}\mathbb{Z}+2\omega_{2}\mathbb{Z}\), and we write \(\varphi(x,z)\) for the following combination: \[\varphi(x,z)=\frac{\sigma(x-z)}{\sigma(x)\sigma(-z)}. \tag{3.6}\] Recall that for each fixed point \(x_{i}\in\mathcal{E}\) we have parameters \(c_{l}(x_{i})\), \(1\leq l\leq m-1\), with \(c_{l}(x_{i})=0\) unless \(s^{l}(x_{i})=x_{i}\). The fixed points of \(s^{l}\) are identified with the cosets \[(\Omega_{l})^{-1}\Gamma/\Gamma\,,\qquad\Omega_{l}:=1-\omega^{l}\,. \tag{3.7}\] Now introduce the following functions \(v_{l}=v_{l,c}\) of \(x,z\in\mathbb{C}\): \[v_{l}(x,z)=\sum_{\{x_{i}\}}c_{l}(x_{i})e^{-\eta(\Omega_{l}x_{i})z}\varphi(x-x _{i},\Omega_{-l}z)\,,\qquad l\in\mathbb{Z}_{m}\setminus\{0\}\,, \tag{3.8}\] where the summation is over fixed points \(x_{i}\in(\Omega_{l})^{-1}\Gamma/\Gamma\), and \(\eta(\gamma)\) for \(\gamma=2n_{1}\omega_{1}+2n_{2}\omega_{2}\in\Gamma\) is defined by \[\eta(\gamma)=-\int_{q}^{q+\gamma}\wp(q)\,dq=\zeta(q+\gamma)-\zeta(q)=2n_{1} \zeta(\omega_{1})+2n_{2}\zeta(\omega_{2})\,. \tag{3.9}\] We extend \(\eta\) by \(\mathbb{R}\)-linearity so that \(\eta(2a\omega_{1}+2b\omega_{2})=2a\zeta(\omega_{1})+2b\zeta(\omega_{2})\) for \(a,b\in\mathbb{R}\). The symmetry of the lattice \(\Gamma\) implies that \(\eta(\omega\gamma)=\omega^{-1}\eta(\gamma)\). Note that the formula (3.8) is invariant under \(x_{i}\mapsto x_{i}+\gamma\), \(\gamma\in\Gamma\), thus independent on the choice of coset representatives \(x_{i}\in(\Omega_{l})^{-1}\Gamma/\Gamma\). An important property of the functions \(v_{l}(x,z)\) is the _duality_, \[v_{l,c}(x,z)=-v_{-l,c^{\vee}}(z,x)\,, \tag{3.10}\] where the set of _dual couplings_\(c^{\vee}\) is described in Appendix A. We now define the _elliptic Dunkl operator_ for \(W=\mathbb{Z}_{m}\) by the formula \[y=\hbar\frac{d}{dq}-\sum_{l=1}^{m-1}v_{l}(q,\alpha)s^{l}\,, \tag{3.11}\] with the coefficients \(v_{l}(x,z)\) given by (3.8). One important feature of this case is that \(y\) does _not_ belong to the elliptic Cherednik algebra \(H_{\hbar,c}(\mathcal{E})\), since the coefficients \(v_{l}(q,\alpha)\) are not elliptic. Another distinctive feature is the dependence on the auxiliary "spectral" variable \(\alpha\); we write \(y=y(\alpha)\) to indicate that dependence. 
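The statement \(\eta(\omega\gamma)=\omega^{-1}\eta(\gamma)\) can also be verified symbolically. In the sketch below (sympy, added as an illustration) a lattice vector \(\gamma=2a\omega_{1}+2b\omega_{2}\) is represented by its real coordinates \((a,b)\); the \(2\times 2\) matrices encode \(\omega\omega_{1}\), \(\omega\omega_{2}\) re-expanded in the basis \((\omega_{1},\omega_{2})\) for the lattices of table 2, and \(\zeta(\omega_{2})\) is expressed through \(\zeta(\omega_{1})\) using (2.3):

```python
import sympy as sp

a, b, Z1 = sp.symbols('a b Z1')             # gamma = 2a*w1 + 2b*w2,  Z1 = zeta(w1)
eta = lambda c, Z2: 2*c[0]*Z1 + 2*c[1]*Z2   # eta(2a*w1 + 2b*w2) = 2a*zeta(w1) + 2b*zeta(w2)

# (m, matrix of multiplication by omega in the basis (w1, w2), zeta(w2) via (2.3))
cases = [
    (4, sp.Matrix([[0, -1], [1, 0]]),  -sp.I*Z1),                  # square lattice: w2 = i*w1
    (3, sp.Matrix([[-1, -1], [1, 0]]), sp.exp(-sp.I*sp.pi/3)*Z1),  # hexagonal: w2 = exp(i*pi/3)*w1
    (6, sp.Matrix([[0, -1], [1, 1]]),  sp.exp(-sp.I*sp.pi/3)*Z1),  # hexagonal lattice again
]
for m, M, Z2 in cases:
    w = sp.exp(2*sp.pi*sp.I/m)
    gamma = sp.Matrix([a, b])
    check = sp.simplify(eta(M*gamma, Z2) - eta(gamma, Z2)/w)
    print(m, check == 0)                    # eta(omega*gamma) = omega^{-1}*eta(gamma)
```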
As a function of \(\alpha\), the Dunkl operator has simple poles at the fixed points \(\alpha=x_{i}\), and it has the following properties: \[y(\alpha+\gamma)=e^{-\eta(\gamma)q}\,y(\alpha)\,e^{\eta(\gamma)q} -\hbar\eta(\gamma)\,\quad\,\forall\,\,\gamma\in\Gamma\,, \tag{3.12}\] \[\operatorname{res}_{\alpha=x_{i}}y(\alpha)=\sum_{l=1}^{m-1}e^{- \eta(\Omega_{-l}x_{i})q}c_{-l}^{\vee}(x_{i})s^{l}\,,\] (3.13) \[s\,y(\alpha)=\omega^{-1}y(\omega^{-1}\alpha)\,s\,. \tag{3.14}\] The formula (3.13) follows from the obvious property \(\operatorname{res}_{x=x_{i}}v_{l}(x,z)=c_{l}(x_{i})e^{-\eta(\Omega_{l}x_{i})z}\) and the duality (3.10). Since \(\eta(\Omega_{-l}x_{i})q=\eta(x_{i})(1-\omega^{l})q\), the relation (3.13) can be rewritten as \[\operatorname{res}_{\alpha=x_{i}}y(\alpha)=e^{-\eta(x_{i})q}\left(\sum_{l=1}^ {m-1}c_{-l}^{\vee}(x_{i})s^{l}\right)e^{\eta(x_{i})q}\,. \tag{3.15}\] The classical Dunkl operator is defined as \[y^{c}=p-\sum_{l=1}^{m-1}v_{l}(q,\alpha)s^{l}\,. \tag{3.16}\] It has the same properties as in the quantum case, namely, (3.12) (with \(\hbar=0\)), (3.14) and (3.15). _Remark 3.2_.: Let \(\mathcal{L}_{\alpha}\) denote the line bundle over \(\mathcal{E}\) given by the quotient \((\mathbb{C}\times\mathbb{C})/\sim\) with \((q,\xi)\sim(q+\gamma,\exp(-\eta(\gamma)\alpha)\xi)\) for \(\gamma\in\Gamma\). Under the \(\mathbb{Z}_{m}\)-action on \(\mathcal{E}\), we have \((\mathcal{L}_{\alpha})^{s}=\mathcal{L}_{\omega^{-1}\alpha}\). The function \(v_{l}(q,\alpha)\), for a fixed \(\alpha\), represents a meromorphic section of \(\mathcal{L}_{\Omega_{-l}\alpha}\simeq\mathcal{L}_{\alpha}\otimes(\mathcal{L}_ {\alpha}^{s^{l}})^{*}\). Hence, the Dunkl operator \(y(\alpha)\) acts on (meromorphic) sections of \(\mathcal{L}_{\alpha}\) in agreement with conventions in [18]. ### Lax matrix We will make use of the fact that the system (3.1) admits a Lax presentation \[\frac{dL}{dt}=[L,A]\,, \tag{3.17}\] for a suitable matrices \(L=L(p,q)\), \(A=A(p,q)\). The Lax pair \(L,A\) can be found following the method of [15]. We only need the Lax matrix \(L\); it is calculated from the Dunkl operator (3.16) by adapting the recipe from [15]. Namely, let \(\mathbb{C}(p,q)\) denote the space of (meromorphic) functions of \(p,q\in\mathbb{C}\), with the \(\mathbb{Z}_{m}\)-action \(s(p,q)=(\omega^{-1}p,\omega q)\). One considers \(\mathbb{C}(p,q)\rtimes\mathbb{Z}_{m}\) acting on itself by left multiplication. If we use a vector-space isomorphism \(\mathbb{C}(p,q)\rtimes\mathbb{Z}_{m}\simeq\mathbb{C}\mathbb{Z}_{m}\otimes \mathbb{C}(p,q)\), we can interpret the action of any element as a \(\mathbb{Z}_{m}\times\mathbb{Z}_{m}\) matrix with entries from \(\mathbb{C}(p,q)\). For example, multiplication by \(q\) and \(s\) are represented by the following matrices \(Q\) and \(S\), respectively: \[Q=\operatorname{diag}(q,\omega^{-1}q,\ldots,\omega^{-m+1}q),,\quad S=\sum_{i \in\mathbb{Z}_{m}}E_{i+1,i}=\begin{pmatrix}0&\ldots&\ldots&0&1\\ 1&0&\ldots&\ldots&0\\ 0&1&0&\ldots&0\\ &&\ddots&&\\ 0&\ldots&0&1&0\end{pmatrix}\,. \tag{3.18}\] The action of the Dunkl operator (3.16) is then represented by the following _Lax matrix_\(L=(L_{ij})\): \[L_{ij}=\begin{cases}\omega^{i}p&\text{for $i=j$}\,,\\ v_{i-j}(\omega^{-i}q,\alpha)&\text{for $i\neq j$}\,,\end{cases}\qquad(i,j\in\mathbb{Z}_{m} )\,. \tag{3.19}\] We write \(L=L(\alpha)\) to indicate the dependence on the spectral parameter. 
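The matrices (3.18), (3.19) can be handled directly in sympy. The sketch below (added as an illustration) checks for \(m=3\) that \(S^{m}=\mathbb{I}\) and \(SQ=\omega QS\) mirror the crossed-product relations, and that \(L(\omega^{-1}\alpha)=\omega SL(\alpha)S^{-1}\) (property (3.22) below), assuming only the homogeneity \(v_{l}(\omega x,\omega z)=\omega^{-1}v_{l}(x,z)\), which follows from (3.8), the \(\mathbb{Z}_{m}\)-invariance of the couplings, and \(\sigma(\omega q)=\omega\sigma(q)\):

```python
import sympy as sp

m = 3
w = sp.exp(2*sp.pi*sp.I/m)
p, q = sp.symbols('p q')

# V[l, a] stands for v_l(omega^{-a} q, alpha); the assumed homogeneity
# v_l(omega*x, omega*z) = omega^{-1} v_l(x, z) gives
# v_l(omega^{-a} q, omega^{-1} alpha) = omega * v_l(omega^{-(a-1)} q, alpha).
V = {(l, a): sp.Symbol(f'v{l}_{a}') for l in range(1, m) for a in range(m)}

Q = sp.diag(*[w**(-i)*q for i in range(m)])
S = sp.Matrix(m, m, lambda i, j: 1 if i == (j + 1) % m else 0)

def lax(shift):
    """Lax matrix (3.19) with alpha replaced by omega^{-shift} alpha, written in the V symbols."""
    return sp.Matrix(m, m, lambda i, j: w**i*p if i == j
                     else w**shift * V[(i - j) % m, (i - shift) % m])

L = lax(0)
assert (S**m - sp.eye(m)).applyfunc(sp.simplify) == sp.zeros(m)
assert (S*Q - w*Q*S).applyfunc(sp.simplify) == sp.zeros(m)           # mirrors s q = omega q s
assert (lax(1) - w*S*L*S**-1).applyfunc(sp.simplify) == sp.zeros(m)  # property (3.22)
print("Q, S relations and Z_m-equivariance of L verified for m =", m)
```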
The Lax matrix has first order poles at the fixed points \(\alpha=x_{i}\) and it has the following properties: \[L(\alpha+\gamma)=e^{-\eta(\gamma)Q}L(\alpha)e^{\eta(\gamma)Q} \quad\forall\ \gamma\in\Gamma\,, \tag{3.20}\] \[\text{res}_{\alpha=x_{i}}L(\alpha)=e^{-\eta(x_{i})Q}\left(\sum_{l =1}^{m-1}c_{-l}^{\vee}(x_{i})S^{l}\right)e^{\eta(x_{i})Q}\,,\] (3.21) \[L(\omega^{-1}\alpha)=\omega SL(\alpha)\,S^{-1}\,, \tag{3.22}\] with the above matrices \(Q\) and \(S\). These properties immediately follow from (3.12) (with \(\hbar=0\)), (3.14) and (3.15); alternatively, they can be verified directly. The method of [15] proves that the above \(L\) admits a Lax partner \(A\) so that the equation (3.17) holds (see Remark 3.3 below). Hence, the coefficients \(b_{i}\) of the characteristic polynomial \[\det(L-k\mathbb{I})=(-1)^{m}\left(k^{m}+b_{1}k^{m-1}+\dots+b_{m}\right) \tag{3.23}\] remain constant under the hamiltonian dynamics (3.1). Note that the hamiltonian \(h(p,q)\) has degree \(m\) in \(p\). Since the coefficients \(b_{i}=b_{i}(\alpha;p,q)\) have degree \(<m\) in \(p\) if \(i<m\), we must have \(b_{i}=b_{i}(\alpha)\). On the other hand, \[b_{m}=(-1)^{m}\det L=(-1)^{m}(\prod_{i=0}^{m-1}\omega^{i})p^{m}+\dots=-p^{m}+ \dots\,, \tag{3.24}\] so we must have \(b_{m}=-h(p,q)+b_{m}(\alpha)\). As a result, \[\det(L-k\mathbb{I})=(-1)^{m}\left(k^{m}+b_{1}(\alpha)k^{m-1}+\dots+b_{m}( \alpha)-h(p,q)\right)\,. \tag{3.25}\] To find an explicit formula for \(\det(L-k\mathbb{I})\), one may try calculating the determinant directly, but this seems daunting for \(m=4,6\). Instead, we will obtain the answer momentarily from the following symmetry of \(L\). Namely, write \(L=L(p,q;\alpha)\) for the Lax matrix (3.19), and \(L^{\vee}\) for the Lax matrix with the dual couplings \(c^{\vee}\). Then we have the following relation: \[\det(L(p,q;\alpha)-k\mathbb{I})=-\det(L^{\vee}(k,\alpha;q)-p\mathbb{I})\,. \tag{3.26}\] Indeed, the matrix in the r.h.s has the following entries: \[(L^{\vee}-p\mathbb{I})_{ij}=\begin{cases}\omega^{i}k-p=-\omega^{i}(w^{-i}p-k)& \text{for $i=j$}\,,\\ v_{i-j,c^{\vee}}(\omega^{-i}\alpha,q)=-\omega^{i}v_{j-i,c}(\omega^{i}q,\alpha)& \text{for $i\neq j$}\,.\end{cases} \tag{3.27}\] Thus, \((L^{\vee}-p\mathbb{I})_{ij}=-\omega^{i}(L-k\mathbb{I})_{m-i,m-j}\) and \[L^{\vee}-p\mathbb{I}=-\text{diag}(1,\omega,\dots,\omega^{m-1})C(L-k\mathbb{I} )C^{-1}\,,\qquad C=\sum_{i\in\mathbb{Z}_{m}}E_{i,-i}\,, \tag{3.28}\] which makes (3.26) obvious. Now, combining (3.25) and (3.26), we get \[(-1)^{m}\det(L-k\mathbb{I})=h^{\vee}(k,\alpha)-h(p,q)\,, \tag{3.29}\] where \(h^{\vee}\) denotes the hamiltonian (2.40) with the dual couplings \(c^{\vee}\). _Remark 3.3_.: Following [15], let us substitute the classical Dunkl operator (3.16) into the classical dual hamiltonian \(h^{\vee}\). By [15, (5.19)], this recovers the classical hamiltonian \(h(p,q)\), i.e., \[h^{\vee}(y^{c},\alpha)=h(p,q)\,. \tag{3.30}\] In the representation discussed above, \(y^{c}\) becomes the Lax matrix \(L\) and this relation turns into \[h^{\vee}(L,\alpha)=h(p,q)\mathbb{I}\,. \tag{3.31}\] This gives another proof of (3.29). Furthermore, if one uses instead the _quantum_ Dunkl operator \(y\), then according to [15, (5.20)] one gets \[h^{\vee}(y,\alpha)=\widehat{h}+\widehat{a}\,, \tag{3.32}\] for a suitable \(\widehat{a}\in\mathcal{D}_{\mathcal{E}}\rtimes\mathbb{Z}_{m}\). The classical limit of \(\hbar^{-1}\widehat{a}\) as \(\hbar\to 0\) gives an element \(a\in\mathbb{C}(p,q)\rtimes\mathbb{Z}_{m}\). 
As explained in [15], the matrix \(A=A(p,q)\) representing \(a\) gives a Lax partner for \(L\), satisfying (3.17). ### Spectral curves The formula (3.29) gives us an explicit one-parameter family of the _spectral curves_\(\det(L-k\mathbb{I})=0\) as \[\widetilde{\Sigma}^{\vee}=\{(k,\alpha)\,:\,h^{\vee}(k,\alpha)=z\}\,, \tag{3.33}\] parameterised by the value \(z\) of the hamiltonian \(h(p,q)\). The Lax matrix \(L\) has \(m\) distinct eigenvalues generically (this is obviously true if \(c=0\), hence also true for generic couplings). When \(\alpha\) approaches a fixed point \(\alpha=x_{i}\), the eigenvalues tend to infinity, and their behaviour is determined by \(\operatorname{res}_{\alpha=x_{i}}L\). Using (3.21), we find that the eigenvalues of \(L\) near \(\alpha=x_{i}\) are given by \[k=\frac{\mu_{j}^{\vee}}{\alpha-x_{i}}+O(1)\,,\qquad\mu_{j}^{\vee}=\mu_{j}^{ \vee}(x_{i}):=\sum_{l=1}^{m-1}\omega^{jl}c_{-l}^{\vee}(x_{i})\,,\qquad j=0, \dots,m-1\,. \tag{3.34}\] If the stabiliser of \(x_{i}\) is \(\mathbb{Z}_{m_{i}}\subset\mathbb{Z}_{m}\), then \(c_{l}^{\vee}=0\) unless \(lm_{i}\) is zero modulo \(m\). As a result, \(\mu_{j}^{\vee}=\mu_{j+m_{i}}^{\vee}\), so among \(\mu_{j}^{\vee}\) there will be only \(m_{i}\) different values. For example, for \(m=6\) and a fixed point with stabiliser \(\mathbb{Z}_{3}\), we have \[\{\mu_{j}^{\vee}\}=(\mu_{0}^{\vee},\mu_{1}^{\vee},\mu_{2}^{\vee},\mu_{0}^{ \vee},\mu_{1}^{\vee},\mu_{2}^{\vee})\,,\qquad\mu_{0}^{\vee}+\mu_{1}^{\vee}+ \mu_{2}^{\vee}=0\,. \tag{3.35}\] As is readily seen from (3.22), the spectral curve \(\widetilde{\Sigma}^{\vee}\) is invariant under the \(\mathbb{Z}_{m}\)-action \[s\,:\ k\to\omega k\,,\quad\alpha\to\omega^{-1}\alpha\,,\qquad\omega=e^{2\pi i /m}\,. \tag{3.36}\] (This also follows from the \(\mathbb{Z}_{m}\)-invariance of the hamiltonian.) Let \[\Sigma^{\vee}:=\widetilde{\Sigma}^{\vee}/\mathbb{Z}_{m} \tag{3.37}\] be the quotient curve. Both curves may be viewed as \(m\)-sheeted branched coverings \(\widetilde{\Sigma}^{\vee}\to\mathcal{E}\) and \(\Sigma^{\vee}\to\mathcal{E}/\mathbb{Z}_{m}=\mathbb{P}^{1}\), respectively. The action of the stabiliser \(\mathbb{Z}_{m_{i}}\) does not permute the sheets (3.34): indeed, the sheets near \(\alpha=0\) have distinct \(\mu_{j}\)'s so cannot be permuted. As a result, each sheet is invariant under the stabiliser \(\mathbb{Z}_{m_{i}}\) of \(\alpha=x_{i}\), which implies that the \(O(1)\) term in (3.34) must have correct symmetry, hence \[k=\frac{\mu_{j}^{\vee}}{\alpha-x_{i}}+O\left((\alpha-x_{i})^{m_{i}-1}\right)\,. \tag{3.38}\] This shows that \(\widetilde{\Sigma}^{\vee}\) can be compactified by adding \(m\) points above each \(\alpha=x_{i}\), so that the \(\mathbb{Z}_{m}\)-action extends to the compactification and the added points have \(\mathbb{Z}_{m_{i}}\) as their stabilisers. It also implies that in local invariant coordinates \(\epsilon=(\alpha-x_{i})^{m_{i}}\) and \(s=(\alpha-x_{i})k\), each branch (3.38) becomes \[s=\mu_{j}^{\vee}+\epsilon\,r_{j}(\epsilon)\,,\quad\text{with some }r_{j}\in \mathbb{C}[[\epsilon]]\,. \tag{3.39}\] This means that locally around \(\epsilon=0\), the branched covering \(\Sigma^{\vee}\to\mathbb{P}^{1}\) is _unramified_. ### Elliptic fibration, integrability, and Seiberg-Witten differential The above analysis of the curves \(h^{\vee}(k,\alpha)=z\), immediately carries over to the fibration (3.2): one just needs to replace \(c^{\vee}\) by \(c\). This partly proves Proposition 3.1. 
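As an aside, the linear-algebra step from (3.27) to (3.28) and (3.26) is easy to confirm symbolically. The sympy sketch below (added as an illustration) builds \(L-k\mathbb{I}\) for \(m=3\) with independent symbols for the off-diagonal entries, forms the dual matrix from (3.27), and checks both the conjugation identity (3.28) and the determinant relation (3.26):

```python
import sympy as sp

m = 3
w = sp.exp(2*sp.pi*sp.I/m)
p, k = sp.symbols('p k')

# V[l, a] stands for v_l(omega^{-a} q, alpha), treated here as independent symbols
V = {(l, a): sp.Symbol(f'v{l}_{a}') for l in range(1, m) for a in range(m)}

LkI  = sp.Matrix(m, m, lambda i, j: w**i*p - k if i == j
                 else V[(i - j) % m, i])                      # L - k*I, entries as in (3.19)
dual = sp.Matrix(m, m, lambda i, j: w**i*k - p if i == j
                 else -w**i * V[(j - i) % m, (-i) % m])       # L^vee - p*I, entries as in (3.27)

D = sp.diag(*[w**i for i in range(m)])
C = sp.Matrix(m, m, lambda i, j: 1 if j == (-i) % m else 0)

assert (dual + D*C*LkI*C**-1).applyfunc(sp.simplify) == sp.zeros(m)   # the identity (3.28)
assert sp.simplify(sp.expand(dual.det() + LkI.det())) == 0            # the identity (3.26)
print("determinant relation det(L - k) = -det(L^vee - p) verified for m =", m)
```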
To prove the remaining claims, let us proceed by calculating the genus of \(\widetilde{\Sigma}^{\vee}\) and \(\Sigma^{\vee}\). There are various ways to do that, and we choose the one which is best for our purposes. Consider the following meromorphic differentials on \(T^{*}\mathcal{E}\): \[\Omega_{1}=\frac{dk}{-\partial h^{\vee}/\partial\alpha}\,,\quad\Omega_{2}= \frac{d\alpha}{\partial h^{\vee}/\partial k}\,. \tag{3.40}\] Obviously, \(\Omega_{1}=\Omega_{2}\) on the level curves (3.33), and the resulting 1-form \(\Omega:=\Omega_{1}=\Omega_{2}\) is holomorphic and non-vanishing on \(\widetilde{\Sigma}^{\vee}\) away from the fixed points of \(\mathbb{Z}_{m}\). To analyse it near \(\alpha=x_{i}\), we take \(\alpha\) as a local coordinate and use that \[\partial h^{\vee}/\partial k=\frac{\partial f(k,\alpha)}{\partial k}\,,\qquad f (k,\alpha)=(-1)^{m}\det(L-k\mathbb{I})\,, \tag{3.41}\] with \[f(k,\alpha)=\prod_{j=0}^{m-1}\left(k-\frac{\mu_{j}^{\vee}}{\alpha-x_{i}}+\beta _{j}(\alpha-x_{i})^{m_{i}-1}+\ldots\right)\,. \tag{3.42}\] Picking one of the local branches (3.38), we see that \(m/m_{i}\) factors in (3.42) behave as \((\alpha-x_{i})^{m_{i}-1}\), while the remaining \(m-m/m_{i}\) factors behave as \((\alpha-x_{i})^{-1}\). Differentiating \(f\) with respect to \(k\) removes one of the factors; from this, \[\frac{\partial f(k,\alpha)}{\partial k}\sim(\alpha-x_{i})^{(m_{i}-1)(m/m_{i}- 1)-(m-m/m_{i})}=(\alpha-x_{i})^{-m_{i}+1}\,. \tag{3.43}\] As a result, on each branch, \[\Omega=\frac{d\alpha}{\partial h^{\vee}/\partial k}\sim(\alpha-x_{i})^{m_{i}-1}d \alpha\,. \tag{3.44}\] Hence, \(\Omega\) is holomorphic on \(\widetilde{\Sigma}^{\vee}\), and so the overall number of its zeros over \(\alpha=x_{i}\) is \(m(m_{i}-1)\). There are \(m/m_{i}\) fixed points in the \(\mathbb{Z}_{m}\)-orbit of \(x_{i}\), so they contribute \(m^{2}(m_{i}-1)/m_{i}\) zeros. The total number of zeros is therefore \[m^{2}\sum_{i}\left(1-\frac{1}{m_{i}}\right)\,. \tag{3.45}\] Here \((m_{i})=(2,2,2,2)\) for \(m=2\), \((m_{i})=(3,3,3)\) for \(m=3\), \((m_{i})=(4,4,2)\) for \(m=4\), and \((m_{i})=(6,3,2)\) for \(m=6\). In all cases, the above sum gives \(2m^{2}\), so \(\Omega\) is a holomorphic differential on \(\widetilde{\Sigma}^{\vee}\) with \(2m^{2}\) zeros. From that, the genus of \(\widetilde{\Sigma}^{\vee}\) is \(m^{2}+1\). Next, the form \(\Omega\) is clearly \(\mathbb{Z}_{m}\)-invariant so it defines a holomorphic form on \(\Sigma^{\vee}\). Near \(\alpha=x_{i}\) we use \(x:=(\alpha-x_{i})^{m_{i}}\) as a local coordinate; then \[\Omega\sim(\alpha-x_{i})^{m_{i}-1}d\alpha\sim dx\,. \tag{3.46}\] Hence, \(\Omega\) is non-vanishing on \(\Sigma^{\vee}\), so \(\Sigma^{\vee}\) has genus one. This establishes all the remaining claims in Proposition 3.1. _Remark 3.4_.: When viewed on \(\widetilde{\Sigma}^{\vee}\), \(\Omega\) is holomorphic and \(\mathbb{Z}_{m}\)-invariant. Up to a factor, there is only one such 1-form. Indeed, by local symmetry, it must have zero of order at least \(m_{i}-1\) at each point with stabiliser \(\mathbb{Z}_{m_{i}}\). Thus, it is bound to have the same divisor as \(\Omega\). We finish the section by exhibiting a Seiberg-Witten differential for the elliptic fibration on \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\). The canonical holomorphic symplectic form on \(T^{*}\mathcal{E}\) is \(\omega=dk\wedge d\alpha=d\lambda\), for \(\lambda\) the canonical Liouville 1-form, \[\lambda=k\,d\alpha\,. 
\tag{3.47}\] Both \(\omega\) and \(\lambda\) are \(\mathbb{Z}_{m}\)-invariant so descend to holomorphic forms on \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\). On the compactified fibers the form \(\lambda\) is only meromorphic; the reason being that on any particular branch near \(\alpha=x_{i}\) we have \[\lambda=\left(\frac{\mu_{j}^{\vee}}{\alpha-x_{i}}+O\left((\alpha-x_{i})^{m_{i }-1}\right)\right)d\alpha\,. \tag{3.48}\] We therefore conclude that \(\lambda\) has only simple poles and _constant_ (i.e. independent of \(z\)) residues \(\mu_{j}^{\vee}=\mu_{j}^{\vee}(x_{i})\). This allows us to view \(\lambda\) as a _Seiberg-Witten (SW) differential_ for the elliptic fibration on \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\). The residues of the SW differential are referred to as _linear masses_; as we see, they are directly related to the coupling parameters of the integrable system. ## 4 Spectral curves and elliptic pencils To interpret the elliptic fibration \[h^{\vee}(k,\alpha)=z \tag{4.1}\] in geometric terms, we convert it into a polynomial form, using the \(\mathbb{Z}_{m}\)-invariant combinations \[x=u(\alpha)\,,\qquad y=v(\alpha)k\,, \tag{4.2}\] where \(u,v\) are the functions from the table (2). Equally, we may consider the fibration by the level sets of the hamiltonian \(h\), \[h(p,q)=z\,, \tag{4.3}\] writing it in terms of \(x=u(q)\) and \(y=v(q)p\). The only difference between the two fibrations is in the coupling parameters: \(c^{\vee}\) or \(c\), respectively. The polynomial form of the fibration (4.3) is presented in Appendix D. It is of the form \[y^{m}+\sum_{j=2}^{m}Q_{j}(x)y^{m-j}=zP_{m}(x)\,, \tag{4.4}\] where \(P_{m}(x)\) is the same as in (2.6), (2.7)-(2.10). The SW differential in terms of \(x,y\) is chosen as \[\lambda=\frac{y\,dx}{(x-e_{1})(x-e_{2})(x-e_{3})}\quad(m=2)\,,\qquad\lambda= \frac{y\,dx}{(x-e_{1})(x-e_{2})}\quad(m=3,4,6)\,. \tag{4.5}\] As we verify in Appendix D, the fibration (4.4) describes an elliptic pencil of a special form. Below we describe it geometrically. The diagram illustrating elliptic pencils can be seen in figure 2. We first consider the cases of \(m=3,4,6\). It will be convenient to work in homogeneous coordinates in the projective plane, replacing (4.4) with \[Q(x,y,w)-zP(x,w)=0\,, \tag{4.6}\] where \((x:y:w)\) are the homogeneous coordinates on \(\mathbb{P}^{2}\), and \(z\) parameterises the pencil. The base of each pencil will be a collection of points on a union of three lines \(\ell_{0}\), \(\ell_{1}\), \(\ell_{2}\) meeting at a point. Up to projective equivalence, we can always assume that the lines are \[\ell_{0}:\ w=0\,,\quad\ell_{1}:\ x-e_{1}w=0\,,\qquad\ell_{2}:\ x-e_{2}w=0\,. \tag{4.7}\] Case \(m=3\):Choose three distinct points on each line, \[p_{0},p_{1},p_{2}\in\ell_{0}\,,\quad q_{0},q_{1},q_{2}\in\ell_{1}\,,\quad r_{ 0},r_{1},r_{2}\in\ell_{2}\,, \tag{4.8}\] and consider the pencil of cubic curves passing through these points. For this to work, the base points (4.8) of the pencil should be chosen subject to one overall constraint. Two further degrees Figure 2: Elliptic pencils. We use black dots to represent simple points, 2-crosses for double points, and 3-crosses for triple points. of freedom can be eliminated by applying projective transformation preserving the three lines. Hence, we have a six-parameter family of such pencils up to projective equivalence. More concretely, assuming the lines are chosen as in (4.7), we have a pencil (4.6) where \[Q=y^{3}+Q_{1}(x,w)y^{2}+Q_{2}(x,w)y+Q_{3}(x,w)\,,\qquad P(x,w)=w(x-e_{1}w)(x-e_{2 }w)\,. 
\tag{4.9}\] The base points of the pencil are found by intersecting the cubic \(Q=0\) with the lines: \[p_{i}=(1:-\alpha_{i}:0)\,,\quad q_{i}=(e_{1}:\beta_{i}:1)\,,\quad r_{i}=(e_{2} :\gamma_{i}:1)\,. \tag{4.10}\] Writing \(Q_{1}=a_{11}x+a_{12}w\), we find that \(\alpha_{0}+\alpha_{1}+\alpha_{2}=a_{11}\), \(\beta_{0}+\beta_{1}+\beta_{2}=-a_{11}e_{1}-a_{12}\), \(\gamma_{0}+\gamma_{1}+\gamma_{2}=-a_{11}e_{2}-a_{12}\). Hence, the parameters \(\alpha_{i},\beta_{i},\gamma_{i}\) satisfy the constraint \[\sum_{i}\alpha_{i}+\sum_{i}\frac{\beta_{i}}{e_{1}-e_{2}}+\sum_{i}\frac{\gamma _{i}}{e_{2}-e_{1}}=0\,. \tag{4.11}\] Furthermore, by a transformation \(y\mapsto y+ax+bw\) we can make \(Q_{1}=0\) bringing \(Q\) to the form \[Q=y^{3}+Q_{2}(x,w)y+Q_{3}(x,w)\,. \tag{4.12}\] In that case, \[\alpha_{0}+\alpha_{1}+\alpha_{2}=\beta_{0}+\beta_{1}+\beta_{2}=\gamma_{0}+ \gamma_{1}+\gamma_{2}=0\,. \tag{4.13}\] The Seiberg-Witten differential (4.5) in homogeneous coordinates becomes \[\lambda=\frac{y\,(wdx-xdw)}{w(x-e_{1}w)(x-e_{2}w)}\,. \tag{4.14}\] Its residues at \(w=0\) and \(x=e_{1,2}w\) are \(\alpha_{1,2,3}\), \((e_{1}-e_{2})^{-1}\beta_{1,2,3}\), and \((e_{2}-e_{1})^{-1}\gamma_{1,2,3}\), respectively. Thus, the geometric parameters of the pencil are directly related to the residues of \(\lambda\) (linear masses). Generically, we have 3 distinct residues for each of \(x=\infty,e_{1},e_{2}\); we express this by saying that the _pattern of residues_ of \(\lambda\) is \((111),(111),(111)\). Case \(m=4\):In this case we need a pencil of curves of degree 4 with two double points. We choose ten distinct points on the lines \(\ell_{0},\ell_{1},\ell_{2}\) as follows: \[p_{0},p_{1},p_{2},p_{3}\in\ell_{0}\,,\quad q_{0},q_{1},q_{2},q_{3}\in\ell_{1} \,,\quad r_{0},r_{1}\in\ell_{2}\,. \tag{4.15}\] The curves of the pencil are quartic curves passing through \[(p_{0}p_{1}p_{2}p_{3}q_{0}q_{1}q_{2}q_{3}r_{0}^{2}r_{1}^{2})\,. \tag{4.16}\] This notation means that each curve of the pencil should have an ordinary double point at both \(r_{0}\) and \(r_{1}\). By the same reasoning as above, there is a seven-parameter family of such pencils up to projective equivalence. The generic curves in the pencil are quartics with two double points, of geometric genus one. Assuming that the lines are of the form (4.7), we consider a pencil (4.6), with \(Q\) homogeneous of degree 4 and with \[P(x,w)=w(x-e_{1}w)(x-e_{2}w)^{2}\,. \tag{4.17}\] The quartic \(Q=0\) intersects the lines at points \[p_{i}=(1:-\alpha_{i}:0)\,,\quad q_{i}=(e_{1}:\beta_{i}:1)\,,\quad r_{i}=(e_{2}: \gamma_{i}:1)\,. \tag{4.18}\] As before, we find that the parameters describing the points are constrained by \[\sum_{i}\alpha_{i}+\sum_{i}\frac{\beta_{i}}{e_{1}-e_{2}}+2\sum_{i=0,1}\frac{ \gamma_{i}}{e_{2}-e_{1}}=0\,. \tag{4.19}\] Furthermore, by a linear transformation \(y\mapsto y+ax+bw\) we make \(Q_{1}=0\) bringing \(Q\) to the form \[Q=y^{4}+Q_{2}(x,w)y^{2}+Q_{3}(x,w)y+Q_{4}(x,w)\,. \tag{4.20}\] In that case, we have \[\alpha_{0}+\alpha_{1}+\alpha_{2}+\alpha_{3}=\beta_{0}+\beta_{1}+\beta_{2}+ \beta_{3}=\gamma_{0}+\gamma_{1}=0\,. \tag{4.21}\] We normalise the curves by making each double point \(r_{0},r_{1}\) into a pair of distinct points. The Seiberg-Witten differential (4.14) has simple poles at \(4\) points over each of \(x=\infty,e_{1},e_{2}\), with residues equal to \(\alpha_{0,1,2,3}\), \((e_{1}-e_{2})^{-1}\beta_{0,1,2,3}\), and \((e_{2}-e_{1})^{-1}\gamma_{0,1}\) (twice). 
We see that the residues of the SW differential are directly related to the geometric parameters of the pencil, and the pattern of residues is \((1111),(1111),(22)\). Case \(m=6\):This time we need a pencil of curves of degree six. Choose eleven points as follows: \[p_{0},p_{1},p_{2},p_{3},p_{4},p_{5}\in\ell_{0}\,,\quad q_{0},q_{1},q_{2},\in \ell_{1}\,,\quad r_{0},r_{1}\in\ell_{2}\,. \tag{4.22}\] The curves of the pencil are of degree six, required to pass through \[(p_{0}p_{1}p_{2}p_{3}p_{4}p_{5}q_{0}^{2}q_{1}^{2}q_{2}^{2}r_{0}^{3}r_{1}^{3})\,. \tag{4.23}\] This notation means that each curve of the pencil should have an ordinary double point at each of \(q_{0,1,2}\) and a triple point at \(r_{0}\) and \(r_{1}\). By the same reasoning, there is an eight-parameter family of such pencils up to projective equivalence. The generic curves in the pencil are sextics with three double and two triple points, of geometric genus one. Assuming that the lines \(\ell_{0},\ell_{1},\ell_{2}\) are brought to the form (4.7), we obtain a pencil of the form (4.6), with \(Q\) homogeneous of degree \(6\) and with \[P(x,w)=w(x-e_{1}w)^{2}(x-e_{2}w)^{3}\,. \tag{4.24}\] The sextic \(Q=0\) intersects the lines at points \[p_{i}=(1:-\alpha_{i}:0)\,,\quad q_{i}=(e_{1}:\beta_{i}:1)\,,\quad r_{i}=(e_{2}: \gamma_{i}:1)\,. \tag{4.25}\] The eleven parameters \(\alpha_{0,1,2,3,4,5}\), \(\beta_{0,1,2}\), \(\gamma_{0,1}\) are constrained by \[\sum_{i}\alpha_{i}+2\sum_{i}\frac{\beta_{i}}{e_{1}-e_{2}}+3\sum_{i}\frac{ \gamma_{i}}{e_{2}-e_{1}}=0\,. \tag{4.26}\] We normalise the curves by making each double point \(q_{0,1,2}\) into a pair of distinct points, and each triple point \(r_{0,1}\) into three distinct points. The Seiberg-Witten differential (4.14) has simple poles at the six points over each of \(x=\infty,e_{1},e_{2}\), with the residues equal to \(\alpha_{i}\), \((e_{1}-e_{2})^{-1}\beta_{i}\) (repeated twice), and \((e_{2}-e_{1})^{-1}\gamma_{i}\) (thrice). The pattern of residues is therefore \((111111),(222),(33)\). When \(Q\) is brought into the form \[Q=y^{6}+Q_{2}(x,w)y^{4}+Q_{3}(x,w)y^{3}+Q_{4}(x,w)y^{2}+Q_{5}(x,w)y+Q_{6}(x,w)\,, \tag{4.27}\] then \[\alpha_{0}+\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}+\alpha_{5}=\beta_{0}+ \beta_{1}+\beta_{2}=\gamma_{0}+\gamma_{1}=0\,. \tag{4.28}\] We see, once again, that the geometric parameters of the pencil are directly related to the linear masses. Case \(m=2\):For completeness, let us give the result for \(m=2\), although in this case the answer is well known. This time, we work in the weighted projective plane \(\mathbb{P}^{2}_{1,2,1}\), so \(\deg x=\deg w=1\), \(\deg y=2\). It has a singular point \((0:1:0)\), and we choose four lines \(\ell_{0,1,2,3}\) (which automatically pass through that singular point)2. By a linear transformation of \(x,w\) we can bring the lines to Footnote 2: Another option is to work in \(\mathbb{P}^{2}\), in which case the elliptic pencil will have 9 base points, 3 of which infinitesimally close (see, for instance, [19], Section 2). \[\ell_{0}:\,\,w=0\,,\quad\ell_{1}:\,\,x-e_{1}w=0\,,\qquad\ell_{2}:\,\,x-e_{2}w= 0\,,\qquad\ell_{3}:\,\,x-e_{3}w=0\,. \tag{4.29}\] Choose eight distinct points \[p_{0},p_{1}\in\ell_{0}\,,\quad q_{0},q_{1}\in\ell_{1}\,,\quad r_{0},r_{1}\in \ell_{2}\,,\quad s_{0},s_{1}\in\ell_{3}\,, \tag{4.30}\] and consider a pencil of curves of weighted homogeneous degree four passing through these points. 
Hence, the pencil is of the form (4.6), with \[Q=y^{2}+Q_{1}(x,w)y+Q_{2}(x,w)\,,\qquad\deg Q_{1}=2\,,\quad\deg Q_{2}=4\,, \tag{4.31}\] and \[P(x,w)=w(x-e_{1}w)(x-e_{2}w)(x-e_{3}w)\,. \tag{4.32}\] The generic curves in the pencil are smooth elliptic curves. Write \[p_{i}=(1:-\alpha_{i}:0)\,,\quad q_{i}=(e_{1}:\beta_{i}:1)\,,\quad r_{i}=(e_{2} :\gamma_{i}:1)\,,\quad s_{i}=(e_{3}:\delta_{i}:1)\,. \tag{4.33}\] Then the condition that the curve \(Q=0\) passes through these points implies that \[\alpha_{0}+\alpha_{1}+\frac{\beta_{0}+\beta_{1}}{(e_{1}-e_{2})(e_{1}-e_{3})}+ \frac{\gamma_{0}+\gamma_{1}}{(e_{2}-e_{1})(e_{2}-e_{3})}+\frac{\delta_{0}+ \delta_{1}}{(e_{3}-e_{1})(e_{3}-e_{2})}=0\,. \tag{4.34}\] By a change of coordinates \(y\mapsto y+ax^{2}+bxw+cw^{2}\) we can make \(Q_{1}=0\), in which case \[\alpha_{0}+\alpha_{1}=\beta_{0}+\beta_{1}=\gamma_{0}+\gamma_{1}=\delta_{0}+ \delta_{1}=0\,. \tag{4.35}\] This reduces the set of parameters to \(\alpha_{0},\beta_{0},\gamma_{0},\delta_{0}\), plus the modular parameter, the cross-ratio of \(\infty,e_{1,2,3}\). The Seiberg-Witten differential (4.5) in homogeneous coordinates is \[\lambda=\frac{y\left(wdx-xdw\right)}{w(x-e_{1}w)(x-e_{2}w)(x-e_{3}w)}\,. \tag{4.36}\] Its residues at the points \(p_{0,1}\), \(q_{0,1}\), \(r_{0,1}\), \(s_{0,1}\) are equal to \[\alpha_{0,1}\,,\quad\frac{\beta_{0,1}}{(e_{1}-e_{2})(e_{1}-e_{3})}\,,\quad \frac{\gamma_{0,1}}{(e_{2}-e_{1})(e_{2}-e_{3})}\,,\quad\frac{\delta_{0,1}}{(e _{3}-e_{1})(e_{3}-e_{2})}\,. \tag{4.37}\] This relates the geometric parameters of the pencil to the linear masses. The pattern of residues is \((11),(11),(11),(11)\). Finally, an explicit formula for \(Q\) is \[Q=y^{2}+a_{0}x(x-e_{1}w)(x-e_{2}w)(x-e_{3}w)+\sum_{i=1,2,3}a_{i}w^{2}\prod_{j \neq i}^{3}\frac{(x-e_{j}w)}{(e_{i}-e_{j})}\,, \tag{4.38}\] with \(a_{0}=\alpha_{0}\alpha_{1}/2\), \(a_{1}=\beta_{0}\beta_{1}/2\), \(a_{2}=\gamma_{0}\gamma_{1}/2\), \(a_{3}=\delta_{0}\delta_{1}/2\). _Remark 4.1_.: Some of the above elliptic pencils appear in [20, 21, 22] in the context of discrete integrable maps and non-standard Kahan discretisation. Note also that in very special cases such pencils and the corresponding continuous hamiltonian dynamics appeared in the study of symmetric monopoles [23]. ## 5 Quantum curves Since the classical hamiltonian \(h(p,q)\) has a natural quantum analogue \(\widehat{h}\) (2.40), we obtain a natural quantisation of the fibration \(h(p,q)=z\) in the form of a one-parameter family of differential equations \[\widehat{h}\left(q,\hbar\frac{d}{dq}\right)\psi(q,z)=z\psi(q,z)\,,\quad z\in \mathbb{C}\,. \tag{5.1}\] Because of the duality, we can also replace \(q\) and \(d/dq\) by \(\alpha\) and \(d/d\alpha\), and the couplings \(c\) by the dual couplings \(c^{\vee}\), to obtain a quantisation of the spectral curve \(h^{\vee}(k,\alpha)=z\) in the form of a differential equation in the \(\alpha\)-variable. We will refer to both families of ODEs as _quantum curves_, as they represent the same object up to a change of notation. While the explicit form of (5.1) is available, we would like to characterise the arising ODEs intrinsically, similarly to the characterisation of classical spectral curves in terms of elliptic pencils. Like in the classical case, we may use the \(\mathbb{Z}_{m}\)-symmetry and view (5.1) as an equation on the Riemann sphere. By using the \(\mathbb{Z}_{m}\)-invariant coordinate \(x=u(q)\), we can rewrite the differential equation (5.1) as \[\widehat{h}\left(x,\hbar\frac{d}{dx}\right)\psi=z\psi\,. 
\tag{5.2}\] We refer to this as the quantum curve in a _rational form_. By clearing denominators, it can be further brought into a polynomial form. As it turns out, the result is a Fuchsian equation of a rather special type. This is summarised below case by case. For simplicity, set \(\hbar=1\) so the equations we consider are of the form \[\frac{d^{m}\psi}{dx^{m}}+A_{1}\frac{d^{m-1}\psi}{dx^{m-1}}+\cdots+A_{m}\psi=0\,. \tag{5.3}\] Here \(A_{i}=A_{i}(x)\) are rational functions satisfying \(A_{i}\to 0\) as \(x\to\infty\). Case \(m=3\):Consider Fuchsian equations of order \(3\) with three singular points \(x=\infty,e_{1},e_{2}\) (which may be taken as \(\infty,0,1\)) and with prescribed local exponents \[\alpha_{0,1,2}\,,\quad\beta_{0,1,2},\quad\gamma_{0,1,2}\,,\quad\text{with} \quad\sum_{i}\alpha_{i}+\sum_{i}\beta_{i}+\sum_{i}\gamma_{i}=3\,. \tag{5.4}\] This means that the local monodromy around \(x=\infty\) has eigenvalues \(e^{2\pi i\alpha_{0,1,2}}\), and similarly for \(x=e_{1,2}\). The condition on the sum of local exponents is known as the _Fuchs relation_. Typically, prescribing local exponents does not determine the equation uniquely: there may be additional parameters called _accessory parameters_. In this case there is just one such parameter, corresponding to the variable \(z\) in (5.2). Case \(m=4\):Consider Fuchsian equations of order \(4\) with three singular points \(x=\infty,e_{1},e_{2}\) and with prescribed local exponents \[\alpha_{0,1,2,3}\,,\quad\beta_{0,1,2,3},\quad\gamma_{0,1},\quad 1+\gamma_{0, 1}\,,\quad\text{with}\quad\sum_{i}\alpha_{i}+\sum_{i}\beta_{i}+2\sum_{i}\gamma _{i}=4\,. \tag{5.5}\] The last condition says that the total sum of local exponents is \(6\), which is the Fuchs relation. In addition, in this case, some of the local exponents differ by an integer; this is known as the _resonance_, and one expects local solutions to contain logarithms. Prescribing local exponents in this case allows for several accessory parameters. However, if we impose _semi-simplicity_ of the local monodromy (absence of logarithms) at \(x=e_{2}\), then the family of such equations is parameterised by a single parameter, \(z\). Case \(m=6\):Consider Fuchsian equations of order \(6\) with three singular points \(x=\infty,e_{1},e_{2}\) and with prescribed local exponents \[\alpha_{0,1,2,3,4,5}\,,\quad\beta_{0,1,2},\quad 1+\beta_{0,1,2}\,,\quad \gamma_{0,1},\quad 1+\gamma_{0,1},\quad 2+\gamma_{0,1}\,,\quad\sum_{i} \alpha_{i}+2\sum_{i}\beta_{i}+3\sum_{i}\gamma_{i}=6\,. \tag{5.6}\] Again, this allows for several accessory parameters, and since some local exponents differ by an integer one expects local solutions to contain logarithmic terms. However, imposing the condition of semi-simplicity of the local monodromy at \(x=e_{1,2}\), we obtain a one-parameter family of such equations, parameterised by \(z\). Case \(m=2\):This is the well-known case of the _Heun equation_, a Fuchsian equations of order \(2\) with four singular points \(x=\infty,e_{1},e_{2},e_{3}\) (which may be taken as \(\infty,0,1,t\)) and with prescribed local exponents \[\alpha_{0,1}\,,\quad\beta_{0,1},\quad\gamma_{0,1}\,,\quad\delta_{0,1}\,,\quad \text{with}\ \ \alpha_{0}+\alpha_{1}+\beta_{0}+\beta_{1}+\gamma_{0}+\gamma_{1}+\delta_{0}+ \delta_{1}=2\,. \tag{5.7}\] Such Fuchsian equations have one accessory parameter, \(z\). The precise relationship between the quantum curves and the types of Fuchsian equations described above is as follows. 
**Proposition 5.1**.: _For \(m=2,3,4,6\), the equation (5.1) when written in rational form falls into one of the above classes, with the following relation between the local exponents and the parameters of the Cherednik algebra:_ \[m=3:\ \alpha_{j}=\frac{j+\mu_{j}(0)\hbar^{-1}}{3}\,,\ \beta_{j}=\frac{j+\mu_{j}( \eta_{1})\hbar^{-1}}{3}\,,\quad\gamma_{j}=\frac{j+\mu_{j}(\eta_{2})\hbar^{-1} }{3}\,,\] \[m=4:\ \alpha_{j}=\frac{j+\mu_{j}(0)\hbar^{-1}}{4}\,,\ \beta_{j}= \frac{j+\mu_{j}(\omega_{3})\hbar^{-1}}{4}\,,\quad\gamma_{j}=\frac{j+\mu_{j} (\omega_{1,2})\hbar^{-1}}{2}\,,\] \[m=6:\ \alpha_{j}=\frac{j+\mu_{j}(0)\hbar^{-1}}{6}\,,\ \beta_{j}= \frac{j+\mu_{j}(\eta_{1,2})\hbar^{-1}}{3}\,,\quad\gamma_{j}=\frac{j+\mu_{j} (\omega_{1,2,3})\hbar^{-1}}{2}\,,\] \[m=2:\ \alpha_{j}=\frac{j+\mu_{j}(0)\hbar^{-1}}{2}\,,\ \beta_{j}= \frac{j+\mu_{j}(\omega_{1})\hbar^{-1}}{2}\,,\quad\gamma_{j}=\frac{j+\mu_{j} (\omega_{2})\hbar^{-1}}{2}\,,\quad\delta_{j}=\frac{j+\mu_{j}(\omega_{3})\hbar^ {-1}}{2}\,.\] _Here \(\mu_{j}(x_{i})\) are the linear masses (2.24) attached to the fixed points of \(\mathbb{Z}_{m}\)._ This can be checked by using explicit formulas for \(\widehat{h}\), the details can be found in Appendix C. ## 6 Further connections Let us describe some other contexts where closely related objects appear. ### Hitchin systems Hitchin systems on algebraic curves are known as a rich source of complex integrable systems [24]. For curves of genus \(g=0,1\) one needs to allow Higgs fields to have poles [25, 26, 27], and several many-body integrable systems have already been identified with Hitchin systems in that way. Our cases can be interpreted as Hitchin systems on the orbifold tori which are Riemann spheres with with three or four punctures as shown in figure 3. This goes in accordance with the class-S theory description given in [28], and, equivalently, M-theory orbifold construction as in [29]. In the following, we explain how to obtain the Hitchin system description from the Lax presentation. First, by looking at the properties of the Lax matrix \(L=L(\alpha)\), we can recognize \(\tilde{\phi}:=L(\alpha)d\alpha\) as a \(\mathbb{Z}_{m}\)-equivariant Higgs field on \(\mathcal{E}\). Namely, take \(\mathbb{C}\times\mathbb{C}^{m}\) with the \(\mathbb{Z}_{m}\)-action \(\omega.(\alpha,\xi)=(\omega^{-1}\alpha,S\xi)\), where \(S\) is the matrix (3.18). This induces a \(\mathbb{Z}_{m}\)-action on the total space of \(\tilde{E}:=\oplus_{i=0}^{m-1}\mathcal{L}_{\omega^{-i}q}\), where the line bundle \(\mathcal{L}_{q}\) was defined in Remark 3.2. This gives a \(\mathbb{Z}_{m}\)-equivariant element \(\tilde{\phi}\in\operatorname{End}\!\tilde{E}\otimes\Omega_{\mathcal{E}}^{1}( \sum x_{i})\). At the punctures \(\tilde{\phi}\) has simple poles with \(\operatorname{res}\tilde{\phi}|_{\alpha=x_{i}}\) diagonalisable, and with prescribed eigenvalues \(\mu_{j}^{\vee}(x_{i})\). Further, by modifications at the fixed points one can make the pair \((\tilde{E},\tilde{\phi})\)\(\mathbb{Z}_{m}\)-invariant; the modified pair can then be obtained as a pullback of some Higgs bundle \((E,\phi)\) on \(\mathbb{P}^{1}=\mathcal{E}/\mathbb{Z}_{m}\). That is how in general one relates equivariant Higgs bundles with Higgs bundles on orbifolds, which in turn are identified with (weakly) parabolic Higgs bundles [30] (see also [31, 32] for related studies). 
The conclusion is that the phase space of our integrable system through the map \((p,q)\mapsto L(\alpha)\mapsto\tilde{\phi}\mapsto\phi\) gets identified with an open subset of the moduli space \(\mathcal{M}^{\vee}\) of parabolic \(\mathrm{GL}_{m}\) Higgs bundles on \(\mathbb{P}^{1}\). The parabolic data consists of the eigenvalues/eigenspaces of the residues of \(\phi\) at the orbifolded points (punctures) in \(\mathbb{P}^{1}\), with three punctures for \(m=3,4,6\) and four punctures for \(m=2\). (The definition of \(\mathcal{M}^{\vee}\) also requires a choice of parabolic weights, but this is unimportant since we work on an open subspace of the moduli space.) The moduli space \(\mathcal{M}^{\vee}\) carries a structure of a complex integrable system: the Hitchin fibration and Hitchin system. This identifies our integrable system with the Hitchin system on an open subset of \(\mathcal{M}^{\vee}\). The Hitchin fibration is built from the family of spectral curves which coincide with our spectral curves \(\Sigma_{z}^{\vee}=\widetilde{\Sigma}_{z}^{\vee}/\mathbb{Z}_{m}\). Since these have genus one, the fibers are isomorphic to \(\Sigma_{z}^{\vee}\), \(z\in\mathbb{C}\). The dynamics of the Hitchin system is linear along the fibers. Because the dynamics in \(p,q\) coordinates along the elliptic curves \(\Sigma_{z}\) is also linear, we conclude that \[\Sigma_{z}\cong\Sigma_{z}^{\vee}\quad\forall\ z \tag{6.1}\] (where \(z\) is assumed to be generic so that \(\Sigma_{z},\Sigma_{z}^{\vee}\) are non-singular). This property is non-obvious; combined with (3.28) it is reminiscent of the SYZ-type _mirror symmetry_ for Hitchin fibrations due to Donagi-Pantev [33]. See also Remark 6.3 below. _Remark 6.1_.: Here is the sketch of how to observe (6.1) without resorting to Hitchin systems. Starting from the Lax matrix \(L(p,q;\alpha)\) and its spectral curve \(\widetilde{\Sigma}^{\vee}\) : \(\det(L-kI)=0\), we view the family of eigenlines \((L-kI)\ell=0\) parameterised by \((k,\alpha)\in\widetilde{\Sigma}\) as a line bundle \(\mathcal{L}\) over \(\widetilde{\Sigma}^{\vee}\). The dynamics \(p(t),q(t)\) induces a dynamics \(\mathcal{L}(t)\) on the Jacobian \(\mathrm{Jac}(\widetilde{\Sigma}^{\vee})\). One then checks the following two properties: (1) the induced dynamics on \(\mathrm{Jac}(\widetilde{\Sigma}^{\vee})\) is linear, and (2) \(\mathcal{L}^{s}\cong\mathcal{L}\) for any \(s\in\mathbb{Z}_{m}\). As a result, the linear motion along \(\Sigma\) in the phase space is mapped onto a linear motion along a \(\mathbb{Z}_{m}\)-fixed subtorus in the Jacobian of \(\widetilde{\Sigma}^{\vee}\), which is isomorphic to \(\mathrm{Jac}(\widetilde{\Sigma}^{\vee}/\mathbb{Z}_{m})\cong\Sigma^{\vee}\). This implies the isomorphism (6.1). _Remark 6.2_.: In the case \(m=4\), the quantum hamiltonian \(\widehat{h}\) appeared in the studies of multi-conformal blocks [34]. Figure 3: Three and four punctured spheres corresponding to \((T^{2}\times\mathbb{C})/\mathbb{Z}_{m}\) with fixed points (“punctures”) labeled by Young diagrams representing partitions of \(m\). ### Local systems, star-shaped quivers, and generalised DAHAs According to the non-abelian Hodge correspondence [35, 36], the moduli space of Higgs bundles (the Dolbeaut space \(\mathcal{M}_{Dol}\)) over a complex algebraic curve \(X\), has two other avatars, \(\mathcal{M}_{dR}\) (de Rham) and \(\mathcal{M}_{B}\) (Betti). 
The three spaces are diffeomorphic: \(\mathcal{M}_{Dol}\) and \(\mathcal{M}_{dR}\) are obtained from each other by rotating the complex structure within a hyper-Kahler family, while \(\mathcal{M}_{dR}\) and \(\mathcal{M}_{B}\) are identified as complex-analytic spaces by the Riemann-Hilbert correspondence. From that perspective, if \(\mathcal{M}\) is one of our moduli spaces of Higgs bundles on the punctured Riemann sphere, then the corresponding de Rham moduli space is precisely one of the four spaces of Fuchsian systems considered by Boalch [37]. As he explains, these moduli spaces are nothing but the ALE spaces considered by Kronheimer [38] which can also be recast as quiver varieties [39] associated with the affine Dynkin quivers of type \(\widetilde{D}_{4}\), \(\widetilde{E}_{6}\), \(\widetilde{E}_{7}\), \(\widetilde{E}_{8}\) (these are precisely the star-shaped affine Dynkin quivers) as represented in figure 4. Note that there are corresponding 3d \(\mathcal{N}=4\) quiver gauge theories which are the _mirror_ theories for the circle reduction of \(D_{4}\), \(E_{6}\), \(E_{7}\), and \(E_{8}\) theories [40, 41]. The \(\widetilde{D}_{4}\) case (\(m=2\) case in our language) corresponds to the family of \(2\times 2\) Fuchsian systems on \(\mathbb{P}^{1}\) with four singularities and prescribed local exponents at the singular points; it has been studied from various angles in Painleve theory and related contexts, see in particular [42, 43, 44, 45, 46]. The \(E_{6,7,8}\) cases (\(m=3,4,6\)) are also closely related to Painleve theory, but to difference rather than continuous Painleve equations. As explained in Sections 6 and 7 of [37], these are essentially the surfaces from Sakai's list [47] within his geometric approach to Painleve equations (they correspond to the cases _Add1, Add2, Add3_ in [47]). From that perspective, our duality \(c\mapsto c^{\vee}\) and (6.1) appears to be similar to the Okamoto transformation for Painleve VI, interpreted in terms of middle convolution in [48, 37] (cf. Remark 6.3 below). On the Betti side, we have spaces of the monodromy data of the above Fuchsian systems. According to the general theory [49], these are modelled by multiplicative quiver varieties associated to the affine Dynkin quivers of type \(\widetilde{D}_{4}\), \(\widetilde{E}_{6,7,8}\). From yet another perspective, they appear in the work of Etingof, Oblomkov, and Rains [50] on generalised rank-one DAHAs. These varieties can be characterised as certain affine del Pezzo surfaces, see Sections 6 and 9 of [50]. _Remark 6.3_.: As Eric Rains pointed out to us, the duality isomorphism (6.1) can be explained from the results of [50]. Indeed, the Betti spaces from [50] are written as affine hypersurfaces whose coefficients are given by \(D_{4}/E_{6}/E_{7}/E_{8}\) characters, hence they are invariant under the action of the corresponding Weyl group on parameters; this is also seen from the quiver interpretation of the middle convolution in [51, 49]. By taking a limit to the Higgs moduli space, we conclude that these also do not change under the Weyl group action. Then one needs to check that the transformation \(c\mapsto c^{\vee}\) can be identified with a suitable element of the Weyl group. Therefore, the corresponding Hitchin fibrations are isomorphic. ### Quantum curves and opers As explained above, we can view the fibration \(\{\Sigma_{z}^{\vee}|\}_{z\in\mathbb{C}}\) on \(T^{*}\mathcal{E}/\mathbb{Z}_{m}\) as a Hitchin fibration over a punctured \(\mathbb{P}^{1}\). 
Quantisation of \(\Sigma_{z}^{\vee}\) gives a pencil of Fuchsian ODEs with prescribed local monodromy data. The monodromy around each puncture is semi-simple and has prescribed eigenvalues, with some repetitions if \(m=4,6\). If we write \(G=\mathrm{GL}_{m}\) and denote by \(M_{i}\in G\) the monodromy around \(x=e_{i}\) (in some chosen basis), then we require \(M_{i}\) to belong to a particular conjugacy class \([\Lambda_{i}]:=\{g\Lambda_{i}g^{-1}\,|\,g\in G\}\) for some diagonal matrix \(\Lambda_{i}\). For example for \(m=6\), \(\Lambda_{0}\) is a generic diagonal matrix, while \(\Lambda_{1,2}\) are of the form \(\mathrm{diag}(a,a,b,b,c,c)\) and \(\mathrm{diag}(d,d,d,e,e,e)\), respectively; the global monodromy in this case represents a point on the character variety \[\mathcal{M}_{B}:=\{M_{0},M_{1},M_{2}\in G\,|\,M_{0}M_{1}M_{2}=\mathbb{I}\,,\, \,M_{i}\in[\Lambda_{i}]\}\,//\,G\,, \tag{6.2}\] which is the Betti moduli space mentioned above. (These character varieties are precisely the affine del Pezzo surfaces from [50].) Each quantum curve can also be viewed as a rank \(m\) trivial bundle over \(\mathbb{P}^{1}\) with (flat) connection, so it represents a point in the de Rham space, \(\mathcal{M}_{dR}\). As it comes from an ODE, this automatically has the form of a \(\mathrm{GL}_{m}\)-oper. We therefore observe that the pencil of quantum curves can be associated with the one-dimensional Lagrangian subvariety of opers, \(\mathcal{L}\subset\mathcal{M}_{dR}\). This illustrates the general philosophy, going back to Nekrasov-Rosly-Shatashvili [52] and Gaiotto [53], that quantizing spectral curves of a Hitchin system should produce the variety of opers in the corresponding de Rham moduli space. Note that for compact curves of genus \(\geq 2\), a result of that kind has been established in [54], but the case of curves with punctures remains open in general. Note also that in the case of superconformal gauge theories, e.g., \(SU(2)\) gauge theory with \(N_{f}=4\), the quantum spectral curves can be studied with the help of instanton counting [55]. ### 5d theories There is an approach to 4d \(\mathcal{N}=2\) SQFTs which allows us to view them as a result of compactifying a 5d theory on a circle. It is then natural to expect that the classical and quantum curves of the 4d theory can be obtained as a limit of the corresponding 5d families. For 5d theories that can be constructed in string theory using five-brane webs, there are systematic approaches for deriving the SW curves on \(\mathbb{R}^{4}\times S^{1}\)[56, 57]. The curve can be expressed in terms of a polynomial equation in \((t,w)\in\mathbb{C}^{*}\times\mathbb{C}^{*}\) with monomials associated with the vertices of a 2d dot diagram which is the dual graph of a 5-brane web, and the coefficients encode the moduli and parameters of the 5d theories. This is particularly applicable to the 5d theories corresponding to the 4d \(D_{4}\) and \(E_{6,7,8}\) theories, which are known as Seiberg's \(E_{n}\) theories. The schematic representation of the webs for Seiberg's theories is shown in figure 5. The SW curves for Seiberg's \(E_{n}\) theories using 5-brane webs have been obtained in [57]. They have been further quantised in [58], see also [59]. We have checked that our results are consistent with those in [57, 58]; the details will appear elsewhere. ## Acknowledgement We are grateful to C. Closset, P. Etingof, P. Van Haecke, A. King, O. Lechtenfeld, K. Lee, P. Longhi, J. Manschot, M. Martone, M. Mazzocco, J. Minahan, N. Nekrasov, E. Rains, T. 
Schedler, E. Sklyanin, Y. Tachikawa, K. Takemura and A. Veselov for stimulating discussions and useful remarks. PCA is supported in part by DOE grant DE-SC1019775. YL is supported by KIAS individual grant PG084801. ## Appendix A Elliptic functions and duality Here we collect the main properties of the elliptic functions used throughout the paper. Associated to the lattice \(\Gamma=2\mathbb{Z}\omega_{1}+2\mathbb{Z}\omega_{2}\), we have Weierstrass functions \(\sigma,\zeta,\wp\). Recall that \(\sigma(x)\) is an odd, entire function with \(\sigma^{\prime}(0)=1\) and with the properties \[\sigma(x+\gamma)=(-1)^{mn+m+n}e^{\eta(\gamma)(x+\gamma)}\sigma(x)\,,\qquad \gamma=2m\omega_{1}+2n\omega_{2}\,,\] (A.1) where \(\eta(\gamma)\) was defined in (3.9). Consider the function \[\varphi(x,z)=\frac{\sigma(x-z)}{\sigma(x)\sigma(-z)},\qquad\varphi(x,z)=- \varphi(z,x)\,.\] (A.2) It has the following translation properties: \[\frac{\varphi(x+\gamma,z)}{\varphi(x,z)}=e^{-\eta(\gamma)z},\qquad\frac{ \varphi(x,z+\gamma)}{\varphi(x,z)}=e^{-\eta(\gamma)x}\,,\qquad\gamma\in\Gamma\,.\] (A.3) Next, we have \[v_{l}(x,z)=\sum_{\{x_{i}\}}c_{l}(x_{i})e^{-\eta(\Omega_{l}x_{i})z}\varphi(x-x _{i},\Omega_{-l}z)\,,\qquad l\in\mathbb{Z}_{m}\setminus\{0\}\,,\] (A.4) where \(\Omega_{l}=1-\omega^{l}\) and the summation is over all fixed points \(x_{i}\in\mathcal{E}\). Note that since our convention is to set \(c_{l}(x_{i})=0\) whenever \(x_{i}\) is _not_ fixed by \(\omega^{l}\), the summation reduces to \(x_{i}\in(\Omega_{l})^{-1}\Gamma/\Gamma\). The following properties are now clear: \[\operatorname{res}_{x=x_{i}}v_{l}(x,z)=c_{l}(x_{i})e^{-\eta(\Omega_{l}x_{i})z }\,,\] (A.5) Figure 5: Shown above are the five-brane webs corresponding to 5d Seiberg’s theories. For all cases, the internal part of the diagram is represented by a large black circle, while only the external legs are illustrated in detail. Black dots are used to represent seven-branes and lines represent five-branes. \[v_{l}(x+\gamma,z)=e^{-\eta(\gamma)\Omega_{-l}z}v_{l}(x,z)\,,\quad\gamma\in\Gamma\,.\] (A.6) These properties characterize \(v_{l}\) uniquely. On the other hand, as a function of \(z\), \(v_{l}(x,z)\) has simple poles at fixed points \(z=x_{i}\), with residues \[\operatorname{res}_{z=x_{i}}v_{l}(x,z)=-\frac{1}{\Omega_{-l}}\sum_{\{x_{j}\}}c _{l}(x_{j})e^{\eta(\Omega_{-l}x_{i})x_{j}-\eta(\Omega_{l}x_{j})x_{i}}e^{-\eta (\Omega_{-l}x_{i})x}\,.\] (A.7) Also, under translations in the \(z\) variable one has \[v_{l}(x,z+\gamma)=e^{-\eta(\gamma)\Omega_{l}x}v_{l}(x,z)\,,\quad\gamma\in \Gamma\,.\] (A.8) (This uses that \(\eta(\Omega_{-l}\gamma)=\Omega_{l}\eta(\gamma)\) and the property \(\eta(a)b-\eta(b)a\in 2\pi i\mathbb{Z}\) for \(a,b\in\Gamma\).) Let us define the dual parameters \(c^{\vee}\) by \[c^{\vee}_{l}(x_{i})=\frac{1}{\Omega_{l}}\sum_{\{x_{j}\}}c_{-l}(x_{j})e^{\eta( \Omega_{l}x_{i})x_{j}-\eta(\Omega_{-l}x_{j})x_{i}}\,.\] (A.9) Then (A.7) becomes \[\operatorname{res}_{z=x_{i}}v_{l}(x,z)=-c^{\vee}_{-l}(x_{i})e^{-\eta(\Omega_{ -l}x_{i})x}\,.\] (A.10) We conclude that \(v_{l}\) can be uniquely characterised by its properties in \(z\). By comparing the properties in \(x\) and \(z\), we obtain the following _duality_: \[v_{l,c}(x,z)=-v_{-l,c^{\vee}}(z,x)\,.\] (A.11) Let us now describe explicitly the transformation \(c\mapsto c^{\vee}\) given by (A.9). We will use the notation \(\omega_{1,2,3}\) and \(\eta_{1,2}\) as in (1). 
We will also use the fact that \(\zeta(\omega_{1})=\pi/(4\omega_{1})\) for the lemniscatic lattice and \(\zeta(\omega_{1})=\pi/(2\sqrt{3}\omega_{1})\) for the equianharmonic lattice. For \(m=2\), take \(x_{0}=0\), \(x_{i}=\omega_{i}\) and denote \(g_{i}=c_{1}(x_{i})\), \(i=0,1,2,3\). Then \[\begin{pmatrix}g_{0}^{\vee}\\ g_{1}^{\vee}\\ g_{2}^{\vee}\\ g_{3}^{\vee}\end{pmatrix}=\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1\end{pmatrix}\begin{pmatrix}g_{0}\\ g_{1}\\ g_{2}\\ g_{3}\end{pmatrix}\,.\] For \(m=3\), take \(x_{0}=0\), \(x_{1,2}=\eta_{1,2}\). We have \(6\) parameters \(c_{i}(x_{0,1,2})\) with \(i=1,2\). Set \(\vec{c}_{i}=\begin{pmatrix}c_{i}(x_{0})&c_{i}(x_{1})&c_{i}(x_{2})\end{pmatrix}^ {T}\) (similarly for dual variables). Then \[\begin{pmatrix}\vec{c}_{1}^{\vee}\\ \vec{c}_{2}^{\vee}\end{pmatrix}=\begin{pmatrix}0&A\\ B&0\end{pmatrix}\begin{pmatrix}\vec{c}_{1}\\ \vec{c}_{2}\end{pmatrix}\] where \[A=\frac{1}{1-\omega}\begin{pmatrix}1&1&1\\ 1&\omega&\omega^{2}\\ 1&\omega^{2}&\omega\end{pmatrix},\quad B=\frac{1}{1-\omega^{2}}\begin{pmatrix}1 &1&1\\ 1&\omega^{2}&\omega\\ 1&\omega&\omega^{2}\end{pmatrix},\quad\omega=e^{2\pi i/3}.\] (A.12) For \(m=4\), take \(x_{0}=0\), \(x_{1}=\omega_{3}\), \(x_{2}=\omega_{1}\), \(x_{3}=\omega_{2}\). We have 7 parameters, \(c_{1,2,3}(x_{0,1})\) and \(c_{2}(x_{2})=c_{2}(x_{3})\). Set \(\vec{c}_{i}=\begin{pmatrix}c_{i}(x_{0})&c_{i}(x_{1})\end{pmatrix}^{T}\) for \(i=1,3\). Then \[\begin{pmatrix}c_{2}^{\vee}(x_{0})\\ c_{2}^{\vee}(x_{1})\\ c_{2}^{\vee}(x_{2})\end{pmatrix}=\frac{1}{2}\begin{pmatrix}1&1&2\\ 1&1&-2\\ 1&-1&0\end{pmatrix}\begin{pmatrix}c_{2}(x_{0})\\ c_{2}(x_{1})\\ c_{2}(x_{2})\end{pmatrix}\,,\qquad\begin{pmatrix}\vec{c}_{1}^{\vee}\\ \vec{c}_{3}^{\vee}\end{pmatrix}=\begin{pmatrix}0&C\\ D&0\end{pmatrix}\begin{pmatrix}\vec{c}_{1}\\ \vec{c}_{3}\end{pmatrix}\] (A.13) with \[C=\frac{1}{1-i}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\,,\quad D=\frac{1}{1+i}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\,.\] (A.14) Finally, for \(m=6\) we take \(x_{0}=0\), \(x_{1,2}=\eta_{1,2}\), \(x_{3}=\omega_{1}\), \(x_{4}=\omega_{2}\), \(x_{5}=\omega_{3}\). We have 8 parameters, \(c_{i}(x_{0})\), \(i=1,\ldots,5\), \(c_{i}(x_{1})=c_{i}(x_{2})\), \(i=2,4\), and \(c_{3}(x_{3})=c_{3}(x_{4})=c_{3}(x_{5})\). Set \(\vec{c}_{i}=\begin{pmatrix}c_{i}(x_{0})&c_{i}(x_{1})\end{pmatrix}^{T}\), for \(i=2,4\) and \(c_{2}(x_{1})=c_{2}(x_{2})\), \(c_{3}(x_{3})=c_{3}(x_{4})=c_{3}(x_{5})\), \(c_{4}(x_{1})=c_{4}(x_{2})\). Then \[\begin{pmatrix}c_{1}^{\vee}(x_{0})\\ c_{5}^{\vee}(x_{0})\end{pmatrix}=\begin{pmatrix}0&\frac{1}{1-\epsilon}\\ \frac{1}{1-\epsilon^{5}}&0\end{pmatrix}\begin{pmatrix}c_{1}(x_{0})\\ c_{5}(x_{0})\end{pmatrix}\,,\quad\begin{pmatrix}c_{3}^{\vee}(x_{0})\\ c_{3}^{\vee}(x_{3})\end{pmatrix}=\frac{1}{2}\begin{pmatrix}1&3\\ 1&-1\end{pmatrix}\begin{pmatrix}c_{3}(x_{0})\\ c_{3}(x_{3})\end{pmatrix}\] (A.15) and \[\begin{pmatrix}\vec{c}_{2}^{\vee}\\ \vec{c}_{4}^{\vee}\end{pmatrix}=\begin{pmatrix}0&E\\ F&0\end{pmatrix}\begin{pmatrix}\vec{c}_{2}\\ \vec{c}_{4}\end{pmatrix}\] where \(\epsilon=e^{\pi i/3}\) and \[E=\frac{1}{1-\epsilon^{2}}\begin{pmatrix}1&2\\ 1&\epsilon^{2}+\epsilon^{4}\end{pmatrix}\,,\quad F=\frac{1}{1-\epsilon^{4}} \begin{pmatrix}1&2\\ 1&\epsilon^{2}+\epsilon^{4}\end{pmatrix}\,.\] (A.16) ## Appendix B Quantum Hamiltonians Here we write explicitly the hamiltonians in elliptic form, based on the formula (2.40). We use the notation \(\omega_{1,2,3}\) and \(\eta_{1,2}\) for the fixed points, as in (1). 
Case \(m=2\):In this case, \(\tau=\omega_{2}/\omega_{1}\) is arbitrary. We denote \(g_{i}:=c_{1}(\omega_{i})\), \(i=0\ldots 3\); then \(\mu_{0}(\omega_{i})=-\mu_{1}(\omega_{i})=g_{i}\). The hamiltonian has the form \[\widehat{h}=(\hat{p}+f_{0})(\hat{p}-f_{0})+\alpha_{2}\wp(q)\,, \qquad f_{0}=\sum_{i=1}^{3}g_{i}\left(\zeta(q-\omega_{i})-\zeta(q)+\zeta( \omega_{i})\right)=\sum_{i=1}^{3}\frac{g_{i}\wp^{\prime}(q)}{2(\wp(q)-e_{i})}\,,\] (B.1) where \(\alpha_{2}\) is determined from \[(\hat{p}+g_{0}q^{-1})(\hat{p}-g_{0}q^{-1})=(\hat{p}-\widetilde{g}q^{-1})(\hat {p}+\widetilde{g}q^{-1})+\alpha_{2}q^{-2}\,,\qquad\widetilde{g}=g_{1}+g_{2}+g_ {3}\,.\] (B.2) The result is: \(\alpha_{2}=(\widetilde{g}-g_{0}+\hbar)(\widetilde{g}+g_{0})\). After rearranging, we get the familiar formula, \[\widehat{h}=\hat{p}\,^{2}-\sum_{i=0}^{3}g_{i}(g_{i}-\hbar)\wp(q-\omega_{i})\,,\] (B.3) up to an additive constant. Case \(m=3\):In this case \(\omega_{2}/\omega_{1}=\exp(\pi i/3)\), and we have the parameters \(\mu_{j}(\eta_{i})\), \(i,j=0,1,2\), where we put \(\eta_{0}=0\) for convenience. The hamiltonian has the form \[\widehat{h} =(\hat{p}-f_{2})(\hat{p}-f_{1})(\hat{p}-f_{0})+\alpha_{2}\wp(q)( \hat{p}-f_{0})+\alpha_{3}\wp^{\prime}(q)\,,\] (B.4) \[f_{j} =\sum_{i=1,2}\mu_{j}(\eta_{i})f(q,\eta_{i})=\sum_{i=1,2}\frac{ \mu_{j}(\eta_{i})(\wp^{\prime}(q)+\wp^{\prime}(\eta_{i}))}{2\wp(q)}\,,\qquad j =0,1,2\,,\] (B.5) with \(\alpha_{2},\alpha_{3}\) determined from (2.44). This can be rearranged as follows (cf. [1]): \[\widehat{h}=\hat{p}^{\,3}+(a_{2}\wp(q)+b_{2}\wp\left(q-\eta_{1} \right)+c_{2}\wp\left(q-\eta_{2}\right))\hat{p}-\frac{1}{2}\left(a_{3}\wp^{ \prime}(q)+b_{3}\wp^{\prime}\left(q-\eta_{1}\right)+c_{3}\wp^{\prime}\left(q -\eta_{2}\right)\right)\,,\] (B.6) where \[a_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(0),\mu_{1}(0)+\hbar,\mu_{2}(0)+2 \hbar)\,,\] (B.7) \[b_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(\eta_{1}),\mu_{1}(\eta_{1})+\hbar,\mu _{2}(\eta_{1})+2\hbar)\,,\] (B.8) \[c_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(\eta_{2}),\mu_{1}(\eta_{2})+\hbar,\mu _{2}(\eta_{2})+2\hbar)\,,\qquad i=1,2\,.\] (B.9) Here we use \(\sigma_{i}\) to denote the elementary symmetric polynomial of degree \(i\). Case \(m=4\):In this case \(\omega_{2}/\omega_{1}=\exp(\pi i/2)\), and we put \(\omega_{0}=0\) for convenience. We have the parameters \(\mu_{j}(\omega_{i})\), \(i=0,3\), and \(\mu_{j}(\omega_{1})=\mu_{j}(\omega_{2})\), with \(j=0,1,2,3\), and with \(\mu_{j}(\omega_{1,2})=\mu_{j+2}(\omega_{1,2})\). Recall that \(\sum_{j}\mu_{j}(x_{i})=0\) for each fixed point \(x_{i}\). The hamiltonian has the form \[\widehat{h}= (\hat{p}-f_{3})(\hat{p}-f_{2})(\hat{p}-f_{1})(\hat{p}-f_{0})\] \[+ \alpha_{2}\wp(q)(\hat{p}-f_{1})(\hat{p}-f_{0})\] \[+ \alpha_{3}\wp^{\prime}(q)(\hat{p}-f_{0})\] \[+ \alpha_{4}\wp^{\prime\prime}(q)\,,\] (B.10) up to an additive constant and with \(\alpha_{2},\alpha_{3},\alpha_{4}\) determined from (2.44). We have \[f_{j}=\mu_{j}(\omega_{1,2})(f(q,\omega_{1})+f(q,\omega_{2}))+\mu_{j}(\omega_{3 })f(q,\omega_{3})\,.\] (B.11) Using that \(\wp(\omega_{3})=0\) and \(\wp(\omega_{1})=-\wp(\omega_{2})\), we can further transform this into \[f_{j}=4\mu_{j}(\omega_{1,2})\frac{\wp(q)^{2}}{\wp^{\prime}(q)}+\frac{1}{2}\mu _{j}(\omega_{3})\frac{\wp^{\prime}(q)}{\wp(q)}\,.\] (B.12) The formula (B.10) can be expanded (cf. 
[1, 34]) as \(\widehat{h}=\hat{p}^{\,4}+A_{2}\hat{p}^{\,2}+A_{3}\hat{p}+A_{4}\) with \[A_{2} =a_{2}\wp(q)+b_{2}\wp\left(q-\omega_{1}\right)+2c_{2}\left(\wp \left(q-\omega_{2}\right)+\wp\left(q-\omega_{3}\right)\right),\] (B.13) \[A_{3} =-\frac{1}{2}a_{3}\wp^{\prime}(q)-\frac{1}{2}b_{3}\wp^{\prime} \left(q-\omega_{1}\right)+2\hbar c_{2}\left(\wp^{\prime}\left(q-\omega_{2} \right)+\wp^{\prime}\left(q-\omega_{3}\right)\right),\] (B.14) \[A_{4} =a_{4}\wp(q)^{2}+b_{4}\wp\left(q-\omega_{1}\right)^{2}+c_{2} \left(c_{2}+6\hbar^{2}\right)\left(\wp(q-\omega_{2})^{2}+\wp(q-\omega_{3})^{2}\right)\] \[\qquad+(a_{2}-b_{2})\,\wp(\omega_{2})\left(\wp(q-\omega_{2})- \wp(q-\omega_{3})\right)\,.\] (B.15) The seven parameters \(a_{2,3,4}\), \(b_{2,3,4}\), \(c_{2}\) are related to \(\mu_{j}(x_{i})\) by \[a_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(0),\mu_{1}(0)+\hbar,\mu_{2}(0)+2\hbar, \mu_{3}(0)+3\hbar)\,,\] (B.16) \[b_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(\omega_{3}),\mu_{1}(\omega_{3})+\hbar,\mu_{2}(\omega_{3})+2\hbar,\mu_{3}(\omega_{3})+3\hbar)\,,\] (B.17) \[c_{2} =\sigma_{2}(\mu_{0}(\omega_{1,2}),\mu_{1}(\omega_{1,2})+\hbar)=\mu _{0}(\omega_{1,2})(-\mu_{0}(\omega_{1,2})+\hbar)\,.\] (B.18) Case \(m=6\):In this case \(\omega_{2}/\omega_{1}=e^{\pi i/3}\), and we have six parameters \(\mu_{j}(0)\), \(j=0,\ldots,5\), further three parameters \(\mu_{j}(\eta_{1})=\mu_{j}(\eta_{2})\), \(j=0,1,2\), and two parameters \(\mu_{j}(\omega_{1})=\mu_{j}(\omega_{2})=\mu_{j}(\omega_{3})\), \(j=0,1\). Recall that \(\sum_{j}\mu_{j}(x_{i})=0\) for each fixed point. Also, we extend \(\mu_{j}(x_{i})\) by \(\mu_{j}(\eta_{1,2})=\mu_{j+3}(\eta_{1,2})\) and \(\mu_{j}(\omega_{1,2,3})=\mu_{j+2}(\omega_{1,2,3})\). The hamiltonian has the form \[\widehat{h}= (\hat{p}-f_{5})(\hat{p}-f_{4})(\hat{p}-f_{3})(\hat{p}-f_{2})( \hat{p}-f_{1})(\hat{p}-f_{0})\] \[+ \alpha_{2}\wp(q)(\hat{p}-f_{3})(\hat{p}-f_{2})(\hat{p}-f_{1})( \hat{p}-f_{0})\] \[+ \alpha_{3}\wp^{\prime}(q)(\hat{p}-f_{2})(\hat{p}-f_{1})(\hat{p}- f_{0})\] \[+ \alpha_{4}\wp^{\prime\prime}(q)(\hat{p}-f_{1})(\hat{p}-f_{0})\] \[+ \alpha_{5}\wp^{(3)}(q)(\hat{p}-f_{0})\] \[+ \alpha_{6}\wp^{(4)}(q)\,,\] (B.19) up to an additive constant. Here the parameters \(\alpha_{2},\ldots,\alpha_{6}\) are determined from (2.44). 
The coefficients \(f_{j}\) are given by \[f_{j}=\mu_{j}(\omega_{1,2,3})(f(q,\omega_{1})+f(q,\omega_{2})+f(q,\omega_{3})) +\mu_{j}(\eta_{1,2})(f(q,\eta_{1})+f(q,\eta_{2})).\] (B.20) Using that \(\wp(\omega_{2})=e^{4\pi i/3}\wp(\omega_{1})\), \(\wp(\omega_{3})=e^{2\pi i/3}\wp(\omega_{1})\), \(\wp^{\prime}(\eta_{2})=-\wp^{\prime}(\eta_{1})\), \(\wp^{\prime}(\omega_{i})=0\), \(\wp(\eta_{i})=0\), \(\wp^{\prime}(q)^{2}=4\prod_{i=1}^{3}(\wp(q)-\wp(\omega_{i}))\), we can rearrange them as \[f_{j}=\mu_{j}(\omega_{1,2,3})\frac{6\wp(q)^{2}}{\wp^{\prime}(q)}+\mu_{j}(\eta _{1,2})\frac{\wp^{\prime}(q)}{\wp(q)}.\] (B.21) The formula (B.19) can be further expanded into the form \[\widehat{h}=\hat{p}^{\,6}+A_{2}\hat{p}^{\,4}+A_{3}\hat{p}^{\,3}+A_{4}\hat{p}^{ \,2}+A_{5}\hat{p}+A_{6}\,.\] (B.22) However, the resulting coefficients are rather cumbersome: \[A_{2} =a_{2}\wp(q)+2b_{2}\sum_{i=1,2}\wp(q-\eta_{i})+3c_{2}\sum_{i=1,2,3 }\wp(q-\omega_{i})\] (B.23) \[A_{3} =-\frac{a_{3}}{2}\wp^{\prime}(q)-(b_{3}-3b_{2}\hbar)\sum_{i=1,2} \wp^{\prime}(q-\eta_{i})+6c_{2}\hbar\sum_{i=1,2,3}\wp^{\prime}(q-\omega_{i})\] (B.24) \[A_{4} =a_{4}\wp(q)^{2}+(b_{2}^{2}-9b_{3}\hbar+18b_{2}\hbar^{2})\sum_{i =1,2}\wp(q-\eta_{i})^{2}+b_{2}\sum_{i=1,2}\beta_{i}\left(\zeta(q-\eta_{i})+ \zeta(\eta_{i})\right)\] (B.25) \[\quad+3c_{2}\left(c_{2}+14\hbar^{2}\right)\sum_{i=1,2,3}\wp(q- \omega_{i})^{2}+2c_{2}\sum_{i=1,2,3}\gamma_{i}\wp(q-\omega_{i})\] (B.26) \[A_{5} =-\frac{a_{5}}{2}\wp^{\prime}(q)\wp(q)-(b_{2}b_{3}-b_{2}^{2}\hbar +18b_{3}\hbar^{2}-12b_{2}\hbar^{3})\sum_{i=1,2}\wp^{\prime}(q-\eta_{i})\wp(q -\eta_{i})\] (B.27) \[\quad+\sum_{i=1,2}(b_{2}(\delta_{i}-2\beta_{i}\hbar)+\beta_{i}b_ {3})\wp(q-\eta_{i})+6c_{2}\hbar\left(c_{2}+8\hbar^{2}\right)\sum_{i=1,2,3}\wp ^{\prime}(q-\omega_{i})\wp(q-\omega_{i})\] (B.28) \[A_{6} =a_{6}\wp^{3}(q)+\left(b_{3}\left(b_{3}-3b_{2}\hbar-60\hbar^{3} \right)\right)\sum_{i=1,2}\wp^{3}(q-\eta_{i})-\frac{1}{2}\sum_{i=1,2}\left(b_{ 3}(\delta_{i}-3\beta_{i}\hbar)\right)\wp^{\prime}(q-\eta_{i})\] (B.30) \[+c_{2}\left(26c_{2}\hbar^{2}+c_{2}^{2}+120\hbar^{4}\right)\sum_{i=1,2, 3}\wp^{3}(q-\omega_{i})+c_{2}\left(c_{2}+6\hbar^{2}\right)\sum_{i=1,2,3}\gamma_{ i}\wp^{2}(q-\omega_{i})\] (B.31) \[+c_{2}\sum_{i=1,2,3}(\kappa_{i}-2\hbar(\rho_{i}-2\xi_{i}\hbar)) \wp(q-\omega_{i})\,.\] (B.32) In these formulas, the parameters \(a_{2,3,4,5,6}\), \(b_{2,3}\), \(c_{2}\) are related to \(\mu_{j}(x_{i})\) in the following way: \[a_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(0),\mu_{1}(0)+\hbar,\mu_{2}(0)+2\hbar,\mu_{3}(0)+3\hbar,\mu_{4}(0)+4\hbar,\mu_{5}(0)+5\hbar)\,,\] (B.33) \[b_{i} =(-1)^{i}\sigma_{i}(\mu_{0}(\eta_{1,2}),\mu_{1}(\eta_{1,2})+ \hbar,\mu_{2}(\eta_{1,2})+2\hbar)\,,\] (B.34) \[c_{2} =\sigma_{2}(\mu_{0}(\omega_{1,2,3}),\mu_{1}(\omega_{1,2,3})+ \hbar)=\mu_{0}(\omega_{1,2,3})(-\mu_{0}(\omega_{1,2,3})+\hbar)\,.\] (B.35) The other parameters are expressed in terms of \(a_{i},b_{i},c_{i}\): \[\beta_{1} =(a_{2}-2b_{2}-27c_{2})\,\wp^{\prime}\left(\eta_{1}\right) \beta_{2} =-\beta_{1}\] \[\gamma_{1} =(a_{2}-8b_{2}-3c_{2})\,\wp\left(\omega_{1}\right) \gamma_{2} =\omega^{-2}\gamma_{1} \gamma_{3} =\omega^{2}\gamma_{1}\] \[\delta_{1} =-\frac{1}{2}\left(a_{3}-2b_{3}+(6b_{2}+108c_{2})\hbar\right) \wp^{\prime}\left(\eta_{1}\right) \delta_{2} =-\delta_{1}\] \[\xi_{1} =3\left(a_{2}+16b_{2}-3c_{2}\right)\wp\left(\omega_{1}\right)^{2} \xi_{2} =\omega^{2}\xi_{1} \xi_{3} =\omega^{-2}\xi_{1}\] \[\rho_{1} =-3\left(a_{3}+16b_{3}+(a_{2}-32b_{2}+9c_{2})\,\hbar\right)\wp \left(\omega_{1}\right)^{2} \rho_{2} =\omega^{2}\rho_{1} \rho_{3} 
=\omega^{-2}\rho_{1}\] \[\kappa_{1} =(a_{2}\left(c_{2}-4b_{2}\right)+a_{4}+28b_{2}c_{2}+16b_{2}^{2}-6c_{2}^{2}-72b_{3}\hbar+(144b_{2}-42c_{2})\,\hbar^{2})\wp\left(\omega_{1}\right)^{2} \qquad\kappa_{2} =\omega^{2}\kappa_{1} \qquad\kappa_{3} =\omega^{-2}\kappa_{1}\] Recall that here \(\omega=e^{\pi i/3}\). ## Appendix C Quantum curves as Fuchsian equations The quantum curves in elliptic form are given by a family of ODEs on the elliptic curve \(\mathcal{E}\) of the form \[(\widehat{h}-z1)\psi=0\,,\qquad z\in\mathbb{C}\,.\] (C.1) These equations are \(\mathbb{Z}_{m}\)-invariant and have regular singularities at the fixed points \(q=x_{i}\). To find the leading exponents at the singular points, we first look at a nonzero fixed point \(q=x_{i}\). Let us choose a local coordinate \(X=q-x_{i}\) and apply \(\widehat{h}\) to \(X^{\lambda}\), using the formula (2.40). By picking the most singular terms, it is easy to see that \[\widehat{h}(X^{\lambda})=c(\lambda)X^{\lambda-m}+\ldots\,,\qquad c(\lambda)=\prod_{j=0}^{m-1}((\lambda-j)\hbar-\mu_{j}(x_{i}))\,,\] (C.2) where the dots denote terms of higher degree in \(X\). (The form of \(c(\lambda)\) is dictated entirely by the first term, \(w_{m}\), in (2.40).) This tells us that the indicial equation determining the local exponents at \(q=x_{i}\) is \(c(\lambda)=0\), from which the local exponents are found as \[\lambda=j+\mu_{j}(x_{i})\hbar^{-1}\,,\qquad j=0,\ldots,m-1.\] (C.3) The same result is true for \(x_{i}=0\), simply because that is how we chose the correction terms, cf. (2.44). Note that in cases \(m=4,6\) we have repetitions among \(\mu_{j}(\omega_{i})\) or \(\mu_{j}(\eta_{i})\); as a result, some of the local exponents at these points differ by an integer. This is known as _resonance_, and in general it may lead to Jordan blocks in the local monodromy (and the presence of logarithmic terms in the local solutions). This, however, does not happen in our case. Indeed, by [18], Section 6.2, the monodromy representation factors through the orbifold Hecke algebra which is semisimple in our situation (cf. the proof of Theorem 7.1 in [1]). Hence, we obtain the following result. **Proposition C.1**.: _For generic parameters \(c_{l}(x_{i})\) of the Cherednik algebra, the differential equations (C.1) have local exponents given by (C.3) and semisimple (i.e. diagonalizable) local monodromy around each singular point._ ### Rational form Furthermore, we can convert these ODEs into a rational form so they become Fuchsian equations on the Riemann sphere. Let us use the \(\mathbb{Z}_{m}\)-invariant coordinate \(x=u(q)\) in accordance with the table (1). Then \[\frac{d}{dq}=w\frac{d}{dx}\,,\quad\text{with}\ \ w:=\frac{du}{dq}\,.\] (C.4) Introduce \[D_{j}:=w^{-j-1}(\hat{p}-f_{j})w^{j}=\hbar\frac{d}{dx}-\frac{f_{j}}{w}+j\hbar\frac{w^{\prime}}{w^{2}}\,,\qquad A_{j}:=\alpha_{j}\frac{\wp^{(j-2)}(q)}{w^{j}}\,.\] (C.5) It is easy to check that \(D_{j}\) and \(A_{j}\) are \(\mathbb{Z}_{m}\)-invariant and so depend rationally on \(x\). Then the expression (2.40) can be rearranged as \[w^{-m}\widehat{h}=D_{m-1}\dots D_{0}+\sum_{j=2}^{m}A_{j}D_{m-j-1}\dots D_{0}\,.\] (C.6) Let us write explicit expressions for each of \(m=2,3,4,6\). Case \(m=2\): In this case, \(x=\wp(q)\) and \(w=\wp^{\prime}(q)\), with \(w^{2}=4(x-e_{1})(x-e_{2})(x-e_{3})\), \(e_{i}=\wp(\omega_{i})\). 
We have \(\mu_{j}(\omega_{i})=(-1)^{j}g_{i}\) in terms of the parameters \(g_{0,1,2,3}\), and so \[\frac{f_{j}}{w}=\sum_{i=1,2,3}\frac{(-1)^{j}g_{i}}{2(x-e_{i})}\,,\quad\frac{w ^{\prime}}{w^{2}}=\sum_{i=1,2,3}\frac{1}{2(x-e_{i})}\,,\quad A_{2}=\frac{ \alpha_{2}x}{4(x-e_{1})(x-e_{2})(x-e_{3})}\,.\] (C.7) Hence, the operator \(w^{-2}(\widehat{h}-z1)\) takes the form \[\left(\hbar\frac{d}{dx}+\sum_{i=1,2,3}\frac{g_{i}+\hbar}{2(x-e_{i})}\right) \left(\hbar\frac{d}{dx}-\sum_{i=1,2,3}\frac{g_{i}}{2(x-e_{i})}\right)+\frac{ \alpha_{2}x-z}{4(x-e_{1})(x-e_{2})(x-e_{3})}\,,\] (C.8) which is equivalent to the Heun operator with four singular points \(x=\infty,e_{1},e_{2},e_{3}\) and the accessory parameter, \(z\). Since the coordinate \(x\) behaves like \(X^{2}=(q-\omega_{i})^{2}\) near \(q=\omega_{i}\), the local exponents get halved, i.e. they are of the form \(\frac{j+(-1)^{j}g_{i}\hbar^{-1}}{2}\), matching Proposition 5.1. Case \(m=3\):In this case, \(x=\frac{1}{2}\wp^{\prime}(q)\), \(w=\frac{1}{2}\wp^{\prime\prime}(q)=3\wp^{2}(q)\), and \(\wp^{3}(q)=(x-e_{1})(x-e_{2})\) where \(e_{i}=\frac{1}{2}\wp^{\prime}(\eta_{i})\). We have parameters \(\mu_{j}(\eta_{i})\), \(i,j=0,1,2\). A short calculation gives \[\frac{f_{j}}{w}=\sum_{i=1,2}\,\frac{\mu_{j}(\eta_{i})}{3(x-e_{i})} \,,\qquad\frac{w^{\prime}}{w^{2}}=\sum_{i=1,2}\,\frac{2}{3(x-e_{i})}\,,\] (C.9) \[A_{2}=\frac{\alpha_{2}}{3^{2}(x-e_{1})(x-e_{2})}\,,\qquad A_{3}= \frac{2\alpha_{3}x}{3^{3}(x-e_{1})^{2}(x-e_{2})^{2}}\,.\] (C.10) Hence, \[D_{j}=\hbar\frac{d}{dx}-\sum_{i=1,2}\frac{\mu_{j}(\eta_{i})-2j\hbar}{3(x-e_{i} )}\,.\] (C.11) In terms of these, the operator \(w^{-3}(\widehat{h}-z1)\) takes the form \[D_{2}D_{1}D_{0}+\frac{\alpha_{2}}{3^{2}(x-e_{1})(x-e_{2})}D_{0}+\frac{2\alpha _{3}x-z}{3^{3}(x-e_{1})^{2}(x-e_{2})^{2}}\,,\] (C.12) This is the quantum curve in rational form. It is an operator of Fuchsian type with three singular points \(x=\infty,e_{1},e_{2}\) and one accessory parameter, \(z\). (This is the general Fuchsian 3rd order ODE with three singular points and generic local exponents.) The coordinate \(x\) behaves like \(X^{3}\) near each fixed point, so the local exponents are obtained from those in (C.3) by dividing by \(3\). Hence, they are of the form \(\frac{j+\mu_{j}(\eta_{i})\hbar^{-1}}{3}\), matching Proposition 5.1. Case \(m=4\):In this case, \(x=\wp^{2}(q)\), \(w=2\wp(q)\wp^{\prime}(q)\), and \(\wp(q)\wp^{\prime 2}(q)=4(x-e_{1})(x-e_{2})^{2}\) where \(e_{1}=\wp^{2}(\omega_{3})=0\) and \(e_{2}=\wp^{2}(\omega_{1,2})\). By translating the variable \(x\), we can make \(e_{1},e_{2}\) arbitrary. We have parameters \(\mu_{j}(\omega_{3})\) and \(\mu_{j}(\omega_{1})=\mu_{j}(\omega_{2})\) for \(j=0,1,2,3\), with the property \(\mu_{j}(\omega_{1,2})=\mu_{j+2}(\omega_{1,2})\). 
A straightforward calculation gives \[\frac{f_{j}}{w}=\frac{\mu_{j}(\omega_{3})}{4(x-e_{1})}+\frac{\mu _{j}(\omega_{1,2})}{2(x-e_{2})}\,,\qquad\frac{w^{\prime}}{w^{2}}=\frac{3}{4(x- e_{1})}+\frac{1}{2(x-e_{2})}\,,\] (C.13) \[D_{j}=\hbar\frac{d}{dx}-\frac{\mu_{j}(\omega_{3})-3j\hbar}{4(x- e_{1})}-\frac{\mu_{j}(\omega_{1,2})-j\hbar}{2(x-e_{2})}\,,\] (C.14) \[A_{2}=\frac{\alpha_{2}}{4^{2}(x-e_{1})(x-e_{2})}\,,\quad A_{3}= \frac{2\alpha_{3}}{4^{3}(x-e_{1})^{2}(x-e_{2})}\,,\quad A_{4}=\frac{2\alpha_{ 4}(3x-2e_{1}-e_{2})}{4^{4}(x-e_{1})^{3}(x-e_{2})^{2}}\,.\] (C.15) The quantum curve in rational form is therefore \[D_{3}D_{2}D_{1}D_{0}+\frac{\alpha_{2}}{4^{2}\left(x-e_{1}\right) \left(x-e_{2}\right)}D_{1}D_{0}\] \[+\frac{2\alpha_{3}}{4^{3}\left(x-e_{1}\right)^{2}\left(x-e_{2} \right)}D_{0}+\frac{2\alpha_{4}(3x-2e_{1}-e_{2})-z}{4^{4}\left(x-e_{1}\right) ^{3}\left(x-e_{2}\right)^{2}}\,.\] (C.16) It is an operator of Fuchsian type with three singular points \(x=\infty,e_{1},e_{2}\) and one accessory parameter, \(z\). The coordinate \(x\) behaves like \(X^{4}\) near \(q=\omega_{0},\omega_{3}\) and like \(X^{2}\) near \(q=\omega_{1,2}\). Hence, the local exponents are obtained from those in (C.3) by dividing by \(4\) and \(2\), respectively, in agreement with Proposition 5.1. Case \(m=6\):This is similar to the previous cases. We use \(x=\wp^{3}(q)\) and \(w=3\wp^{2}(q)\wp^{\prime}(q)\). We have parameters \(\mu_{j}(0)\), \(\mu_{j}(\omega_{1,2,3})\) and \(\mu_{j}(\eta_{1,2})\). Let us use the shorthand \(\mu_{j}^{(\omega)}:=\mu_{j}(\omega_{1,2,3})\) and \(\mu_{j}^{(\eta)}:=\mu_{j}(\eta_{1,2})\). These have the periodicity property \(\mu_{j}^{(\omega)}=\mu_{j+2}^{(\omega)}\), \(\mu_{j}^{(\eta)}=\mu_{j+3}^{(\eta)}\). A straightforward calculation gives \[D_{j}:=\hbar\frac{d}{dx}-\frac{\mu_{j}^{(\eta)}-2j\hbar}{3(x-e_{1})}-\frac{\mu _{j}^{(\omega)}-j\hbar}{2(x-e_{2})}\,.\] (C.17) Also, the coefficients \(A_{i}=\alpha_{i}\wp^{(i-2)}(q)w^{-i}\) are found to be \[A_{2}=\frac{\alpha_{2}}{6^{2}\left(x-e_{1}\right)\left(x-e_{2} \right)},\ A_{3}=\frac{2\alpha_{3}}{6^{3}\left(x-e_{1}\right)^{2}\left(x-e_{2 }\right)},A_{4}=\frac{6\alpha_{4}}{6^{4}\left(x-e_{1}\right)^{2}\left(x-e_{2} \right)^{2}},\] (C.18) \[A_{5}=\frac{24\alpha_{5}}{6^{5}\left(x-e_{1}\right)^{3}\left(x-e _{2}\right)^{2}},\ A_{6}=\frac{24\alpha_{6}(5x-3e_{1}-2e_{2})}{6^{6}\left(x-e _{1}\right)^{4}\left(x-e_{2}\right)^{3}}\,.\] (C.19) With these, the operator \(w^{-6}(\widehat{h}-z1)\) takes the form \[D_{5}D_{4}D_{3}D_{2}D_{1}D_{0}+\frac{\alpha_{2}}{6^{2}\left(x-e _{1}\right)\left(x-e_{2}\right)}D_{3}D_{2}D_{1}D_{0}\] \[+\frac{2\alpha_{3}}{6^{3}\left(x-e_{1}\right)^{2}\left(x-e_{2} \right)}D_{2}D_{1}D_{0}+\frac{6\alpha_{4}}{6^{4}\left(x-e_{1}\right)^{2}\left( x-e_{2}\right)^{2}}D_{1}D_{0}\] \[+\frac{24\alpha_{5}}{6^{5}\left(x-e_{1}\right)^{3}\left(x-e_{2} \right)^{2}}D_{0}+\frac{24\alpha_{6}(5x-3e_{1}-2e_{2})-z}{6^{6}\left(x-e_{1} \right)^{4}\left(x-e_{2}\right)^{3}}\,.\] (C.20) This is the quantum curve in rational form. It is an operator of Fuchsian type with three singular points \(x=\infty,e_{1},e_{2}\) and one accessory parameter, \(z\). The coordinate \(x\) behaves like \(x\sim X^{6}\) near \(q=0\), \(x\sim X^{3}\) near \(q=\eta_{1,2}\), and \(x\sim X^{2}\) near \(q=\omega_{1,2,3}\). Hence, the local exponents are obtained from those in (C.3) by dividing those at \(\eta_{1,2}\) by \(3\), and those at \(\omega_{1,2,3}\) by \(2\), in agreement with Proposition 5.1. 
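All four rational forms above rest on the conjugation identity (C.5), \(D_{j}=w^{-j-1}(\hat{p}-f_{j})w^{j}=\hbar\frac{d}{dx}-\frac{f_{j}}{w}+j\hbar\frac{w^{\prime}}{w^{2}}\). The following short symbolic check is our own sketch (Python/SymPy, not part of the original derivation); it verifies the identity for an arbitrary invariant coordinate \(u(q)\) and coefficient function \(f_{j}(q)\), reading \(w^{\prime}\) as \(dw/dq\).

```python
import sympy as sp

q, hbar, j = sp.symbols('q hbar j')
u   = sp.Function('u')(q)      # invariant coordinate x = u(q)
f   = sp.Function('f')(q)      # stands for f_j as a function of q
psi = sp.Function('psi')(q)    # test function
w   = sp.diff(u, q)            # w = du/dq, so d/dq = w d/dx

# left-hand side of (C.5): w^{-j-1} (hbar d/dq - f_j) w^j acting on psi
lhs = w**(-j - 1) * (hbar * sp.diff(w**j * psi, q) - f * w**j * psi)

# right-hand side: (hbar d/dx - f_j/w + j hbar w'/w^2) psi, with d/dx = w^{-1} d/dq
rhs = hbar * sp.diff(psi, q) / w - f * psi / w + j * hbar * sp.diff(w, q) / w**2 * psi

print(sp.simplify(lhs - rhs))  # expected output: 0
```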
### Polynomial form To make it easier to compare quantum and classical curves, we convert them into a polynomial form. This is done by multiplying it from the left by \(P(x)^{m}\) where \(P(x)=(x-e_{1})(x-e_{2})(x-e_{3})\) for \(m=2\) and \(P(x)=(x-e_{1})(x-e_{2})\) for \(m=3,4,6\). We then rearrange the expression using \[\hat{y}:=mP(x)\hbar\frac{d}{dx}\,.\] (C.21) Below we present the results, case by case. In all cases we have one accessory parameter, \(z\). The coefficients \(\alpha_{2}\), etc., are related to the local exponents at singular points via the formula (2.44). One can view \(\alpha_{2},\ldots,\alpha_{m}\) as indeterminate, and instead use (2.44) to determine \(\mu_{j}(0)\) and the local exponents at \(x=\infty\) in terms of \(\alpha_{i}\). Case \(m=2\): the quantum curve in polynomial form is \[\left(\hat{y}+\sum_{i=1}^{3}\left(g_{i}-\hbar\right)\prod_{j\neq i}^{3}\left(x -e_{j}\right)\right)\left(\hat{y}-\sum_{i=1}^{3}g_{i}\prod_{j\neq i}^{3}\left(x -e_{j}\right)\right)+\left(\alpha_{2}x-z\right)\prod_{i=1}^{3}\left(x-e_{i} \right).\] (C.22) **Case \(m=3\)**: the quantum curve in polynomial form is \[Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1})(x-e_{2})Y_{0}+(2\alpha_{3}x-z)(x-e_{1})(x-e_{ 2})\,,\] (C.23) where \[Y_{j}=\hat{y}-(\mu_{j}(\eta_{1})+j\hbar)(x-e_{2})-(\mu_{j}(\eta_{2})+j\hbar)(x- e_{1})\,.\] (C.24) **Case \(m=4\)**: the quantum curve in polynomial form is \[Y_{3}Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1})(x-e_{2})Y_{1}Y_{0}\] \[+2\alpha_{3}(x-e_{1})(x-e_{2})^{2}Y_{0}+(2\alpha_{4}(3x-2e_{1}-e_ {2})-z)(x-e_{1})(x-e_{2})^{2}\,,\] (C.25) where \[Y_{j}=\hat{y}-(\mu_{j}(\omega_{3})+j\hbar)(x-e_{2})-2(\mu_{j}( \omega_{1,2})+j\hbar)(x-e_{1})\,.\] (C.26) **Case \(m=6\)**: the quantum curve in polynomial form is \[Y_{5}Y_{4}Y_{3}Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1})\,(x-e_{2})Y_{3 }Y_{2}Y_{1}Y_{0}\] \[+2\alpha_{3}(x-e_{1})\,(x-e_{2})^{2}Y_{2}Y_{1}Y_{0}+6\alpha_{4}(x -e_{1})^{2}\,(x-e_{2})^{2}Y_{1}Y_{0}\] \[+24\alpha_{5}(x-e_{1})^{2}\,(x-e_{2})^{3}Y_{0}+(24\alpha_{6}(5x-3 e_{1}-2e_{2})-z)(x-e_{1})^{2}\,(x-e_{2})^{3}\,,\] (C.27) where \[Y_{j}=\hat{y}-2(\mu_{j}^{(\eta)}+j\hbar)(x-e_{2})-3(\mu_{j}^{( \omega)}+j\hbar)(x-e_{1})\,.\] (C.28) ## Appendix D Classical spectral curves In Section 4 we described elliptic pencils of special form. Here we verify that our classical spectral curves fit that description. We also give explicit equations of these pencils in projective coordinates. This will be done case by case. Case \(m=2\):The classical limit \(\hbar=0\) of (C.22) can be written as \(Q-zP=0\), where \[Q =\left(y+\sum_{i=1}^{3}g_{i}\prod_{j\neq i}^{3}\left(x-e_{j} \right)\right)\left(y-\sum_{i=1}^{3}g_{i}\prod_{j\neq i}^{3}\left(x-e_{j} \right)\right)+\alpha_{2}x\prod_{i=1}^{3}\left(x-e_{i}\right),\] (D.1) \[P =(x-e_{1})(x-e_{2})(x-e_{3})\,.\] (D.2) Here we think of \(x,y\) as \(y=\frac{1}{2}\wp^{\prime}(q)p\), \(x=\wp(q)\), where \(p,q\) are canonical coordinates, \(\{p,q\}=1\). This induces the Poisson bracket \[\{y,x\}=2(x-e_{1})(x-e_{2})(x-e_{3})\,.\] (D.3) Rewriting \(Q,P\) in weighted homogeneous coordinates \((x:y:w)\) on \(\mathbb{P}^{2}_{1,2,1}\), we get \[Q=\left(y+\sum_{i=1}^{3}g_{i}\prod_{j\neq i}^{3}\left(x-e_{j}w \right)\right)\left(y-\sum_{i=1}^{3}g_{i}\prod_{j\neq i}^{3}\left(x-e_{j}w \right)\right)+\alpha_{2}x\prod_{i=1}^{3}\left(x-e_{i}w\right),\] (D.4) \[P=w(x-e_{1}w)(x-e_{2}w)(x-e_{3}w)\,.\] (D.5) The pencil \(Q-zP=0\) intersects the line \(x-e_{i}w=0\) at two points \((e_{i}:\pm g_{i}\prod_{j\neq i}(e_{i}-e_{j}):1)\). 
To find its intersection with the line \(w=0\), we set \(x=1,w=0\) and get \[(y+\widetilde{g})(y-\widetilde{g})+\alpha_{2}=0\,,\qquad\widetilde{g}=\sum_{i=1}^{3}g_{i}.\] (D.6) Recall that \(\alpha_{2}\) is determined by the classical variant of (B.2): \[(p+g_{0}q^{-1})(p-g_{0}q^{-1})=(p-\widetilde{g}q^{-1})(p+\widetilde{g}q^{-1})+\alpha_{2}q^{-2}\,.\] (D.7) It tells us that (D.6) can be rearranged as \((y+g_{0})(y-g_{0})=0\), and so the curves of the pencil pass through the points \((1:\pm g_{0}:0)\). Therefore, this is a pencil of the type described in Sec. 4. It is now straightforward to match \(Q\) to the expression (4.38). Case \(m=3\): The classical limit \(\hbar=0\) of (C.23) can be written as \(Q-zP=0\), where \[Q =Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1})(x-e_{2})Y_{0}+2\alpha_{3}x(x-e_{1})(x-e_{2})\,,\] (D.8) \[P =(x-e_{1})(x-e_{2})\,,\qquad Y_{j}=y-\mu_{j}(\eta_{1})(x-e_{2})-\mu_{j}(\eta_{2})(x-e_{1})\,.\] (D.9) Here \[x=\frac{1}{2}\wp^{\prime}(q)\,,\quad y=\wp(q)p\,,\qquad\{y,x\}=3(x-e_{1})(x-e_{2})\,.\] (D.10) Writing \(Q,P\) in homogeneous coordinates \((x:y:w)\) on \(\mathbb{P}^{2}\), we get \[Q =Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1}w)(x-e_{2}w)Y_{0}+2\alpha_{3}x(x-e_{1}w)(x-e_{2}w)\,,\] (D.11) \[P =w(x-e_{1}w)(x-e_{2}w)\,,\qquad Y_{j}=y-\mu_{j}(\eta_{1})(x-e_{2}w)-\mu_{j}(\eta_{2})(x-e_{1}w)\,.\] (D.12) The cubic \(Q=0\) intersects the line \(x-e_{1}w=0\) at the points \(\mu_{j}(\eta_{1})(e_{1}-e_{2})\), and \(\mu_{j}(\eta_{2})(e_{2}-e_{1})\) for the line \(x-e_{2}w=0\). To find the intersection with the line \(w=0\), we set \(x=1\), \(w=0\) and get \[(y-\widetilde{\mu}_{2})(y-\widetilde{\mu}_{1})(y-\widetilde{\mu}_{0})+\alpha_{2}(y-\widetilde{\mu}_{0})+2\alpha_{3}=0\,.\] (D.13) Using the relation (2.48) (and setting \(q=-1\)), we see that this factorizes as \[(y+\mu_{2}(0))(y+\mu_{1}(0))(y+\mu_{0}(0))=0\,.\] (D.14) Hence, the pencil \(Q-zP=0\) passes through points \((1:-\mu_{j}(0):0)\). Therefore, we recognize this as a pencil of cubics from Sec. 4. Finally, the polynomial \(Q\) can be rearranged as \[Q =y^{3}+Q_{2}y+Q_{3}\,,\] \[Q_{2} =a_{2}\left(x-e_{1}w\right)(x-e_{2}w)+b_{2}(e_{1}-e_{2})w\left(x-e_{2}w\right)+c_{2}(e_{2}-e_{1})w\left(x-e_{1}w\right),\] \[Q_{3} =a_{3}\left(x-e_{1}w\right)\left(x-e_{2}w\right)^{2}-b_{3}\left(e_{1}-e_{2}\right)^{2}w^{2}\left(x-e_{2}w\right)-c_{3}\left(e_{2}-e_{1}\right)^{2}w^{2}\left(x-e_{1}w\right)\,.\] The 6 parameters \(a_{2},b_{2},c_{2},a_{3},b_{3},c_{3}\) are symmetric combinations of linear masses. 
Indeed, by intersecting this cubic with the three lines, we find that \[a_{i}=\sigma_{i}(\mu_{0}(0),\mu_{1}(0),\mu_{2}(0))\,,\quad b_{i}=\sigma_{i}(\mu _{0}(\eta_{1}),\mu_{1}(\eta_{1}),\mu_{2}(\eta_{1}))\,,\quad c_{i}=\sigma_{i}( \mu_{0}(\eta_{2}),\mu_{1}(\eta_{2}),\mu_{2}(\eta_{2}))\,.\] Case \(m=4\):The classical limit of (C.25) is \(Q-zP=0\), with \[Q=Y_{3}Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1})(x-e_{2})Y_{1}Y_{0}\] \[+2\alpha_{3}(x-e_{1})(x-e_{2})^{2}Y_{0}+2\alpha_{4}(3x-2e_{1}-e_{2 })(x-e_{1})(x-e_{2})^{2}\,,\] \[P=(x-e_{1})(x-e_{2})^{2}\,,\qquad Y_{j}=y-\mu_{j}(\omega_{3})(x- e_{2})-2\mu_{j}(\omega_{1,2})(x-e_{1})\,.\] Here \[x=\wp^{2}(q)\,,\quad y=\frac{1}{2}\wp^{\prime}(q)p\,,\qquad\{y,x\}=4(x-e_{1}) (x-e_{2})\,.\] (D.15) We easily confirm that the quartic \(Q=0\) intersects \(\ell_{1}:x=e_{1}\) at points \[p_{j}\,:\ (x,y)=(e_{1},\mu_{j}(\omega_{3})(e_{1}-e_{2}))\,,\quad j=0,1,2,3\,,\] (D.16) while the intersection with \(\ell_{2}:x=e_{2}\) consists of two points of multiplicity two, \[q_{j}\,:\ (x,y)=(e_{2},2\mu_{j}(\omega_{1,2})(e_{2}-e_{1}))\,,\quad j=0,1\,,\] (D.17) due to repetitions among \(\mu_{j}(\omega_{1,2})\). Working in homogeneous coordinates \((x:y:w)\), we also confirm that the intersection of \(Q=0\) with \(\ell_{0}:w=0\) consists of 4 points, \[r_{j}=(1:-\mu_{j}(0):0)\,,\quad j=0,1,2,3\,.\] (D.18) It remains to check that each of the two points \((x_{0},y_{0})=(e_{2},2\mu_{j}(\omega_{1,2})(e_{2}-e_{1}))\) is an ordinary double point of the quartic \(Q=0\). For this, a simple check confirms that each summand in \(Q\) belongs to the ideal generated by \((x-x_{0})^{2}\), \((x-x_{0})(y-y_{0})\), and \((y-y_{0})^{2}\). Finally, here is the quartic \(Q=0\) in a symmetric homogeneous form: \[Q =y^{4}+Q_{2}y^{2}+Q_{3}y+Q_{4}\,,\] \[Q_{2} =a_{2}\left(x-e_{1}w\right)\left(x-e_{2}w\right)+(e_{1}-e_{2})\,w \left(b_{2}\left(x-e_{2}w\right)+2c_{2}\left(x-e_{1}w\right)\right),\] \[Q_{3} =\left(x-e_{2}w\right)^{2}\left(a_{3}\left(x-e_{1}w\right)-b_{3} \left(e_{1}-e_{2}\right)w\right),\] \[Q_{4} =\left(e_{1}-e_{2}\right)^{2}w^{2}\left(c_{2}\left(a_{2}-b_{2}+c_ {2}\right)+b_{4}\right)\left(x-e_{1}w\right)\left(x-e_{2}w\right)+a_{4}\left(x- e_{1}w\right)^{2}\left(x-e_{2}w\right){}^{2}\] \[+b_{4}\left(e_{1}-e_{2}\right)^{3}w^{3}\left(x-e_{2}w\right)+c_{2 }^{2}\left(e_{2}-e_{1}\right)^{3}w^{3}\left(x-e_{1}w\right)\,.\] Checking how it intersects the lines \(\ell_{0,1,2}\), we find that the 7 parameters \(a_{i}\), \(b_{i}\), \(c_{2}\) are symmetric combinations of the linear masses: \[a_{i}=\sigma_{i}(\mu_{0}(0),\ldots,\mu_{3}(0))\,,\quad b_{i}=\sigma_{i}(\mu_{0 }(\omega_{3}),\ldots,\mu_{3}(\omega_{3}))\,,\quad c_{2}=4\mu_{0}(\omega_{1,2}) \mu_{1}(\omega_{1,2})\,.\] Case \(m=6\):The classical limit of (C.27) is \(Q-zP=0\), with \[Q=Y_{5}Y_{4}Y_{3}Y_{2}Y_{1}Y_{0}+\alpha_{2}(x-e_{1})\left(x-e_{2 }\right)Y_{3}Y_{2}Y_{1}Y_{0}\] \[+2\alpha_{3}(x-e_{1})\left(x-e_{2}\right)^{2}Y_{2}Y_{1}Y_{0}+6 \alpha_{4}(x-e_{1})^{2}\left(x-e_{2}\right)^{2}Y_{1}Y_{0}\] \[+24\alpha_{5}(x-e_{1})^{2}\left(x-e_{2}\right)^{3}Y_{0}+24\alpha_ {6}(5x-3e_{1}-2e_{2})(x-e_{1})^{2}\left(x-e_{2}\right)^{3},\] \[P=(x-e_{1})^{2}(x-e_{2})^{3}\,,\qquad Y_{j}=y-2\mu_{j}^{(\eta)}(x -e_{2})-3\mu_{j}^{(\omega)}(x-e_{1})\,.\] Here \[x=\wp^{3}(q)\,,\quad y=\frac{1}{2}\wp(q)\wp^{\prime}(q)p\,,\qquad\{y,x\}=6(x-e_{1} )(x-e_{2})\,.\] (D.19) The sextic \(Q=0\) intersects \(\ell_{1}:x=e_{1}\) at three points of multiplicity two, \[p_{j}\,:\ (x,y)=(e_{1},2\mu_{j}^{(\eta)}(e_{1}-e_{2}))\,,\quad j=0,1,2\,,\] (D.20) while the intersection with 
\(\ell_{2}:x=e_{2}\) consists of two points of multiplicity three, \[q_{j}\,:\ (x,y)=(e_{2},3\mu_{j}^{(\omega)}(e_{2}-e_{1}))\,,\quad j=0,1\,,\] (D.21) Working in homogeneous coordinates \((x:y:w)\), we also confirm that the intersection of \(Q=0\) with \(\ell_{0}:w=0\) consists of 6 points, \[r_{j}=(1:-\mu_{j}(0):0)\,,\quad j=0,\ldots,5\,.\] (D.22) It remains to check that each of \(p_{j}\) is an ordinary double point, and each of \(q_{j}\) is an ordinary triple point. This follows from the formula for \(Q\). For example, taking \(q_{j}=(x_{0},y_{0})\), it is easy to confirm that each summand in \(Q\) belongs to the ideal generated by \((x-x_{0})^{3}\), \((x-x_{0})^{2}(y-y_{0})\), \((x-x_{0})(y-y_{0})^{2}\), and \((y-y_{0})^{3}\). Finally, here is the sextic \(Q=0\) in a symmetric homogeneous form: \[Q =y^{6}+Q_{2}y^{4}+Q_{3}y^{3}+Q_{4}y^{2}+Q_{5}y+Q_{6}\,,\] \[Q_{2} =a_{2}\left(x-e_{1}w\right)\left(x-e_{2}w\right)+2b_{2}\left(e_{1 }-e_{2}\right)w\left(x-e_{2}w\right)-3c_{2}(e_{1}-e_{2})w\left(x-e_{1}w\right),\] \[Q_{3} =a_{3}\left(x-e_{1}w\right)\left(x-e_{2}w\right)^{2}-2b_{3}\left( e_{1}-e_{2}\right)w\left(x-e_{2}w\right)^{2},\] \[Q_{4} =a_{2}b_{2}\left(e_{1}-e_{2}\right)w\left(x-e_{1}w\right)\left(x- e_{2}w\right)^{2}-2a_{2}c_{2}\left(e_{1}-e_{2}\right)w\left(x-e_{1}w\right)^{2} \left(x-e_{2}w\right)\] \[+a_{4}\left(x-e_{1}w\right)^{2}\left(x-e_{2}w\right)^{2}+b_{2}c_{ 2}\left(e_{1}-e_{2}\right)w\left(x-4e_{1}w+3e_{2}w\right)\left(x-e_{1}w\right) \left(x-e_{2}w\right)\] \[+b_{2}^{2}\left(e_{1}-e_{2}\right)^{2}w^{2}\left(x-e_{2}w\right) {}^{2}+3c_{2}^{2}\left(e_{1}-e_{2}\right)^{2}w^{2}\left(x-e_{1}w\right)^{2},\] \[Q_{5} =\left\{a_{3}b_{2}\left(e_{1}-e_{2}\right)w\left(x-e_{1}w\right) \left(x-e_{2}w\right)-a_{2}b_{3}\left(e_{1}-e_{2}\right)w\left(x-e_{1}w\right) \left(x-e_{2}w\right)\right.\] \[-a_{3}c_{2}\left(e_{1}-e_{2}\right)w\left(x-e_{1}w\right)^{2}+a_{ 5}\left(x-e_{1}w\right)^{2}\left(x-e_{2}w\right)\] \[+b_{3}c_{2}\left(e_{1}-e_{2}\right)w\left(x+2e_{1}w-3e_{2}w \right)\left(x-e_{1}w\right)-2b_{2}b_{3}\left(e_{1}-e_{2}\right)^{2}w^{2}\left( x-e_{2}w\right)\right\}\left(x-e_{2}w\right)^{2},\] \[Q_{6} =\left(e_{1}-e_{2}\right)^{4}\left(c_{2}^{2}\left(a_{2}-2b_{2}+2c _{2}\right)+b_{3}^{2}\right)w^{4}\left(x-e_{1}w\right)\left(x-e_{2}w\right)\] \[-\left(e_{1}-e_{2}\right)^{3}\left(c_{2}^{2}\left(a_{2}-2b_{2}+c _{2}\right)+a_{3}b_{3}-2b_{3}^{2}\right)w^{3}\left(x-e_{1}w\right)\left(x-e_{2 }w\right)^{2}\] \[+\left(e_{1}-e_{2}\right)^{2}\left(c_{2}\left(a_{4}-\left(a_{2}-b _{2}\right)\left(b_{2}-c_{2}\right)\right)-a_{3}b_{3}+b_{3}^{2}\right)w^{2} \left(x-e_{1}w\right)^{2}\left(x-e_{2}w\right)^{2}\] \[+a_{6}\left(x-e_{1}w\right)^{2}\left(x-e_{2}w\right)^{4}+b_{3}^{ 2}\left(e_{1}-e_{2}\right)^{5}w^{5}\left(x-e_{2}w\right)\] \[-c_{2}^{3}\left(e_{1}-e_{2}\right)^{5}w^{5}\left(x-e_{1}w\right)\,.\] The relations between the 8 parameters \(a_{i},b_{2,3},c_{2}\) and the linear masses are easily determined by considering how \(Q=0\) intersects the lines \(\ell_{0},\ell_{1},\ell_{2}\). We find that \[a_{i}=\sigma_{i}(\mu_{0}(0),\ldots,\mu_{5}(0))\,,\quad b_{i}=\sigma_{i}(2\mu_{0 }^{(\eta)},2\mu_{1}^{(\eta)},2\mu_{2}^{(\eta)})\,,\quad c_{2}=9\mu_{0}^{(\omega )}\mu_{1}^{(\omega)}\,.\] Algebraic integrability According to [1], Theorem 7.1, the differential operator \(\widehat{h}\) is algebraically integrable for certain values of the parameters \(\mu_{j}(x_{i})\). 
This implies the existence of a family of _double-Bloch_ eigenfunctions \[\widehat{h}\psi=z\psi\,,\qquad\psi=\psi(q,z),\quad z\in\mathbb{C}\,,\] (E.1) such that \[\psi(q+2\omega_{i},z)=M_{i}\psi(q,z)\,,\qquad i=1,2\,,\] (E.2) for some \(M_{1},M_{2}\in\mathbb{C}^{*}\). There is a procedure for calculating such solutions based on a version of Hermite-Bethe ansatz. This is explained below. For convenience, we put \(\hbar=1\). Recall that if \(x_{i}\in\mathcal{E}\) is a fixed point, with stabiliser \(\mathbb{Z}_{m_{i}}\subset\mathbb{Z}_{m}\), then \(\mu_{j}(x_{i})=\mu_{j+m_{i}}(x_{i})\). Assume, following [1], Section 7, that \[\mu_{j}(x_{i})\in m_{i}\mathbb{Z}\,.\] (E.3) This implies that the arithmetic progressions \(\{j+\mu_{j}(x_{i})+m_{i}\mathbb{Z}_{\geq 0}\}\), \(j=0,\ldots,m_{i}-1\) do not overlap. Write \(-n_{i}\) for the smallest number among \(j+\mu_{j}(x_{i})\). Recall that \(\mu_{j}(x_{i})\) sum to zero, therefore \(n_{i}\geq 0\). Now consider the set \[S_{i}:=\mathbb{Z}_{\geq 0}\setminus\cup_{j=0}^{m_{i}-1}\{n_{i}+j+\mu_{j}(x_{ i})+m_{i}\mathbb{Z}_{\geq 0}\}\}\,.\] (E.4) Our assumptions imply that \(S_{i}\) is a finite set and \(|S_{i}|=n_{i}\). Denote \(n:=\sum_{x_{i}}n_{i}\), and consider the following function \(\phi(q)\) depending on the parameters \(t_{1},\ldots,t_{n},\lambda\in\mathbb{C}\): \[\phi(q)=e^{\lambda q}\prod_{r=1}^{n}\sigma(q-t_{r})\,.\] (E.5) Let us impose \(n\) relations on these parameters (\(n_{i}\) relations for each fixed point \(x_{i}\)) as follows: \[\left[\frac{d^{s}}{dq^{s}}\left(\phi(q)e^{-n\eta(x_{i})q}\right)\right]_{q=x_ {i}}=0\quad\text{for all $s\in S_{i}$}\,.\] (E.6) We will refer to (E.5)-(E.6) as the Bethe ansatz equations. **Proposition E.1**.: _For generic solutions \(t_{1},\ldots,t_{n},\lambda\) of the Bethe ansatz equations, the function_ \[\psi=e^{\lambda q}\frac{\prod_{r=1}^{n}\sigma(q-t_{r})}{\prod_{x_{i}}\sigma(q -x_{i})^{n_{i}}}\] (E.7) _is an eigenfunction of the hamiltonian \(\widehat{h}\),_ \[\widehat{h}\psi=z\psi,\] (E.8) _with some \(z\in\mathbb{C}\) determined by \(t_{1},\ldots,t_{n},\lambda\). The functions \(\psi_{l}=\psi(\omega^{l}q)\) with \(0\leq l\leq m-1\) span the solution space to the eigenvalue problem (E.8) for generic \(z\)._ Proof.: First, by [1, Theorem 7.1] and the general results of [60, 61] (see Corollary 5.7 and Theorem 5.9 in [62]), for generic \(z\in\mathbb{C}\) the solution space to (E.8) is spanned by double-Bloch eigenfunctions. Let now \(t_{1},\ldots,t_{n},\lambda\) be a solution to the Bethe ansatz equation, and \(\psi\) be the corresponding function (E.7). Pick one of the fixed points \(x_{i}\), so that \(x_{i}\equiv\omega^{l}x_{i}\ (\operatorname{mod}\Gamma)\) for some \(l\). It can be checked that the function \[w(q):=e^{-n\eta(x_{i})q}\prod_{x_{i}}\sigma(q-x_{i})^{n_{i}}\] (E.9) transforms under the symmetry about \(q=x_{i}\) as follows: \[w(q)\mapsto\omega^{ln_{i}}w(q)\quad\text{when}\quad q\mapsto(1-\omega^{l})x_{i }+\omega^{l}q\,.\] (E.10) This implies that the formal series for \(w(q)\) at \(q=x_{i}\) lies in \((q-x_{i})^{n_{i}}\mathbb{C}[[(q-x_{i})^{m_{i}}]]\). Together with (E.6), this means that the formal Laurent series for \(\psi\) at \(q=x_{i}\) belongs to the space \[U_{i}:=\bigoplus_{j=0}^{m_{i}-1}(q-x_{i})^{j+\mu_{j}(x_{i})}[[(q-x_{i})^{m_{i }}]]\,.\] (E.11) On the other hand, our previous analysis (Proposition C.1) showed that all solutions to (E.8) should belong to \(U_{i}\). It follows that double-Bloch eigenfunctions must be of the form (E.7) and the Bethe ansatz equations must hold. 
Moreover, if \(\psi(q)\) is one such eigenfunction and \(\lambda\) is generic, then the functions \(\psi_{l}=\psi(\omega^{l}q)\) will be linearly independent eigenfunctions for the same \(z\). This proves that the Bethe ansatz method will produce a basis of eigenfunctions. _Remark E.2_.: For the Heun equation (\(m=2\)), Bethe ansatz in this form appeared in [63], see also [64] for a different form.
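As a small illustration (ours, not part of the original text) of the combinatorics behind (E.3)-(E.4), the following sketch computes \(n_{i}\) and the finite set \(S_{i}\) for a few sample choices of \(\mu_{j}(x_{i})\) lying in \(m_{i}\mathbb{Z}\) and summing to zero, and confirms the count \(|S_{i}|=n_{i}\).

```python
def bethe_gap_set(mus, m_i, N=200):
    """mus = (mu_0(x_i), ..., mu_{m_i-1}(x_i)), assumed to lie in m_i*Z and to sum to zero."""
    n_i = -min(j + mu for j, mu in enumerate(mus))       # -n_i is the smallest j + mu_j(x_i)
    covered = set()
    for j, mu in enumerate(mus):
        covered.update(range(n_i + j + mu, N, m_i))      # progression n_i + j + mu_j + m_i*Z_{>=0}
    S_i = [s for s in range(N) if s not in covered]      # its complement in Z_{>=0} (truncated at N)
    return n_i, S_i

for mus, m_i in [((4, -4), 2), ((3, 0, -3), 3), ((8, 0, -4, -4), 4)]:
    n_i, S_i = bethe_gap_set(mus, m_i)
    print(f"mu = {mus}:  n_i = {n_i},  S_i = {S_i},  |S_i| = {len(S_i)}")
```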
2310.00207
Detecting Unseen Multiword Expressions in American Sign Language
Multiword expressions present unique challenges in many translation tasks. In an attempt to ultimately apply a multiword expression detection system to the translation of American Sign Language, we built and tested two systems that apply word embeddings from GloVe to determine whether or not the word embeddings of lexemes can be used to predict whether or not those lexemes compose a multiword expression. It became apparent that word embeddings carry data that can detect non-compositionality with decent accuracy.
Lee Kezar, Aryan Shukla
2023-09-30T00:54:59Z
http://arxiv.org/abs/2310.00207v1
# Detecting Unseen Multiword Expressions in American Sign Language ###### Abstract Multiword expressions present unique challenges in many translation tasks. In an attempt to ultimately apply a multiword expression detection system to the translation of American Sign Language, we built and tested two systems that apply word embeddings from GloVe to determine whether or not the word embeddings of lexemes can be used to predict whether or not those lexemes compose a multiword expression. It became apparent that word embeddings carry data that can detect non-compositionality with decent accuracy. ## 1 Introduction Translating signed languages, such as American Sign Language (ASL), has been challenging for machine translation systems. This is particularly problematic considering that there are between several hundred thousand and several million people in the United States who are deaf or hard of hearing, or who might otherwise rely on ASL as their most convenient method of communication. Moreover, ASL translation requires many additional processing steps (Bragg et al., 2019). One fundamental processing task involves the detection of multiword expressions (MWEs) in ASL. Translating MWEs is a unique challenge (compared to translating single words) because their meaning is frequently derived from an idiomatic use of multiple lexemes. At the same time, those lexemes can be used individually and non-idiomatically, which is a much more straightforward task for translation. This ambiguity motivates the detection of MWEs, so as to avoid deep misunderstandings in language processing. The present research leverages these observations to detect compound MWEs based on their non-compositionality. While other MWEs such as multiword entities (e.g. _long short-term memory network_) can also exhibit non-compositionality, Constant et al. (2017) present three unique challenges for compounds: 1. **Inconsistency** There is a wide range of structures in which compounds can appear. 2. **Contextual Dependence** Frequently, it is the context that indicates that the target set of words is an MWE, rather than linguistic identifiers that surround compounds. 3. **Non-contiguity** In languages with non-obvious separations between lexemes, such as logographic and signed languages (e.g. written Chinese and ASL, respectively), compounds may be mistakenly parsed as separate tokens. For ASL in particular, there is the additional challenge of being a very low-resource environment, complicating the use of neural methods like transformers (Bragg et al., 2019). In the absence of a labeled dataset of ASL MWEs, we present a method for detecting separated English compounds (e.g. "home work") that leverages word embeddings and definitions, both of which can be adapted to low-resource environments (Xu et al., 2018). To evaluate this method, we form two hypotheses: 1. Co-occurrence across related contexts will be an effective signal for detecting non-contiguous compounds. 2. The definition of each lexeme that composes a compound MWE may contain additional information about the context in which a compound might appear. We find that our methods can detect compounds with high recall, indicating that many compounds are non-compositional in nature, and that definition-based detection is effective for low-resource environments. Future work will study the generalizability of these methods for glossed ASL. ## 2 Related Work Non-compositionality is one of the several properties of MWEs that can be used for detection. 
Non-compositionality describes how the meaning of an MWE may not be related to the meanings of the individual components that make it up. This property "is generally leveraged by models based on vector-space semantic similarity" (Constant et al., 2017). Many compounds can be non-compositional, and thus this property can be applied to detect them. In terms of word embeddings, if the lexemes of a pair co-occur regardless of their MWE status, then they should have relatively similar word embeddings (Kiela and Clark, 2014). For example, "video lag" can be expected to have high similarity, whereas a non-compositional MWE, such as "jet lag", should not have high similarity. Embeddings also have the advantage of generalizing to low-resource environments, such as ASL. Note that non-compositionality has the limitation of non-universality; that is, not all compounds are necessarily non-compositional (Constant et al., 2017). There have been several, often successful, attempts at accomplishing MWE detection through the application of vector space models. Two methods are the most notable. The first, presented by Kiela and Clark (2014), determines the compositionality of a phrase by substituting synonyms into that phrase, and it forms a conceptual foundation for the idea that word embeddings can indicate compositionality. The authors compare MWEs by computing the distance between an MWE's vector and the vectors of versions of the same MWE in which synonyms are substituted for its individual lexemes. Their results support the intuition that word embeddings can predict whether substituted versions of an MWE are "meaningful" or not, which suggests that word embeddings may carry information about the compositionality of a phrase. Another method, presented by Salehi et al. (2015), forms a conceptual foundation for comparing individual word embeddings. This work operates on principles similar to ours, except it leverages part-to-whole relationships, such as _snow vs. snowball_, instead of part-to-part relationships, such as _snow vs. ball_. One limitation of this work is that it does not study detection directly, so it is unclear how it would perform with non-MWEs. ASL translation is one occasion where this presents a problem; compounds will often be expressions like "RED CUT" ("tomato") with no specific signal to suggest that the two words should be understood together as a compound. Our methods, however, build on the work of Salehi et al. (2015) to assess the potential of a system that is blind to the frame of a potential compound and can simply run on all groups of co-occurring lexemes to determine whether the set of lexemes might be a compound. ## 3 Method To test our hypotheses, we introduce three different scores based on the cosine similarity between different elements of the two lexemes of a potential compound. Our first method compares the embeddings of the two lexemes directly; because compounds tend to be non-compositional, the lexemes of a compound are expected to have relatively low cosine similarity ("word similarity"). The second method leverages the capacity of the definitions of each lexeme to contain contextualizing terms; we therefore compare the embeddings of the two definitions in the same way ("definition similarity"). We define a definition embedding as the elementwise sum of the embeddings of words in a provided definition. Finally, we optionally remove stop words from definitions to improve consistency ("definition content similarity"). 
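As a concrete illustration of the three scores, here is a minimal sketch; it assumes gensim's pre-trained "glove-wiki-gigaword-100" vectors and WordNet first-sense glosses as stand-ins for the exact GloVe release and dictionary used in our experiments, and a pair is ultimately labelled a compound when its score falls below the calibrated threshold reported in Section 4.

```python
import numpy as np
import gensim.downloader as api
from nltk.corpus import wordnet, stopwords   # assumes the NLTK wordnet/stopwords data are installed

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe word vectors
STOP = set(stopwords.words("english"))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def definition_embedding(word, drop_stop_words=False):
    # element-wise sum of the embeddings of the words in the lexeme's first definition;
    # WordNet is an assumption here, standing in for the dictionary used in the experiments
    gloss = wordnet.synsets(word)[0].definition()
    tokens = [t for t in gloss.lower().split() if t in glove]
    if drop_stop_words:
        tokens = [t for t in tokens if t not in STOP]
    return np.sum([glove[t] for t in tokens], axis=0)

def scores(w1, w2):
    return {
        "word similarity": cosine(glove[w1], glove[w2]),
        "definition similarity": cosine(definition_embedding(w1), definition_embedding(w2)),
        "definition content similarity": cosine(definition_embedding(w1, True),
                                                definition_embedding(w2, True)),
    }

print(scores("home", "work"))   # candidate compound "home work"
print(scores("home", "play"))   # random pair
```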
To test each method, we used data from the Large Database of English Compounds (LADEC) to provide a sample of 8956 compounds, mixed with an equal number of random word pairs and an equal number of frequently co-occurring word pairs from the Brown Corpus (Gagne et al., 2019). The scores, then, must distinguish between "home work" (compound), "home play" (random), and "home chef" (frequent bigram). We compare the distribution of scores for random word pairs to that of known compounds to determine an appropriate threshold for classification, a step that generalizes to low-resource settings where a modest list of compounds can be procured by hand. After establishing this maximum-effectiveness threshold, we label each unseen pair and compute overall performance. ## 4 Results We evaluate our methods by computing recall, precision, and F1 scores for each score's ability to distinguish LADEC compounds from negative samples. Based on the aforementioned calibration, the classification thresholds are 0.78 for word similarity, 0.90 for definition similarity, and 0.46 for definition content similarity. Any score below the threshold for a given method is treated as a "compound" judgement, whereas any score above the threshold is treated as a "not a compound" judgement. It is important to note that these methods make compound judgements based solely on the non-compositionality of compounds, so it is not expected that all compounds, especially compositional compounds, will be detected. The findings are summarized in Tables 1 and 2. ## 5 Discussion We find substantial support for H1, and less for H2. The similarity of two compound-forming lexemes in a vector space proved reasonably accurate at indicating whether a word pair is a compound, so H1 demonstrates that embeddings built on the distributional hypothesis, in which words that are likely to appear together receive embeddings that are closer together, are a useful way to predict compounds. H2 was supported less: the second method was no more accurate than the first, so splitting a lexeme into the individual words of its definition did not provide important additional context about how the lexeme appears in a sentence or paragraph. In fact, since Method 1 proved more accurate while using only the location of each lexeme, rather than the sum of the words in each lexeme's first definition, our experiments suggest that the 100-dimensional embeddings provide more significant information about the context of a lexeme, and hence about a compound's compositionality, than the words in a lexeme's definition. We also found that removing stop words hardly made the system more accurate in determining whether a word pair is a compound, which suggests that the stop words in a definition are only marginally important. These results stayed consistent whether we used the random word pairs or the co-occurring word pairs as the negative samples. However, the higher precision obtained with co-occurring negative samples suggests that compounds may be easier to detect in contextualized language, because meaningless word pairs with no shared context are more likely to appear non-compositional, much like some compounds themselves.
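As a concrete rendering of the calibration-and-evaluation protocol of Sections 3 and 4, the sketch below picks a threshold from the scores of known compounds and negative pairs, labels a pair as a compound when its score falls below that threshold, and computes recall, precision and F1. The exact calibration criterion is not spelled out in the text, so sweeping candidate thresholds and keeping the one that maximizes F1 on a small calibration split is an assumption, as are the function names.

```python
# Sketch of threshold calibration and evaluation for any of the three scores.
# Convention from Section 4: score below the threshold => "compound".
import numpy as np


def calibrate_threshold(pos_scores, neg_scores):
    """Sweep thresholds and return the one maximizing F1 on the calibration data."""
    best_t, best_f1 = 0.0, -1.0
    for t in np.linspace(-1.0, 1.0, 401):          # cosine similarities lie in [-1, 1]
        tp = sum(s < t for s in pos_scores)        # compounds correctly flagged
        fp = sum(s < t for s in neg_scores)        # non-compounds wrongly flagged
        fn = len(pos_scores) - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t


def evaluate(threshold, pos_scores, neg_scores):
    """Recall, precision and F1 of the 'score < threshold => compound' rule."""
    tp = sum(s < threshold for s in pos_scores)
    fp = sum(s < threshold for s in neg_scores)
    fn = len(pos_scores) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1
```

Calibrating on a small labelled sample and evaluating on the remaining pairs mirrors the procedure described above for obtaining the numbers in Tables 1 and 2.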
The recall remained the same for both the random word pair and co-occurring word pair negative samples, since recall compares true positives to all positives and the positive samples in our tests were the same alongside both sets of negative samples (the compounds presented in Gagne et al. (2019)). The lower precision of the definition-based methods, however, suggests that they cast a wider net in detecting non-compositionality, because the summed individual words of the definitions do not contain the same key contextualizing data as the embeddings of the words themselves. Our results provide strong evidence that word embeddings can be applied to assess the compositionality of lexeme pairs and thereby to detect compounds. However, our results are certainly limited. \begin{table} \begin{tabular}{|c|c c c|} \hline Method & Recall & Precision & F1 Score \\ \hline word similarity & 0.840 & 0.596 & 0.697 \\ \hline definition similarity & 0.847 & 0.481 & 0.613 \\ \hline definition content similarity & 0.859 & 0.483 & 0.619 \\ \hline \end{tabular} \end{table} Table 1: Performance of the methods using random word pair negative samples. \begin{table} \begin{tabular}{|c|c c c|} \hline Method & Recall & Precision & F1 Score \\ \hline word similarity & 0.840 & 0.754 & 0.795 \\ \hline definition similarity & 0.847 & 0.527 & 0.649 \\ \hline definition content similarity & 0.859 & 0.532 & 0.657 \\ \hline \end{tabular} \end{table} Table 2: Performance of the methods using co-occurring word pair negative samples. The word embeddings that we used are from Pennington et al. (2014). These GloVe embeddings are trained on global word-word co-occurrence statistics and capture many semantic regularities, but they are not directly intended to detect compositionality between two lexemes, since that requires context. Additionally, our models have only been calibrated on and applied to a very limited set of data; we depend on the Brown Corpus for random samples and on LADEC for true positive compounds, and each of those datasets has its own biases, such as the likelihood of certain word pairs appearing in the news-heavy content of the Brown Corpus (Bird et al., 2009). Furthermore, our results should be interpreted with one key distinction in mind: these systems were tested on the set of all compounds in LADEC, which includes both compositional and non-compositional compounds. Because the system is premised on detecting non-compositionality rather than compositional compounds, this distinction may greatly affect our results. Because LADEC contains data about the likely compositionality of its compounds, it would be possible to test these systems on non-compositional compounds only; however, the system will ultimately be applied to detect all compounds in ASL translation, so we tested it on all compounds. This is important to keep in mind when analyzing our data, but our conclusions remain valid because the population of compounds in LADEC is, in general, more non-compositional than average word pairs, since many compounds exist precisely because of their non-compositionality. ## 6 Future Work and Conclusion The distributional hypothesis used in producing word embeddings provides a useful way to detect compounds through the closeness of embedding locations, which may indicate how descriptive one lexeme is of the other and thus determine compositionality.
Individual lexeme definitions do not appear to contain contextual information beyond what the embeddings already capture, so they do not make the system any more useful in determining the compositionality of a pair of lexemes, and removing stop words has no significant effect on that usefulness. Overall, word embeddings have proven to be a useful tool for determining the compositionality of a pair of lexemes. We hope that these findings contribute to a more robust MWE detection mechanism that can find multiword expressions within a set of sentences. While this work assumes that input MWEs have already been split into their component lexemes, the tool can be applied more practically alongside a method that isolates the lexemes of an MWE, even when they are combined into a single word. Ultimately, we hope to apply this mechanism to ASL translation, where compounds are already separated into individual morphemes. Additionally, testing the system on compounds that are explicitly judged to be non-compositional could give us better insight into why these methods succeed.
2302.00016
Two Schwarzschild-like black holes balanced by their scalar hair
We show that, unlike vacuum General Relativity, Einstein-scalar theories allow balanced static, neutral, asymptotically flat, double-black hole solutions, for scalar field models minimally coupled to gravity, with appropriate self-interactions. These are scalar hairy versions of the double-Schwarzschild (or Bach-Weyl) solution, but regular on and outside the two (topologically spherical) horizons. The balancing repulsive force is provided by the scalar field. An explicit illustration is presented, using a Weyl-type construction adapted to numerical solutions, requiring no partial linearisation, or integrability structure, of the Einstein-scalar equations. Fixing the couplings of the model, the balanced configurations form a one-parameter family of solutions, labelled by the proper distance between the black holes.
Carlos A. R. Herdeiro, Eugen Radu
2023-01-31T19:00:01Z
http://arxiv.org/abs/2302.00016v1
###### Abstract We show that, unlike vacuum General Relativity, Einstein-scalar theories allow _balanced_ static, neutral, asymptotically flat, double-black hole solutions, for scalar field models minimally coupled to gravity, with appropriate self-interactions. These are scalar hairy versions of the double-Schwarzschild (or Bach-Weyl) solution, _but regular_ on and outside the two (topologically spherical) horizons. The balancing repulsive force is provided by the scalar field. An explicit illustration is presented, using a Weyl-type construction adapted to numerical solutions, requiring no partial linearisation, or integrability structure, of the Einstein-scalar equations. Fixing the couplings of the model, the balanced configurations form a one-parameter family of solutions, labelled by the proper distance between the black holes. **Two Schwarzschild-like black holes** **balanced by their scalar hair** **Carlos A. R. Herdeiro and Eugen Radu** \({}^{\ddagger}\)Departamento de Matematica da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA), Campus de Santiago, 3810-183 Aveiro, Portugal ###### Contents * 1 Introduction * 2 Einstein-scalar models and the vacuum Weyl construction * 2.1 The action and equations * 2.2 Canonical Weyl-coordinates * 2.3 Scalar-vacuum * 2.4 Vacuum: Schwarzschild and rod-structure * 2.5 Vacuum: \(\mathbb{Z}_{2}\)-symmetric Bach-Weyl * 3 A Weyl-type construction adapted to numerics * 3.1 The rod structure and quantities of interest * 3.2 The boundary conditions for the 2BHs construction * 3.3 The single (spherical) BH limit in Weyl-type coordinates * 4 An illustration: two BHs balanced by their scalar hair * 4.1 The scalar field potential and scaling properties * 4.2 The balanced 2BHs system with scalar hair * 5 Further remarks * A Field equations in the parameterization (3.1) * B A new coordinate system and details on the numerics * C Numerical construction of the 2RNBHs solution ## 1 Introduction A remarkable solution of "vacuum" General Relativity (GR) is the Bach-Weyl (BW) or double-Schwarzschild metric, describing two static, neutral black holes (BHs) placed at some non-zero distance, in a four dimensional, asymptotically flat spacetime [1].1 The gravitational attraction between the BHs is unbalanced; as a result, conical singularities along the symmetry axis are mandated by the field equations [2]. Despite such naked singularities, this solution has a well defined gravitational action. Moreover, the Bekenstein-Hawking area law still holds when using standard Euclidean gravity thermodynamical arguments [3]. Footnote 1: The quotes in “vacuum” emphasise that even if one is solving the vacuum Einstein equations, the existence of conical singularities implies localised sources. The precise location of the conical singularity of the BW solution is a matter of choice. It can either be chosen in between the two BHs - in which case it is interpreted as a _strut_ - or connecting either BH to infinity - in which case it is interpreted as two _strings_. In order for the spacetime to be asymptotically flat, without any conical singularities at spatial infinity, one often takes the former viewpoint. Then, the strut energy is interpreted as the interaction energy between the BHs, while its pressure prevents the gravitational collapse of the system [4]. On the one hand, the BW solution can be generalised within "vacuum" GR in different ways.
One way is to introduce \(N\) (instead of 2) colinear, neutral, static BHs, leading to the Israel-Kahn solution [5]. However, this does not solve the need for a conical singularity, except in the \(N\to\infty\) limit [6], which has a natural interpretation as a BH in a compactified spacetime, rather than an asymptotically flat configuration. Another way is to place the BW solution in an appropriate external gravitational field [7, 8]. Such solution ceases, again, to be asymptotically flat; in fact it is plagued by naked curvature singularities at spatial infinity.2 A final way is to make the BHs _spin_. The double-Kerr solution can be constructed via elaborate solution generating techniques, such as the inverse scattering method [9]. For co-rotating BHs, with aligned spins, the spin-spin interaction is repulsive [10], introducing a plausible balancing effect. It turns out, however, that this extra interaction cannot balance the system, for objects covered by an event horizon - see \(e.g.\)[11]. A physical explanation has been put forward in [12]. Footnote 2: In [7] a _local_ perspective is taken, to argue on the physical merits of such solutions. On the other hand, the BW solution can be generalised within _electrovacuum_ GR to yield balanced, asymptotically flat configurations. This involves making the BHs _extremal_, \(i.e.\) with their maximal charge to mass ratio. The corresponding balanced BHs fall into the Majumdar-Papapetrou class of metrics [13, 14], describing \(N\) extremal Reissner-Nordstrom BHs in equilibrium [15], which are regular on and outside the event horizon and asymptotically flat. Such solutions can also be generalized to Einstein-Maxwell dilatonic theories - see \(e.g.\)[16, 17, 18]. There are also non-asymptotically flat charged BHs in equilibrium, when immersed in a Melvin-type universe [19] or in a de Sitter Universe [20]; in the latter case the BHs are co-moving with the cosmological expansion. To the best of our knowledge, no static, electro-magnetically neutral, asymptotically flat BHs in equilibrium are known in four spacetime dimensions.3 There is no reason, however, to expect this to be a fundamental feature of relativistic gravity. Conceptually, the electromagnetic repulsive interaction that allows balance in some of the aforementioned solutions could, in principle, be replaced by another repulsive interaction, namely scalar. Technically, however, one faces important obstacles. Footnote 3: In higher dimensions, there are vacuum multi-BH solutions, like the black Saturn [21], allowed by the non-trivial topology of the event horizons permitted by higher dimensional vacuum gravity [22]. These solutions are stationary, rather than static. For solutions akin to the Majumdar-Papapetrou solution (multi-BHs experiencing a "no-force" condition), common in supergravity theories (see \(e.g.\)[23]), under an appropriate ansatz one observes a _full_ linearization of the Einstein-matter equations. This allows a superposition principle that corresponds to adding multiple BHs and it is intimately connected with supersymmetry [24, 25]. For solutions akin to the double-Schwarzschild metric, under an appropriate ansatz - corresponding to the Weyl formalism [26], one observes a _partial_ linearization of the full Einstein equations. 
This still allows a superposition principle for a specific metric function, which in effect corresponds to adding multiple BHs, with the remaining metric functions obeying non-linear equations, which can, nonetheless, be straightforwardly solved once the linear metric function is known. The Weyl formalism comes with an intuitive diagrammatic construction - the rod structure (see \(e.g.\)[27, 28]) - that permits constructing new solutions. The static Weyl solutions constructed in this way, moreover, serve as natural seeds for the inverse scattering technique [29], which can add rotation (and other properties, such as NUT charges) to the solutions. For scalar fields with canonical kinetic terms, minimally coupled to Einstein's theory, possibly with some self-interacting potentials - hereafter _Einstein-scalar theories_ -, the Weyl construction has not been made to work, except in the case of free, massless scalar fields [30, 31]. In the absence of a methodology to obtain exact multi-BH solutions, one may approach such configurations numerically, as solutions of partial differential equations (PDEs) with suitable boundary conditions. This paper aims at proposing a general framework for the study of static multi-BH systems with matter fields, numerically. As an application, we shall report solutions describing two balanced BHs (hereafter dubbed _2BHs_) in a specific Einstein-scalar theory. The construction we propose is, in principle, more general than scalar matter models. The choice of a scalar field for the matter content is mainly motivated by its simplicity, both technical and conceptual. At the same time, the influential theorem by Bekenstein [32] forbidding the existence of (single) BHs with scalar hair can be circumvented in different ways [33]. A simple way is to allow a scalar field potential which is not strictly positive, such that the energy conditions assumed by the theorem are violated [33]. Such scalar fields can provide an extra repulsive interaction, balancing a non-trivial scalar field profile outside the horizon. (Single) BHs with scalar hair are allowed by this mechanism, \(e.g.\)[34, 35, 36, 37]. One may thus anticipate the same mechanism to work for the 2BHs case as well. A scalar field potential which can take negative values in the region between the horizons, moreover, could provide the extra (repulsive) interaction to balance two neutral static BHs, curing the conical singularity of the BW solution. Indeed, this is confirmed by the results in this work, where we present numerical evidence for the existence of balanced 2BHs solutions in this setting. A central point in our approach is that the rod structure of the BW solution can be used also for such Einstein-matter configurations, in particular for the 2BHs system with scalar hair, even though the partial linearization of the Einstein-scalar equations and the vacuum Newtonian interpretation of the rods cease to be valid. The application given in this work will focus on 2BH systems in thermal equilibrium, \(i.e.\) with two identical BHs placed at some distance. This paper is organized as follows. In Section 2 we present the Einstein-scalar model we shall work with. The canonical Weyl construction is attempted with this model, to observe the known obstructions. Then, we specialize to vacuum to discuss the Schwarzschild, double-Schwarzschild (or BW) solutions and the rod structure.
In Section 3 we present a Weyl-type construction adapted to numerics, readressing the rod structure, discussing the boundary conditions and the single BH limit. In Section 4 we specialize to a potential that allows BHs to have scalar hair and thus that may allow such hair to balance two BHs. We then report the results for such 2BHs balanced system. Some final remarks close this paper, which also contains three appendices with technical details, together with a numerical construction of the double-Reissner-Nordstrom solution (2RNBHs), as a test of the proposed numerical scheme. ## 2 Einstein-scalar models and the vacuum Weyl construction ### The action and equations We consider the Einstein-scalar model described by the following action \[\mathcal{S}=\frac{1}{4\pi}\int d^{4}x\sqrt{-g}\left[\frac{R}{4G}-\frac{1}{2}g ^{\alpha\beta}\left(\Phi^{*}_{,\,\alpha}\Phi_{,\,\beta}+\Phi^{*}_{,\,\alpha} \Phi_{,\,\beta}\right)-U(|\Phi|)\right]\, \tag{2.1}\] where \(G\) is the gravitational constant, \(R\) is the Ricci scalar associated with the spacetime metric \(g_{\alpha\beta}\), which has determinant \(g\), \(\Phi\) is a complex scalar field with \({}^{*}\) denoting complex conjugation and \(U(|\Phi|)\) denotes the scalar potential. The scalar field mass is defined by \(\mu^{2}\equiv(d^{2}U/d|\Phi|^{2})\big{|}_{\Phi=0}\). The Einstein-scalar field equations, obtained by varying (2.1) with respect to the metric and scalar field are, respectively, \[E_{\alpha\beta}\equiv R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}R-2GT_{\alpha \beta}=0\,\ \ \ \ \ \ \ \ \nabla_{\alpha}\nabla^{\alpha}\Phi=\frac{dU}{d\,|\Phi|}\, \tag{2.2}\] where \(T_{\alpha\beta}\) is the energy-momentum tensor of the scalar field \[T_{\alpha\beta}=\partial_{\alpha}\Phi^{*}\partial_{\beta}\Phi+\partial_{\beta} \Phi^{*}\partial_{\alpha}\Phi-g_{\alpha\beta}\left[\frac{1}{2}g^{\gamma\delta}( \partial_{\gamma}\Phi^{*}\partial_{\delta}\Phi+\partial_{\delta}\Phi^{*} \partial_{\gamma}\Phi)+U(|\Phi|)\right]. \tag{2.3}\] ### Canonical Weyl-coordinates The configurations we shall consider herein are static and axially symmetric, admitting two orthogonal, commuting, non-null Killing Vector Fields (KVFs). In what follows we take their line element written in coordinates adapted these symmetries, \((t,\rho,z,\varphi)\), such that \(\partial_{t}\) and \(\partial_{\varphi}\) are KVFs; it reads \[ds^{2}=-e^{2{\cal U}(\rho,z)}dt^{2}+e^{-2{\cal U}(\rho,z)}\left[e^{2{\cal K}( \rho,z)}(d\rho^{2}+dz^{2})+e^{2{\cal C}(\rho,z)}\rho^{2}d\varphi^{2}\right]\, \tag{2.4}\] thus introducing three unknown functions \({\cal U}\), \({\cal K}\) and \({\cal C}\) of the non-Killing coordinates \((\rho,z)\), where \(0\leqslant\rho<\infty\), \(-\infty<z<\infty\) and \(0\leqslant\varphi<2\pi\). For the scalar field \(\Phi\), we take a generic ansatz \[\Phi=\phi(\rho,z)e^{im\varphi}\, \tag{2.5}\] with \(\phi\) a real function - the scalar field amplitude -, and \(m\in\mathbb{Z}\). 
Appropriate combinations of the Einstein equations, \(E^{t}_{t}=0\), \(E^{\rho}_{\rho}+E^{z}_{z}=0\) and \(E^{\varphi}_{\varphi}=0\), yield the following set of equations for the functions \({\cal U}\), \({\cal K}\), and \({\cal C}\), \[\Delta{\cal U}+(\nabla{\cal U})\cdot(\nabla{\cal C})=-2Ge^{2({\cal K }-{\cal U})}U(\phi)\,\] \[\Delta{\cal K}-\frac{{\cal K}_{,\rho}}{\rho}+(\nabla{\cal U})^{2} =-2G\left[(\nabla\phi)^{2}+e^{2({\cal K}-{\cal U})}U(\phi)-\frac{e^{2({\cal K }-{\cal C})}m^{2}\phi^{2}}{\rho^{2}}\right]\,\] \[\Delta{\cal C}+\frac{{\cal C}_{,\rho}}{\rho}+(\nabla{\cal C})^{2} =-4Ge^{2({\cal K}-{\cal U})}\left[U(\phi)+\frac{e^{2({\cal U}-{\cal C})}m^{2} \phi^{2}}{\rho^{2}}\right]. \tag{2.6}\] The equation for the scalar field amplitude \(\phi\) is \[\Delta\phi+(\nabla{\cal C})\cdot(\nabla\phi)=\frac{1}{2}e^{2({\cal K}-{\cal U })}\frac{dU(\phi)}{d\phi}+\frac{e^{2({\cal K}-{\cal C})}m^{2}\phi}{\rho^{2}}\, \tag{2.7}\] We have defined, acting on arbitrary functions \({\cal F}(\rho,z)\) and \({\cal G}(\rho,z)\), \[(\nabla{\cal F})\cdot(\nabla{\cal G})\equiv{\cal F}_{,\rho}{\cal G}_{,\rho}+{ \cal F}_{,z}{\cal G}_{,z}\,\qquad\Delta{\cal F}\equiv{\cal F}_{,\rho\rho}+{\cal F}_{,zz}+\frac{1}{\rho}{ \cal F}_{,\rho}. \tag{2.8}\] These operators are the covariant operators on an auxiliary Euclidean 3-space in standard cylindrical coordinates, \(ds^{2}_{\rm auxiliary}=d\rho^{2}+\rho^{2}d\varphi^{2}+dz^{2}\). The remaining Einstein equations \(E^{z}_{\rho}=0,\ E^{\rho}_{\rho}-E^{z}_{z}=0\) yield two constraints, \[{\cal C}_{,\rho\rho}+\frac{2{\cal C}_{,\rho}}{\rho}-{\cal C}_{, zz}+2({\cal U}_{,\rho}^{2}-{\cal U}_{,z}^{2})+{\cal C}_{,\rho}^{2}-{\cal C}_{,z}^{2} -2({\cal C}_{,\rho}{\cal K}_{,\rho}-{\cal C}_{,z}{\cal K}_{,z})-\frac{2{\cal K} _{,\rho}}{\rho}+4G(\phi_{,\rho}^{2}-\phi_{,z}^{2})=0\,\] \[{\cal C}_{,\rho z}+{\cal C}_{,\rho}{\cal C}_{,z}+2{\cal U}_{,\rho}{ \cal U}_{,z}-{\cal C}_{,\rho}{\cal K}_{,z}-{\cal K}_{,\rho}{\cal C}_{,z}+ \frac{{\cal C}_{,z}}{\rho}-\frac{{\cal K}_{,z}}{\rho}+4G\phi_{,\rho}\phi_{,z}=0. \tag{2.9}\] Following [38], we note that setting \(E^{t}_{t}=E^{\varphi}_{\varphi}=E^{\rho}_{\rho}+E^{z}_{z}=0\) in \(\nabla_{\alpha}E^{\alpha\rho}=0\) and \(\nabla_{\alpha}E^{\alpha z}=0\), we obtain a set of Cauchy-Riemann relations. Thus the weighted constraints satisfy Laplace equations, and the constraints are fulfilled, when one of them is satisfied on the boundary and the other at a single point [38]. ### Scalar-vacuum In the absence of a scalar potential, \(i.e.\)\(U(\phi)=0\), and for a real scalar (\(m=0\)), the third equation in (2.6) allows us to take \({\cal C}=0\). The problem then has only three unknown functions, \({\cal U}\), \({\cal K}\), and \(\phi\). Two of them obey linear equations. Indeed, the first equation in (2.6) and the scalar equation become Laplace-type equations \[\Delta{\cal U}=0\,\qquad\Delta\phi=0. \tag{2.10}\] This is the aforementioned partial linearization of the Einstein equations. It is easy to see from (2.6) that this linearization is lost for \(m\neq 0\) or in the presence of a potential \(U(\phi)\), which leads to a nuisance (from the viewpoint of the Weyl construction) source term. The source term is, however, absent for a free, massless real scalar. The Weyl construction has in fact been explored in that case [30, 31]. Instead of determining the remaining function \({\cal K}(\rho,z)\) from the second eq. 
in (2.6), it can be determined from the two constraint equations (2.9), which reduce to \[{\cal K}_{,\rho}=\rho({\cal U}_{,\rho}^{2}-{\cal U}_{,z}^{2})+4G\rho(\phi_{, \rho}^{2}-\phi_{,z}^{2})\,\qquad\qquad{\cal K}_{,z}=2\rho{\cal U}_{,\rho}{\cal U}_{,z}+4G\rho\phi_{, \rho}\phi_{,z}. \tag{2.11}\] Thus, once \({\cal U},\phi\) are determined by solving the linear equations (2.10), then \({\cal K}\) is determined by solving two line integrals from (2.11). With what concerns BHs, this scalar-vacuum Weyl construction has not been rewarding, as in scalar vacuum BH hair is forbidden [32, 33]. As such, let us focus on the particular case of vacuum (\(\phi=0\)). ### Vacuum: Schwarzschild and rod-structure Setting \(\phi=0\) in (2.10) and (2.11), the simplest solution has \({\cal U}=0={\cal K}\), which is Minkowski spacetime. We are now interested in asymptotically Minkowski solutions. The Schwarzschild BH with mass \(M\) appears in this Weyl construction as the Newtonian potential4\({\cal U}\) of an infinitely thin rod of length \(2M\) along the \(z\)-axis. Placing this rod symmetrically w.r.t. \(z=0\) between \(z=-M\) and \(z=M\), this means that Footnote 4: The Laplace eq. \(\Delta{\cal U}=0\) has no source, but it can be regarded as the Poisson equation of Newtonian gravity with sources along the \(z\) axis only, wherein the Laplace operator in cylindrical coordinates is not defined. \[e^{2{\cal U}}=\frac{r_{+}+r_{-}-2M}{r_{+}+r_{-}+2M}\,\qquad r_{\pm}=\sqrt{ \rho^{2}+\left(z\pm M\right)^{2}}. \tag{2.12}\] Additionally, from (2.11) \[e^{2{\cal K}}=\frac{(r_{+}+r_{-})^{2}-(2M)^{2}}{4r_{+}r_{-}}. \tag{2.13}\] The standard Schwarzschild coordinates \((t,r,\theta,\varphi)\) can be recovered from the transformation \((\rho,z)\to(r,\theta)\) via \[\rho=\sqrt{r^{2}-2Mr}\sin\theta\,\qquad z=(r-M)\cos\theta. \tag{2.14}\] A large class of solutions in Weyl coordinates are characterized by the boundary conditions on the \(z-\)axis, known as the _rod-structure_. That is, the \(z-\)axis is divided into \(N\) intervals (called rods of the solution), \([-\infty,z_{1}]\), \([z_{1},z_{2}]\),..., \([z_{N-1},\infty]\). A necessary condition for a regular solution is that only one of the functions \(g_{tt}(0,z)\) or \(g_{\varphi\varphi}(0,z)\) becomes zero for a given rod (except for isolated points between the intervals). The rods are timelike or spacelike: * Event horizons are described by timelike rods. They are sets of fixed points of the \(\partial_{t}\) KVF with \(g_{tt}(0,z)=0\) and \[\lim_{\rho\to 0}\frac{g_{tt}(\rho,z)}{\rho^{2}}<0\.\] (2.15) * The symmetry axes are described by spacelike rods. They are sets of fixed points of the \(\partial_{\varphi}\) KVK, with \(g_{\varphi\varphi}(0,z)=0\) and \[\lim_{\rho\to 0}\frac{g_{\varphi\varphi}(\rho,z)}{\rho^{2}}>0\.\] (2.16) Fig. 1 (left panel) exhibits the rod structure of the Schwarzschild solution. ### Vacuum: \(\mathbb{Z}_{2}\)-symmetric Bach-Weyl The rod structure allows an intuitive, diagramatic-based reconstruction of the metric. For instance, one can easily imagine the rod structure of the double-Schwarzschild (or BW) solution - Fig. 1 (right panel). 
Choosing the location of the horizons \(\mathbb{Z}_{2}\)-symmetric with respect to \(z=0\), both at \(\rho=0\), with the "lower" one at \(-\Delta z/2-\bar{\mu}\leqslant z\leqslant-\Delta z/2\) and the "upper" one at \(\Delta z/2\leqslant z\leqslant\Delta z/2+\bar{\mu}\), the \(g_{tt}\) metric function is defined by the corresponding Newtonian potential, which reads: \[e^{2\mathcal{U}}=\frac{(r_{1}+r_{2}-\bar{\mu})}{(r_{1}+r_{2}+\bar{\mu})}\frac{ (r_{3}+r_{4}-\bar{\mu})}{(r_{3}+r_{4}+\bar{\mu})}\, \tag{2.17}\] where \[r_{1}=\sqrt{\rho^{2}+\left(z-\frac{\Delta z}{2}-\bar{\mu}\right) ^{2}},\ \ r_{2}=\sqrt{\rho^{2}+\left(z-\frac{\Delta z}{2}\right)^{2}}, \tag{2.18}\] \[r_{3}=\sqrt{\rho^{2}+\left(z+\frac{\Delta z}{2}\right)^{2}},\ \ r_{4}= \sqrt{\rho^{2}+\left(z+\frac{\Delta z}{2}+\bar{\mu}\right)^{2}}.\] Again, from (2.11) \[e^{2\mathcal{K}}=\left(\frac{\Delta z}{\Delta z+\bar{\mu}}\right)^{2}\left( \frac{(r_{1}+r_{2})^{2}-\bar{\mu}^{2}}{4r_{1}r_{2}}\right)\left(\frac{(r_{3}+ r_{4})^{2}-\bar{\mu}^{2}}{4r_{3}r_{4}}\right)\left(\frac{(\Delta z+\bar{\mu})r_{1}+( \Delta z+2\bar{\mu})r_{2}-\bar{\mu}r_{4}}{\Delta z\ r_{1}+(\Delta z+\bar{\mu} )r_{2}-\bar{\mu}r_{3}}\right)^{2}. \tag{2.19}\] This 2-parameter solution5 describes two equal BHs in thermodynamical equilibrium - with the same Hawking temperature and horizon area. The parameter \(\bar{\mu}\) is the ADM mass of this spacetime (twice the individual BH masses): Footnote 5: We follow the conventions in [4]. \[M=\bar{\mu}>0. \tag{2.20}\] The parameter \(\Delta z\geqslant 0\) provides the coordinate distance between the two horizons along the \(z\) axis. Figure 1: Rod structure, encoding the boundary conditions at \(\rho=0\) along the \(z\) axis, for the Schwarzschild solution (left) and \(\mathbb{Z}_{2}\)-symmetric BW solution (right). This system has a deficit angle along the section in between the BHs, \(i.e.\) for \(-\Delta z/2\leqslant z\leqslant\Delta z/2\), with a strength \(\delta\), as defined by the relation (3.10) below, given by: \[\frac{\delta}{2\pi}=-\frac{\bar{\mu}^{2}}{(\Delta z+2\bar{\mu})\Delta z}<0. \tag{2.21}\] The proper distance between the BHs is \[L=\int_{-\Delta z/2}^{\Delta z/2}dzf_{1}(0,z)=\Delta z\left(\frac{u+4}{u+2} \right)^{2}E(\bar{m})\,\quad\mbox{where}\ \ u=\frac{2\Delta z}{\bar{\mu}}\,\ \ \bar{m}=\left(\frac{u}{u+4}\right)^{2}\, \tag{2.22}\] \(E(\bar{m})\) being the complete elliptic integral of the second kind. The event horizon area of each BH and the corresponding Hawking temperature are: \[A_{H}=4\pi\bar{\mu}^{2}\frac{\Delta z+2\bar{\mu}}{\Delta z+\bar{\mu}}\,\qquad T _{H}=\frac{1}{4\pi\bar{\mu}}\frac{\Delta z+\bar{\mu}}{\Delta z+2\bar{\mu}}. \tag{2.23}\] For \(\Delta z=0\) the two BH horizons coalesce, and we are left the (single) Schwarzschild BH in Weyl coordinates. Also, the solution trivializes as \(\bar{\mu}\to 0\), a limit which corresponds to flat spacetime. We remark that the solution above captures already all the basic features of its Israel-Kahn generalization [5], with \(N\) horizons placed arbitrarily on a common symmetry axis. However, the \(N>2\) metric functions get increasingly more involved, albeit with an underlying common structure. ## 3 A Weyl-type construction adapted to numerics ### The rod structure and quantities of interest The Weyl construction does not carry through to generic matter models, as illustrated in the previous section for the Einstein-scalar models with a potential. 
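As a quick numerical illustration of these closed-form expressions (not part of the solution-generating machinery of the paper), the sketch below evaluates (2.21)-(2.23) in units \(G=1\) and checks that the coalescence limit \(\Delta z\to 0\) reproduces the single Schwarzschild values; the argument \(\bar{m}\) in (2.22) is read here as the parameter of scipy.special.ellipe, which is an assumption about the elliptic-integral convention.

```python
# Quantities of the Z2-symmetric Bach-Weyl (double-Schwarzschild) solution,
# eqs. (2.20)-(2.23), in units G = 1. Illustrative sketch only.
import numpy as np
from scipy.special import ellipe


def bw_quantities(mu_bar, delta_z):
    """Conical strength, proper distance, horizon area and temperature."""
    delta = -2 * np.pi * mu_bar**2 / ((delta_z + 2 * mu_bar) * delta_z)        # (2.21)
    u = 2 * delta_z / mu_bar
    m_bar = (u / (u + 4))**2
    L = delta_z * ((u + 4) / (u + 2))**2 * ellipe(m_bar)                        # (2.22)
    A_H = 4 * np.pi * mu_bar**2 * (delta_z + 2 * mu_bar) / (delta_z + mu_bar)   # (2.23)
    T_H = (delta_z + mu_bar) / (4 * np.pi * mu_bar * (delta_z + 2 * mu_bar))    # (2.23)
    return delta, L, A_H, T_H


mu_bar = 1.0
delta, L, A_H, T_H = bw_quantities(mu_bar, delta_z=2.0)
print(delta < 0)                        # a strut (conical excess) between the two BHs
print(np.isclose(T_H * A_H, mu_bar))    # consistency of (2.20) and (2.23): T_H * A_H = mu_bar = M

# Coalescence limit dz -> 0: a single Schwarzschild BH of mass M = mu_bar,
# with T_H -> 1/(8 pi M) and total horizon area 2 A_H -> 16 pi M^2.
_, _, A_H0, T_H0 = bw_quantities(mu_bar, delta_z=1e-9)
print(np.isclose(T_H0, 1 / (8 * np.pi * mu_bar)))
print(np.isclose(2 * A_H0, 16 * np.pi * mu_bar**2))
```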
In this work, however, we argue that the Weyl coordinates in (2.4) together with the rod structure of the vacuum multi-BH solutions can be used to construct physically relevant solutions beyond the simplest theories where the Weyl construction allows a partial linearization of the field equations. _A priori_, this is not guaranteed, and the validity of the metric ansatz could be proven only _a posteriori_, after solving the field equations. Of course, the elegant Newtonian interpretation of the rods is lost in the non-(scalar, electro)vacuum case; but a working method allows exploring the physics of the non-linear solutions. Even though one could hold on to the canonical metric parameterization in the Weyl ansatz (2.4), a numerical implementation, namely the boundary conditions at the horizons, is facilitated by taking the simpler parameterization of the metric functions \[ds^{2}=-f_{0}(\rho,z)dt^{2}+f_{1}(\rho,z)(d\rho^{2}+dz^{2})+f_{2}(\rho,z)d \varphi^{2}\ ; \tag{3.1}\] in other words, we relabel \[f_{0}(\rho,z)\equiv e^{2{\cal U}(\rho,z)}\,\qquad f_{1}(\rho,z)\equiv e^{-2{ \cal U}(\rho,z)+2{\cal K}(\rho,z)}\,\qquad f_{2}(\rho,z)\equiv\rho^{2}e^{-2{\cal U}(\rho,z)+2{ \cal C}(\rho,z)}. \tag{3.2}\] For completeness, the field equations in terms of this parameterization are given in Appendix A. With this parameterization the following expressions of the metric functions and scalar field near the \(z-\)axis are compatible with the Einstein-scalar field equations: \[f_{i}(\rho,z)=f_{i0}(z)+\rho^{2}f_{i2}(z)+\ldots\,\qquad\phi(\rho,z)=\phi_{0}(z )+\rho^{2}\phi_{2}(z)+\ldots\, \tag{3.3}\] where the functions \(f_{i0}(z)\), \(f_{i2}(z)\) (with \(i=0,1,2\)) and \(\phi_{0}(z)\), \(\phi_{2}(z)\) satisfy a complicated set of nonlinear second order ordinary differential equations. Our main assumption (supported by the results reported below) is that, similarly to the vacuum case, the \(z-\)axis is divided into \(N\) intervals: the rods of the solution. Moreover, we assume that, except for isolated points between the rods, only one of the functions \(f_{0}(0,z)\) or \(f_{2}(0,z)\) becomes zero for a given rod, while the remaining functions stay finite at \(\rho=0\), in general. Also, one imposes the condition that the union of the \(N\) intervals covers the entire \(z\)-range. There are again timelike and spacelike rods, which we now discuss separately. A finite timelike rod corresponds to an event horizon, which we assume is located for \(z_{H_{1}}\leqslant z\leqslant z_{H_{2}}\). Therein, one can further specify the generic expansion (3.3) to have \(f_{00}(z)=0\), such that6 Footnote 6: For several horizons, one should write such an expansion for each of them. \[f_{0}(\rho,z)=\rho^{2}f_{02}(z)+\rho^{4}f_{04}(z)+\ldots\, \tag{3.4}\] with \(\lim_{\rho\to 0}\rho^{2}f_{1}/f_{0}=\)const., as implied by the constraint equation \(E_{\rho}^{z}=0\). The horizon metric is given by \[d\sigma_{H}^{2}=f_{1}(0,z)dz^{2}+f_{2}(0,z)d\varphi^{2}. \tag{3.5}\] Two quantities associated with an event horizon are the event horizon area \(A_{H}\) and the Hawking temperature \(T_{H}\); they read \[A_{H}=2\pi\int_{z_{H_{1}}}^{z_{H_{2}}}dz\sqrt{f_{1}(0,z)f_{2}(0,z)}\,\qquad T _{H}=\frac{1}{2\pi}\lim_{\rho\to 0}\sqrt{\frac{f_{0}(\rho,z)}{\rho^{2}f_{1}( \rho,z)}}. \tag{3.6}\] The horizon has a spherical topology (despite the possible presence of conical singularities). 
A suggestive way to graphically represent its shape - which is generically very different from a round 2-sphere - is to define an _effective_ horizon radius R [4], by introducing an angular variable \(z=z(\theta)\), such that the horizon metric (3.5) becomes \[d\sigma^{2}={\rm R}^{2}(\theta)(d\theta^{2}+\sin^{2}\theta d \varphi^{2})\,\qquad\mbox{with}\ \ {\rm R}=\frac{\sqrt{f_{2}(0,z)}}{\sin\theta}\, \tag{3.7}\] where \[\theta(z)=2\arctan\left[C{\rm exp}\left(\int_{z_{H_{1}}}^{z}dx \sqrt{\frac{f_{1}(0,x)}{f_{2}(0,x)}}\right)\right]. \tag{3.8}\] As with the vacuum BW solution [4], the constant \(C\) is fixed by by requiring the horizon to be regular at the pole opposite to the other hole. Alternatively, one can use the standard approach developed by Smarr for the Kerr BH [46], and consider an isometric embedding of the horizon geometry (3.5) in \(\mathbb{E}^{3}\). Let us now consider the case of a generic spacelike \(\varphi-\)rod, for \(z_{S_{1}}\leqslant z\leqslant z_{S_{2}}\). Therein, one can further specify the generic expansion (3.3) to have \(f_{20}(z)=0\), such that, as \(\rho\to 0\): \[f_{2}(\rho,z)=\rho^{2}f_{22}(z)+\rho^{4}f_{24}(z)+\ldots. \tag{3.9}\] One important feature here is that the constraint equation \(E_{\rho}^{z}=0\) implies the condition \(f_{10}(z)/f_{22}(z)=\)const., \(i.e.\) a well-defined periodicity for the coordinate \(\varphi\), albeit not necessarily of \(2\pi\). A periodicity different from \(2\pi\) leads to the occurrence of a conical singularity. Its strength can be measured by means of the quantity \[\delta=2\pi\left(1-\lim_{\rho\to 0}\sqrt{\frac{f_{2}(\rho,z)}{ \rho^{2}f_{1}(\rho,z)}}\right). \tag{3.10}\] Then \(\delta>0\) corresponds to a conical deficit, while \(\delta<0\) corresponds to a conical excess. As with the "vacuum" case, a conical deficit can be interpreted as a string stretched along a certain segment of the \(z-\)axis, while a conical excess is a strut pushing apart the rods connected to that segment. A rescaling of \(\varphi\) can be used to eliminate possible conical singularities on a given \(\varphi\)-rod; but in the generic case, once this is fixed, there remain conical singularities along other \(\varphi\)-rods. Since we are interested in asymptotically flat solutions, we impose \(\delta=0\) for the semi-infinite spacelike rods. Another quantity of interest is the proper length of a \(\varphi\)-rod \[L=\int_{z_{S_{1}}}^{z_{S_{2}}}dz\sqrt{f_{1}(0,z)}\ ; \tag{3.11}\] for a finite rod, \(L\) differs from the coordinate distance \(z_{S_{1}}-z_{S_{2}}\). Let us now consider global quantities. For large \((\rho,|z|)\), the functions \(f_{i}\) should approach the Minkowski background functions, while the scalar field vanishes. The ADM mass \(M\) of the solutions can be read off from the asymptotic expression of the metric component \(g_{tt}\) \[-g_{tt}=f_{0}\sim 1-\frac{2GM}{\sqrt{\rho^{2}+z^{2}}}+\ldots. \tag{3.12}\] The balanced solutions with \(N\) horizons satisfy the Smarr relation [39] \[M=\frac{1}{2G}T_{H}\sum_{i=1}^{N}A_{H}^{(i)}+M_{(\Phi)}\, \tag{3.13}\] where \[M_{(\Phi)}=-\int d^{3}x\sqrt{-g}(2T_{t}^{t}-T_{\alpha}^{\alpha})\, \tag{3.14}\] is the contribution to the total mass of the matter outside the event horizon. They also satisfy the \(1^{st}\) law of thermodynamics \[dM=T_{H}\frac{1}{4G}\sum_{i=1}^{N}dA_{H}^{(i)}. \tag{3.15}\] To measure the hairiness of a configuration we define the parameter [40] \[p\equiv\frac{M_{(\Phi)}}{M}\, \tag{3.16}\] with \(p=0\) in the vacuum case and \(p=1\) for horizonless configurations. 
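In a numerical implementation these quantities reduce to one-dimensional quadratures along the axis plus simple \(\rho\to 0\) limits. The sketch below is only meant to make (3.6), (3.10) and (3.11) concrete; it assumes, hypothetically, that the solver exposes the metric functions as vectorized callables f0, f1, f2 of \((\rho,z)\), and it approximates the limits by sampling at a small but finite \(\rho\).

```python
# Horizon and axis diagnostics from a numerical solution in the form (3.1).
# Assumed (hypothetical) interface: f0(rho, z), f1(rho, z), f2(rho, z) accept
# numpy arrays in z; rho -> 0 limits are approximated at rho = rho_eps.
import numpy as np


def horizon_area(f1, f2, z_grid):
    """A_H = 2 pi * int dz sqrt(f1(0,z) f2(0,z)) over a timelike rod, eq. (3.6)."""
    integrand = np.sqrt(f1(0.0, z_grid) * f2(0.0, z_grid))
    return 2 * np.pi * np.trapz(integrand, z_grid)


def hawking_temperature(f0, f1, z_mid, rho_eps=1e-4):
    """T_H = (1/2 pi) lim_{rho->0} sqrt(f0 / (rho^2 f1)), eq. (3.6)."""
    return np.sqrt(f0(rho_eps, z_mid) / (rho_eps**2 * f1(rho_eps, z_mid))) / (2 * np.pi)


def conical_strength(f1, f2, z_mid, rho_eps=1e-4):
    """delta = 2 pi (1 - lim_{rho->0} sqrt(f2 / (rho^2 f1))) on a phi-rod, eq. (3.10)."""
    ratio = np.sqrt(f2(rho_eps, z_mid) / (rho_eps**2 * f1(rho_eps, z_mid)))
    return 2 * np.pi * (1.0 - ratio)


def proper_length(f1, z_grid):
    """L = int dz sqrt(f1(0,z)) along a phi-rod, eq. (3.11)."""
    return np.trapz(np.sqrt(f1(0.0, z_grid)), z_grid)
```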
Finally, let us mention that, as discussed in Section 3.3, the single BH limit of this framework leads to results similar to those found by employing a metric ansatz in term of the usual spherical coordinates. ### The boundary conditions for the 2BHs construction The above considerations allow for a consistent construction of Einstein-scalar field generalizations of the BW solution by solving numerically the field equations (A.1), (A.2) within a non-perturbative approach. The presence of an arbitrary number of horizons is automatically imposed by the rod structure, leading to a standard boundary value problem. We assume the rod structure of a generic 2BH system to mimic that of the BW solution considered in Fig. 1: a semi-infinite spacelike rod \([-\infty,z_{1}]\) in the \(\varphi\)-direction (with \(f_{2}(0,z)=0\)); a first (finite) timelike rod in the interval \([z_{1},z_{2}]\) (with \(f_{0}(0,z)=0\)); another spacelike rod \([z_{2},z_{3}]\) (with \(f_{2}(0,z)=0\)); a second (finite) timelike rod \([z_{3},z_{4}]\) (with \(f_{0}(0,z)=0\)); finally, a second semi-infinite spacelike rod along \([z_{4},\infty]\) (with \(f_{2}(0,z)=0\)), again in the \(\varphi\)-direction. The \(\mathbb{Z}_{2}\)-symmetric BW solution has, in accordance to Fig. 1 (right panel): \[z_{1}=-\Delta z/2-\bar{\mu}\,\qquad z_{2}=-\Delta z/2\,\qquad z_{3}=\Delta z /2\,\qquad z_{4}=\Delta z/2+\bar{\mu}. \tag{3.17}\] In practice, we have found it convenient to take \[f_{i}=f_{i}^{(0)}e^{2F_{i}}\, \tag{3.18}\] where \(f_{i}^{(0)}\) are background functions, given by the metric functions of the BW solution (2.17)-(2.19), with the dictionary (3.2), while \(F_{i}\) are unknown functions encoding the corrections to the BW metric. The equations satisfied by the \(F_{i}\) can easily be derived from (A.1) and we shall not display them here. In this approach, the functions \(f_{i}\) automatically satisfy the desired rod structure, which are enforced by the use of background functions \(f_{i}^{(0)}\), 'absorbing' also the divergencies associated with coordinate singularities and (for \(f_{2}\)) coming from the imposed asymptotic behaviour.7 We assume that \(F_{i}\) are finite everywhere. Footnote 7: A qualitatively similar approach has been used in Ref. [41] to construct generalizations of the Emparan-Reall black ring solution [22]. The boundary conditions satisfied by the metric functions \(F_{i}\) are \[\partial_{\rho}F_{i}|_{\rho=0}=0\,\qquad\mbox{for}\ \ -\infty<z<\infty\, \qquad\mbox{and}\qquad F_{i}=0\qquad\mbox{for}\qquad\rho\to\infty\ \ \mbox{or}\ \ z\to\pm\infty. \tag{3.19}\] Asymptotic flatness imposes \(F_{1}=F_{2}\) for the semi-infinite spacelike rods, while \(F_{1}-F_{2}\) takes a constant value for a finite spacelike rod. Moreover, \(F_{1}-F_{0}\) is constant for a timelike rod. The boundary conditions for the scalar field are \[\partial_{\rho}\phi|_{\rho=0}=0\,\qquad\mbox{for}\ \ -\infty<z<\infty\, \qquad\mbox{and}\qquad\phi=0\qquad\mbox{for}\qquad\rho\to\infty\qquad\mbox{ or}\qquad z\to\pm\infty\,\] except for \(m\neq 0\) (with \(m\) the integer in the scalar ansatz (2.5)), in which case one imposes \[\phi|_{\rho=0}=0\,\quad\mbox{for a $\varphi$}-\mbox{rod}. \tag{3.20}\] We focus on solutions possessing a \(\mathbb{Z}_{2}\)-symmetry, \(i.e.\) with two identical BHs, such that the thermal equilibrium is guaranteed. Then the auxiliary metric functions \(F_{i}\) satisfy the condition \(F_{i}(\rho,-z)\)=\(F_{i}(\rho,z)\), with Neumann boundary conditions at \(z=0\). The situation with the scalar field is different. 
Although we have found evidence for the existence of 2BH solutions with an \(even\) parity scalar field amplitude - \(i.e.\)\(\phi(\rho,-z)=\phi(\rho,z)\) - all such configuration studied so far still possess a conical singularity, \(\delta\neq 0\). On the other hand, balanced solutions exist for odd-parity scalar fields, \(\phi(\rho,-z)=-\phi(\rho,z)\), this being the case for all solutions reported in this work.8 The energy-momentum tensor of the scalar field is still invariant under the transformation \(z\to-z\), with the existence of two regions (on the semi-infinite spacelike rods), where the scalar energy has the strongest support. The existence of configurations with \(\delta=0\) can presumably be attributed to the extra-interaction between these two distinct constituents. Footnote 8: The same model containts single BH solutions with an odd-parity scalar field and \(m\geqslant 0\). The phase diagram is complicated, and will be reported elsewhere. We have solved the resulting set of four coupled non-linear elliptic PDEs numerically, subject to the above boundary conditions. Details on the used numerical methods and on a new coordinate system better suited for the numerical study are presented in Appendix B. ### The single (spherical) BH limit in Weyl-type coordinates Before addressing the construction of 2BH solutions, it is interesting to consider the limit of the proposed formalism with \[\Delta z=0\, \tag{3.21}\] \(i.e.\) a single BH horizon. This study is technically simpler, although it contains already some basic ingredients of the general 2BHs case. For a generic matter content, the spherically symmetric solutions are usually studied in Schwarzschild-like coordinates.9 A common parameterization is Footnote 9: The considerations in this subsection can easily be generalized for a different metric gauge choice in (3.22). \[ds^{2}=-N(r)e^{-2\delta(r)}dt^{2}+\frac{dr^{2}}{N(r)}+r^{2}(d \theta^{2}+\sin^{2}\theta d\varphi^{2})\, \tag{3.22}\] where \(r\) is a radial coordinate and \(0\leqslant\theta\leqslant\pi\). The event horizon is located at some \(r=r_{h}>0\), where \(N(r_{h})=0\) and \(\delta(r_{h})\) is finite. The metric functions \(N(r)\) and \(\delta(r)\) are found by solving the Einstein-matter field equations. Any specific geometry written in the form (3.22) can, however, be transformed into Weyl-like coordinates. In principle, the coordinate transformation between \((\rho,z)\) in the Weyl-like line element (3.1) and \((r,\theta)\) in (3.22) is simple enough, with \[\rho=c_{0}\sinh T(r)\sin\theta\,\qquad z=c_{0}\cosh T(r)\cos\theta\,\qquad{\rm where }\qquad T(r)\equiv\int\frac{dr}{r\sqrt{N(r)}}. \tag{3.23}\] The constant \(c_{0}\) is usually fixed by imposing that asymptotically \(\sqrt{\rho^{2}+z^{2}}\to r\). Then, the (generic) expressions of the metric functions in (3.1) read \[f_{0}(\rho,z)=N(r)e^{-2\delta(r)}\,\qquad f_{1}(\rho,z)=\frac{r^{2}}{c_{0}^ {2}[\cosh^{2}T(r)-\cos^{2}\theta]}\,\qquad f_{2}(\rho,z)=r^{2}\sin^{2}\theta\, \tag{3.24}\] with \(r,\theta\) functions of \(\rho\) and \(z\), as found from (3.23). For the (simplest) case of a Schwarzschild solution, in the gauge (3.22) it has \(N(r)=1-2M/r\) and \(\delta(r)=0\); then one finds (taking \(c_{0}=M\)) \[T(r)=2\log\left(\sqrt{\frac{r}{2\bar{\mu}}}+\sqrt{\frac{r}{2\bar{\mu}}-1} \right)\, \tag{3.25}\] and the coordinate transformation (2.14). Then, from (3.24), one finds, upon using the dictionary (3.2) with \({\cal C}=0\), the forms (2.12) and (2.13), which is the \(\Delta z=0\) limit of the BW solution. 
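The integral in (3.23) can be cross-checked against the closed form (3.25) in this Schwarzschild case; a minimal sketch, assuming \(M=1\), identifying \(\bar{\mu}\) with \(M\) as in the \(\Delta z=0\) limit, and fixing the integration constant so that \(T(2M)=0\):

```python
# Cross-check of (3.23) against the closed form (3.25) for Schwarzschild,
# N(r) = 1 - 2M/r, delta(r) = 0, with mu_bar identified with M and T(2M) = 0.
import numpy as np
from scipy.integrate import quad

M = 1.0


def T_numeric(r):
    """T(r) = int_{2M}^{r} dr' / (r' sqrt(N(r'))), eq. (3.23); the 1/sqrt
    singularity at r' = 2M is integrable."""
    value, _ = quad(lambda rp: 1.0 / (rp * np.sqrt(1.0 - 2.0 * M / rp)), 2.0 * M, r)
    return value


def T_closed(r):
    """Closed form (3.25)."""
    x = r / (2.0 * M)
    return 2.0 * np.log(np.sqrt(x) + np.sqrt(x - 1.0))


for r in (2.5, 5.0, 50.0):
    print(r, T_numeric(r), T_closed(r))   # the two values should agree
```

For a numerically known \(N(r)\), the same quadrature yields \(T(r)\), and hence the map (3.23), pointwise.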
It turns out, however, that the integral (3.23) which determines \(T(r)\), can only be computed analytically for very special cases. In general, one can find an analytic expression for \(T(r)\) - and thus for the coordinate transformation - only for \(r\to r_{h}\), or for large \(r\). Assuming that the horizon is non-extremal, the generic behavior as \(r\to r_{h}\) of the functions \(N(r)\) and \(\delta(r)\) is \[N(r)=N_{1}(r-r_{h})+{\cal O}(r-r_{h})^{2}\,\qquad\delta(r)=\delta_{0}+{\cal O }(r-r_{h})\, \tag{3.26}\] with \(N_{1}>0\) and \(\delta_{0}\) model-specific parameters. Then, from (3.23), one finds the following general expressions \[T(r)=\frac{2\sqrt{r-r_{h}}}{r_{h}\sqrt{N_{1}}}+\ldots\,\qquad{\rm and} \qquad\rho=c_{0}\frac{2\sqrt{r-r_{h}}}{r_{h}\sqrt{N_{1}}}\sin\theta\,\qquad z=c_{0}\left(1+ \frac{2(r-r_{h})}{r_{h}^{2}N_{1}}\right)\cos\theta\, \tag{3.27}\] which is the leading order result in a \((r-r_{h})\)-expansion. The same approximation implies \[r=r_{h}+\frac{r_{h}^{2}N_{1}}{4}\frac{\rho^{2}}{c_{0}^{2}-z^{2}}\,\qquad \cos\theta=\frac{z}{c_{0}}. \tag{3.28}\] Then the standard near horizon behaviour (3.26) of a generic solution translates into a well-defined Schwarzschild-like rod-structure in Weyl-like coordinates. For example, for \(-c_{0}\leqslant z\leqslant c_{0}\) and \(\rho\to 0\) one finds the standard timelike rod behaviour discussed above, with the leading coefficients in (3.3)-(3.4) given by: \[f_{10}(z)=\frac{r_{h}^{2}}{c_{0}^{2}-z^{2}}\,\qquad f_{20}(z)=r_{h}^{2} \left(1-\frac{z^{2}}{c_{0}^{2}}\right)\,\qquad f_{02}(z)=\frac{r^{-2\delta_{0}}N_{1}^{2}r_{h}^{2}}{4(c_{0}^{2}-z^{2} )}. \tag{3.29}\] One can easily verify that the above expressions imply the same form of the Hawking temperature and horizon area \(T_{H}=e^{-\delta_{0}}N_{1}/(4\pi)\) and \(A_{H}=4\pi r_{h}^{2}\), as found for a Schwarzschild-like line element. Also, outside this \(z-\)interval, \(g_{\varphi\varphi}\equiv f_{2}\rightarrow\rho^{2}\) as \(\rho\to 0\), while other functions are strictly positive. Further progress can be achieved within a numerical approach, \(i.e.\) by computing the same solutions in Weyl-type coordinates (3.1) and in the Schwarzschild-like coordinate system (3.22) (a case which is technically much simpler, since it results in ordinary differential equations rather than PDEs) and comparing the results. We have considered this task for the case of spherically symmetric BH solutions of the considered model (2.1) and an _even_-parity (spherically symmetric) scalar field. When using the coordinate system (3.1), the problem was solved by using the formalism in Section 3.2, in particular the ansatz (3.18) with a background corresponding to the Schwarzschild solution (2.12) and (2.13). Comparing the results found in two different coordinate systems makes clear that the proposed framework can be used to study non-vacuum BHs in Weyl-type coordinates. ## 4 An illustration: two BHs balanced by their scalar hair We shall now apply the formalism developed to a concrete case. We shall construct 2BHs configurations, numerically, balanced by their scalar hair. Since we will be dealing with neutral, static BHs, and minimally coupled scalar fields, this requires the scalar potential to violate the weak energy condition in some spacetime regions [33]. In subsection 4.1 we provide details about the scalar field model and in subsection 4.2 the constructed solutions are reported. 
### The scalar field potential and scaling properties We shall assume a \(Q\)-ball type scalar field potential which is bounded from below, \[U(\Phi)=\mu^{2}|\Phi|^{2}-\lambda|\Phi|^{4}+\nu|\Phi|^{6}\, \tag{4.1}\] where \(\mu,\ \lambda,\ \nu\) are positive constants. Differently from the canonical \(Q\)-ball case [42], however, here the scalar field has no harmonic time dependence. Importantly, the potential _is not_ strictly positive, with \[\lambda^{2}>4\mu^{2}\nu. \tag{4.2}\] Turning now to scaling symmetries of the problem, we notice first that the equations of motion are invariant under the following scaling of the coordinates \(x^{i}=(\rho,z)\) together with the parameters of the scalar potential (in the relations below, the functions or constants which are not mentioned explicitly remain invariant): \[x^{i}=c\tilde{x}^{i}\,\qquad\mu=\frac{1}{c}\tilde{\mu}\,\qquad\lambda=\frac{ \tilde{\lambda}}{c^{2}}\,\qquad\nu=\frac{\tilde{\nu}}{c^{4}}\, \tag{4.3}\] with an arbitrary \(c>0\). Some relevant quantities scale as \[M=c\bar{M}\,\qquad T_{H}=\frac{1}{c}\bar{T}_{H}\,\qquad A_{H}=c^{2}\bar{A}_{ H}\,\qquad L=c\bar{L}. \tag{4.4}\] The equations are also invariant under a suitable scaling of the scalar field together with some coupling constants, which do not affect the coordinates: \[\phi=c\tilde{\phi}\,\qquad\lambda=\frac{\tilde{\lambda}}{c^{2}}\,\qquad\nu= \frac{\tilde{\nu}}{c^{2}}\,\qquad G=\frac{\tilde{G}}{c^{2}}\, \tag{4.5}\] while \(M=c^{2}\tilde{M}\), with \(T_{H},A_{H}\) and \(L\) unaffected. These symmetries are used in practice to simplify the numerical study of the solutions. First, the symmetry (4.3) is employed to work in units of length set by the scalar field mass, \[\tilde{\mu}=1\,\qquad i.e.\quad c=\frac{1}{\mu}. \tag{4.6}\] The second symmetry (4.5) is used to set to unity the coefficient of the quartic term in the scalar field potential,10 Footnote 10: Alternatively, one can use (4.5) to set \(\nu=1\) in the potential (4.1). \[\bar{\lambda}=1\,\qquad i.e.\quad c=\frac{1}{\sqrt{\lambda}}. \tag{4.7}\] It follows that two mass scales naturally emerge, one set by gravity, \(M_{\rm Pl}\equiv 1/\sqrt{G}\), and the other one set by the scalar field parameters, \(M_{0}\equiv\mu/\sqrt{\lambda}\). The ratio of these fundamental mass scales defines the dimensionless coupling constant \[\alpha\equiv\frac{M_{0}}{M_{\rm Pl}}\, \tag{4.8}\] which is relevant in the physics of the solutions. Apart from \(\alpha\), the second dimensionless input parameter is the (scaled) constant for the sextic term in the scalar potential, with \[\beta\equiv\frac{\nu\mu^{2}}{\lambda^{2}}. \tag{4.9}\] Then, the scaled scalar potential reads \(U(\phi)=\phi^{2}-\phi^{4}+\beta\phi^{6}\), while the Einstein equations become \(R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}=2\alpha^{2}T_{\alpha\beta}\). To summarize, after using the scaling symmetries, the problem still possesses four input parameters: \[\{\alpha,\ \ \beta\}\ \ {\rm and}\ \ \{\Delta z,\ \ \bar{\mu}\}\, \tag{4.10}\] two of them determined by the scalar field potential and the other two by the BW background "seed" solution. In the solutions with scalar hair, \(\Delta z\) and \(\bar{\mu}\) are still correlated with, but not strictly corresponding to, the distance between the horizons and the horizons mass, respectively. All quantities shown in this work are given in natural units set by \(G\) and \(\mu\), which, in order to simplify the plots, we take to unity in what follows. 
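For orientation, the scaled potential \(U(\phi)=\phi^{2}-\phi^{4}+\beta\,\phi^{6}\) and the condition (4.2), which in the scaled variables amounts to \(\beta<1/4\), can be probed directly; the snippet below simply locates the interval of \(\phi\) where \(U<0\), using the value of \(\beta\) quoted later for the configurations of Section 4.2.

```python
# The scaled sextic potential of Sec. 4.1 and the region where it is negative.
# Condition (4.2), lambda^2 > 4 mu^2 nu, reads beta < 1/4 in scaled variables.
import numpy as np


def U(phi, beta):
    return phi**2 - phi**4 + beta * phi**6


def negative_range(beta):
    """Interval of phi > 0 where U < 0, or None if the potential stays non-negative.

    Writing U = phi^2 (1 - phi^2 + beta*phi^4), U < 0 between the roots of
    beta*x^2 - x + 1 = 0 with x = phi^2 (real roots only for 0 < beta < 1/4)."""
    if not 0.0 < beta < 0.25:
        return None
    disc = np.sqrt(1.0 - 4.0 * beta)
    x_minus = (1.0 - disc) / (2.0 * beta)
    x_plus = (1.0 + disc) / (2.0 * beta)
    return np.sqrt(x_minus), np.sqrt(x_plus)


print(negative_range(0.0011))   # beta used for the solutions reported in Sec. 4.2
print(negative_range(0.3))      # beta > 1/4: the potential is non-negative everywhere
```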
### The balanced 2BHs system with scalar hair The _decoupling_ limit \(\alpha\to 0\), where \(\alpha\) defines the coupling of the scalar field to gravity as given by (4.8), corresponds to solutions of the scalar field equation (2.7) on a fixed BW geometry, \(i.e.\) (3.1) with (3.2) and (2.17)-(2.19) (thus, \(F_{i}=0\)). Some basic properties of the self-gravitating solutions are already present Figure 2: The conical excess/deficit \(\delta\) as defined by the relation (3.10) is shown as a function of the input parameter \(\bar{\mu}\) (related to the horizon size) for 2BHs with real scalar hair (\(m=0\)). The coordinate distance (parameter) between the horizons \(\Delta z\) is fixed, as well the theory parameters \(\alpha,\beta\). The three dots correspond to the solutions displayed in Fig. 3. The inset shows a similar plot, with \(\delta\) as a function of the theory parameter \(\alpha\) which measures the strength of gravity. The _balanced_ configurations are highlighted with black dots. in this case. First, they possess an odd-parity scalar field, describing a configuration with two constituents located at \(z=\pm z_{0}\), with \(z_{0}\) fixed by the maximum of the energy density, as resulting from numerics. For all solutions in this work (including the ones with scalar back-reaction), we have found that \(z_{0}>\Delta z+\bar{\mu}\). An interesting feature of the decoupling limit analysis is that the scalar field never trivializes; we could not find any indication for the existence of linear scalar clouds on a BW background. Also, although more work is necessary, these _non-linear cloud_ solutions appear to exist for arbitrary values of the BW parameters \(\Delta z,\bar{\mu}\). The backreaction is included by starting with the solutions in the decoupling limit, \(i.e.\) with given scalar field parameter \(\beta\) and BW background parameters \((\Delta z,\bar{\mu})\), and slowly increasing the coupling \(\alpha\). Most of the qualitative features of the BW solution are still preserved by their generalizations with scalar hair obtained in this way. The generic solutions possess a conical singularity which prevents their collapse, and no other pathologies. Moreover, by using the formalism in [3], one can show that the hairy solutions with conical singularities still admit a consistent thermodynamical description. In particular, when working with the appropriate set of thermodynamical variables, the Bekenstein-Hawking law still holds, with the entropy \(S=A_{H}/4\). The key aspect we wish to emphasise here is the evolution of the conical singularity strength \(\delta\) with increasing \(\alpha\). While \(\delta<0\) in the decoupling limit, one finds that \(|\delta|\) decreases as \(\alpha\) increases, with the existence of a critical value \(\alpha_{c}\) where \(\delta=0\), \(i.e.\) a balanced configuration. Moreover, \(\delta\) becomes positive for \(\alpha>\alpha_{c}\). This behaviour is shown for illustrative values of \((\Delta z,\bar{\mu})\) in the inset of Fig. 2. A more physical scanning of the solutions, on the other hand, fixes the theory, \(i.e.\) fixes \((\alpha,\,\beta)\), which are constants of the model. Then a sequences of balanced solutions in a given theory can be found by fixing the parameter \(\Delta z\) (related to the coordinate distance between the two BHs) and varying the input parameter \(\bar{\mu}\) (related to the BH size). As seen in the main panel of Fig. 2 the system becomes balanced for a critical value of the parameter \(\bar{\mu}\), only. 
For larger (smaller) \(\bar{\mu}\) the BHs are too heavy (light) and there is a conical excess/strut (deficit/string) in between them. This is illustrated in Fig. 3, where we display the horizon shape11, as given by the effective-radius function R, relation (3.7), for the three solutions highlighted with (red, black and blue) dots in Fig. 2. By repeating this procedure a (continuous) set of balanced solutions is found by varying the input parameter \(\Delta z\) and'shooting' for the critical values of \(\bar{\mu}\) which give \(\delta=0\).12 To Figure 3: The _effective_ horizon shape is shown for three different 2BH solutions with real scalar hair highlighted with (red, black, and blue) dots in Fig. 2. The coordinate distance between the horizons is \(\Delta z=0.4\), while \(\bar{\mu}=0.064\), \(0.091\), and \(0.137\), respectively (from left to the right). There is string for \(\delta>0\) (leftmost panel) and a strut for \(\delta<0\) (rightmost panel), connecting the two horizons. The middle configuration is balanced. summarize, by varying the value of \(\Delta z\) and by adjusting the value of \(\bar{\mu}\) via a'shooting' algorithm, the full spectrum of balanced 2BHs with given \((\alpha,\beta)\) can be recovered numerically, in principle. The balanced solutions have no singularities on and outside the horizon. This can be seen by computing the Ricci or the Kretschmann scalars, which are found to be finite everywhere. The scalar field is distributed around the two horizons. But we have found that the energy density, as given by the component \(T_{t}^{t}\) of the energy-momentum tensor always becomes negative in a region around the horizons. This also holds for the Komar mass-energy density \(T_{\alpha}^{\alpha}-2T_{t}^{t}\) - see the left panel in Fig. 5 -, although its integral (3.14) is always positive. The scalar field profile is a also smooth function - Fig. 5 (right panel). A full scanning of the parameter space of balanced solutions is beyond the purposes of this work. We shall focus on balanced solutions with a fixed set of the theory parameters (\(\alpha=0.0775\), \(\beta=0.0011\)) and \(m=1\), which we have studied more systematically13. It is reasonable to expect, albeit unproven, that one Figure 4: Profiles of the horizon embedding in \(\mathbb{E}^{3}\) for the first (”upper”) BH are shown for a set of \(m=1\) balanced solutions marked in Fig. 6. The distance between the BHs decreases from (1) to (5). Figure 5: The Komar mass-energy density (left panel) and the scalar field amplitude (right panel) are shown for a typical balanced \(m=0\) 2BHs system with scalar hair. The theory constants are \(\alpha=0.0775\), \(\beta=0.0011\) and geometric input parameters are \(\Delta z=1.24\), \(\bar{\mu}=0.071\). such case captures the basic features of the full space of solutions, at least for a nonzero \(m\). We emphasise, moreover, that we have established the existence of balanced solutions for other choices of \((\alpha,\beta)\). The numerical results indicate that, once the model is fixed, there exists a continuous set of balanced solutions, which can be labelled by \(L\), the proper distance between the horizons. The most relevant quantities resulting from numerics are shown in Fig. 6 as a function of \(L\) (we recall that all quantities there are given in units set by \(\mu\) and \(G\)). One observes that the solutions exist for a finite range of the proper distance, only (with \(L_{\rm min}>0\)). 
Thus, although one may expect to find balanced solutions with an arbitrarily small or large \(L\), this is not confirmed by numerics. Moreover, one, two or even three different solutions may exist for a given proper distance between the horizons, with a complicated branch structure in terms of \(L\). One observes that all relevant quantities vary significantly as \(L\) varies. Moreover, as expected from the study in the decoupling limit, \(p\) never vanishes, and it attains a minimal value at \(L_{\rm max}\). Thus, for any distance \(L\), a fraction of the spacetime energy must be stored in the scalar hair. Concerning the behaviour of the solutions at the ends of the \(L\)-interval, the numerical results suggest the existence of two limiting configurations with nonzero values for both \(M\) and \(A_{H}\), while the Hawking temperature vanishes, see right panel in Fig. 6. Although we could not obtain accurate enough solutions closer to these critical configurations, they are likely singular, with a divergent Ricci scalar outside the horizon. On general grounds, we expect all 2BH balanced solutions to be unstable against small perturbations, since their existence requires a fine-tuning between the BH parameters. This is supported by the observation that, for any value of the mass, the configuration maximizing the entropy corresponds to a (single) Schwarzschild vacuum BH, and not to a hairy 2BHs solution.

## 5 Further remarks

The asymptotically flat static 2BHs system in "vacuum" GR necessarily possesses a conical singularity. A main purpose of this work is to report the existence of static, balanced 2BHs configurations in Einstein-scalar models, without any electromagnetic fields and keeping asymptotic flatness. The solutions are supported by the existence of negative energy densities, as allowed by a choice of a scalar potential which is not strictly positive. Our study indicates the existence of a one-parameter family of balanced solutions, which can be parametrized by the physical distance between the horizons. Our approach is non-perturbative, solving directly the Einstein-scalar field equations in a Weyl-type coordinate system, subject to proper boundary conditions.

Figure 6: The ADM mass \(M\), the horizon area \(A_{H}\) (left panel), the hairiness parameter \(p\) and the Hawking temperature (right panel) of the balanced 2BHs system with \(m=1\), as functions of the proper distance \(L\) between the horizons.

In fact, the considered Einstein-scalar theory can be taken as a simple toy-model for other cases which may be physically more interesting, for instance avoiding negative scalar energies. Similar solutions are likely to exist in a variety of other models, with similar mechanisms at work. For example, effective negative energy densities naturally occur in theories with a Gauss-Bonnet term non-minimally coupled with a scalar field, see \(e.g.\) [43, 44]. Therefore it is natural to conjecture the existence of static balanced 2BHs solutions also in such models, although their investigation should be a more intricate task, by virtue of the complexity of the equations of motion. Another possible balancing mechanism is to include the effects of rotation. This is suggested by the situation with the black rings in five spacetime dimensions [22]. In that case, the static black ring is unbalanced, being supported against collapse by conical singularities [27]. Adding rotation balances the ring for a critical value of the event horizon velocity [22].
One may expect that co-rotation of two BHs may help to alleviate the need for negative energies, within four dimensional (non-vacuum) 2BH solutions.14 Indeed, one may consider the existence of spinning balanced binary BHs in a model with a complex massive scalar field _without_ self-interactions or negative scalar energies, where the existence of hair is allowed via the synchronization mechanism [48, 49].

Footnote 14: We recall that the double-Kerr “vacuum” solution still possesses conical singularities, see \(e.g.\) [45, 12] and the references therein.

## Acknowledgements

This work is supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundacao para a Ciencia e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020. The authors acknowledge support from the projects CERN/FIS-PAR/0027/2019, PTDC/FIS-AST/3041/2020, CERN/FIS-PAR/0024/2021 and 2022.04560.PTDC. This work has further been supported by the European Union's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 Grant No. FunFiCO-777740 and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 Grant No. NewFunFiCO-101086251. Computations have been performed at the Argus and Blafis cluster at the U. Aveiro.

## Appendix A Field equations in the parameterization (3.1)

For model (2.1), with the ansatz (3.1) and (2.5), an appropriate combination of the Einstein equations, \(E_{t}^{t}=0\), \(E_{\rho}^{\rho}+E_{z}^{z}=0\) and \(E_{\varphi}^{\varphi}=0\), yields the following set of equations for the functions \(f_{0},\ f_{1}\) and \(f_{2}\): \[\nabla^{2}f_{0}-\frac{1}{2f_{0}}(\nabla f_{0})^{2}+\frac{1}{2f_{2}}(\nabla f_{0})\cdot(\nabla f_{2})+4Gf_{0}f_{1}U(\phi)=0\,\] \[\nabla^{2}f_{1}-\frac{1}{f_{1}}(\nabla f_{1})^{2}-\frac{f_{1}}{2f_{0}f_{2}}(\nabla f_{0})\cdot(\nabla f_{2})+4Gf_{1}\left[(\nabla\phi)^{2}-\frac{m^{2}f_{1}\phi^{2}}{f_{2}}\right]=0\,\] \[\nabla^{2}f_{2}-\frac{1}{2f_{2}}(\nabla f_{2})^{2}+\frac{1}{2f_{0}}(\nabla f_{0})\cdot(\nabla f_{2})+4Gf_{1}(f_{2}U(\phi)+2m^{2}\phi^{2})=0\.\] (A.1) The equation for the scalar field amplitude \(\phi\) is \[\nabla^{2}\phi+\frac{1}{2f_{0}}(\nabla f_{0})\cdot(\nabla\phi)+\frac{1}{2f_{2}}(\nabla f_{2})\cdot(\nabla\phi)-\frac{m^{2}f_{1}}{f_{2}}\phi-\frac{f_{1}}{2}\frac{dU(\phi)}{d\phi}=0\.\] (A.2) We have defined, acting on arbitrary functions \(\mathcal{F}(\rho,z)\) and \(\mathcal{G}(\rho,z)\), \[(\nabla\mathcal{F})\cdot(\nabla\mathcal{G})\equiv\partial_{\rho}\mathcal{F}\partial_{\rho}\mathcal{G}+\partial_{z}\mathcal{F}\partial_{z}\mathcal{G}\,\qquad\nabla^{2}\mathcal{F}\equiv\partial_{\rho}^{2}\mathcal{F}+\partial_{z}^{2}\mathcal{F}\.\] (A.3)

## Appendix B A new coordinate system and details on the numerics

Even though the 2BHs solutions reported in this paper can be constructed by employing the Weyl-type coordinates \((\rho,z)\), the metric ansatz (3.1) has a number of disadvantages. For example, the coordinate range is unbounded for both \(\rho\) and \(z\), which makes it difficult to extract with enough accuracy the value of the mass parameter \(M\) from the asymptotic form of the metric function \(f_{0}\).
Thus, in practice, to solve numerically the Einstein-scalar field equations, we have found it more convenient to introduce the new coordinates \((r,\theta)\) related to \((\rho,z)\) in (3.1) by \[\rho=\frac{r^{2}-a^{2}}{r}\sin\theta\,\qquad z=\frac{r^{2}+a^{2}}{r}\cos\theta\,\qquad\mbox{with}\qquad\bar{\mu}=\frac{(a-b)^{2}}{b}\,\qquad\Delta z=4a\,\] (B.1) with ranges \(a\leqslant r<\infty\) and \(0\leqslant\theta\leqslant\pi\), and reparametrize the metric (3.1) as \[ds^{2}=-\hat{f}_{0}(r,\theta)dt^{2}+\hat{f}_{1}(r,\theta)(dr^{2}+r^{2}d\theta^{2})+\hat{f}_{2}(r,\theta)d\varphi^{2}\.\] (B.2) The rod structure in (3.1) is still preserved for the new coordinate system, although it becomes less transparent - see Fig. 7.

Figure 7: The 2BHs rod structure in Fig. 1 is displayed for \((r,\theta)\)-coordinates, as defined by Eq. (B.1).

The two BHs horizons are now located15 at: (i) \(\theta=0\), for \(a\leqslant r\leqslant b\); and (ii) at \(\theta=\pi\), \(a\leqslant r\leqslant b\). The rod separating the BHs is located at \(r=a\) and \(0\leqslant\theta\leqslant\pi\).

Footnote 15: The horizons vanish in the limit \(b\to a\), in which case the coordinate transformation \[{\cal R}=r\sqrt{1+\frac{a^{4}}{r^{4}}+\frac{2a^{2}\cos 2\theta}{r^{2}}},\ \ {\cal R}\sin\Theta=\frac{(r^{2}-a^{2})\sin\theta}{r}.\] (B.3) results in the flat spacetime metric in usual spherical coordinates, \(ds^{2}=d{\cal R}^{2}+{\cal R}^{2}(d\Theta^{2}+\sin^{2}\Theta d\varphi^{2})-dt^{2}\).

Analogous to the case of Weyl-type coordinates, we define \[\hat{f}_{i}=\hat{f}_{i}^{0}e^{2\hat{F}_{i}}\,\] (B.4) with the background functions \(\hat{f}_{i}^{0}\) corresponding to the vacuum BW solution expressed in the \((r,\theta)\)-coordinates. Their explicit expressions read: \[\hat{f}_{0}^{0}(r,\theta)=\frac{a^{2}b-(a-b)^{2}r+br^{2}-2abr\cos\theta+\sqrt{(b^{2}+r^{2}-2br\cos\theta)(a^{4}+b^{2}r^{2}-2a^{2}br\cos\theta)}}{-2abr+a^{2}(b+r)+br(b+r)-2abr\cos\theta+\sqrt{(b^{2}+r^{2}-2br\cos\theta)(a^{4}+b^{2}r^{2}-2a^{2}br\cos\theta)}}\] \[\times\frac{a^{2}b-(a-b)^{2}r+br^{2}+2abr\cos\theta+\sqrt{(b^{2}+r^{2}+2br\cos\theta)(a^{4}+b^{2}r^{2}+2a^{2}br\cos\theta)}}{-2abr+a^{2}(b+r)+br(b+r)+2abr\cos\theta+\sqrt{(b^{2}+r^{2}+2br\cos\theta)(a^{4}+b^{2}r^{2}+2a^{2}br\cos\theta)}}\,\] (B.5)
\[\hat{f}_{1}^{0}(r,\theta)=\frac{S(r,\theta)\Omega(r,\theta)}{\hat{f}_{0}(r,\theta)}\,\qquad\hat{f}_{2}^{0}(r,\theta)=\frac{(r^{2}-a^{2})^{2}\sin^{2}\theta}{r^{2}}\frac{1}{\hat{f}_{0}(r,\theta)}\,\] (B.6) with \[S(r,\theta)=\frac{(a^{2}-r^{2})^{2}}{2\sqrt{(b^{4}+r^{4}-2b^{2}r^{2}\cos 2\theta)(a^{8}+b^{4}r^{4}-2a^{4}b^{2}r^{2}\cos 2\theta)}}\] (B.7) \[\times\frac{a^{4}b+2a(a^{2}+b^{2})r^{2}+b^{4}-(a+b)^{2}r(a^{2}+r^{2})\cos\theta+2a^{2}br^{2}\cos 2\theta+(a^{2}+r^{2}-2ar\cos\theta)\sqrt{(b^{2}+r^{2}-2br\cos\theta)(a^{4}+b^{2}r^{2}-2a^{2}br\cos\theta)}}{a^{4}+r^{4}-2a^{2}r^{2}\cos 2\theta}\] \[\times\frac{a^{4}b+2a(a^{2}+b^{2})r^{2}+b^{4}+(a+b)^{2}r(a^{2}+r^{2})\cos\theta+2a^{2}br^{2}\cos 2\theta+(a^{2}+r^{2}+2ar\cos\theta)\sqrt{(b^{2}+r^{2}+2br\cos\theta)(a^{4}+b^{2}r^{2}+2a^{2}br\cos\theta)}}{a^{4}-2a(a^{2}+b^{2})r^{2}+br^{4}-(a-b)^{2}r^{2}\cos\theta+2a^{2}br^{2}\cos 2\theta+(a^{2}+r^{2}+2ar\cos\theta)\sqrt{(b^{2}+r^{2}-2br\cos\theta)(a^{4}+b^{2}r^{2}-2a^{2}br\cos\theta)}}\] \[\times\frac{a^{4}b^{2}-(a^{2}+b^{2})^{2}+b^{2}r^{4}+2a^{2}b^{2}r^{2}\cos 2\theta+\sqrt{(b^{4}+r^{4}-2b^{2}r^{2}\cos 2\theta)(a^{8}+b^{4}r^{4}-2a^{4}b^{2}r^{2}\cos 2\theta)}}{a^{4}b-2a(a^{2}+b^{2})r^{2}+br^{4}+(a-b)^{2}r(a^{2}+r^{2})\cos\theta+2a^{2}br^{2}\cos 2\theta+(a^{2}+r^{2}-2ar\cos\theta)\sqrt{(b^{2}+r^{2}+2br\cos\theta)(a^{4}+b^{2}r^{2}+2a^{2}br\cos\theta)}}\] and \[\Omega(r,\theta)=1+\frac{a^{4}}{r^{4}}-\frac{2a^{2}\cos 2\theta}{r^{2}}\.\] (B.8)

With this parameterization, we solve numerically the resulting set of four coupled non-linear elliptic PDEs for the functions \((\hat{F}_{i},\phi)\), subject to the set of boundary conditions we now describe. At \(r=a\) one imposes (\(i=0,1,2\)) \[\partial_{r}\hat{F}_{i}|_{r=a}=0,\ \partial_{r}\phi|_{r=a}=0\ \ \mbox{for}\ \ m=0,\ \ \mbox{and}\ \ \phi|_{r=a}=0\ \ \mbox{for}\ \ m\neq 0\.\] (B.9) The constraint equation \(E_{r}^{\theta}=0\) implies that \(\hat{F}_{2}-\hat{F}_{1}|_{r=a}=\)const. (\(i.e.\) a constant value of the conical deficit/excess \(\delta\)). At \(\theta=0\) the boundary conditions are \[\partial_{\theta}\hat{F}_{i}|_{\theta=0}=\partial_{\theta}\phi|_{\theta=0}=0\,\] (B.10) where again, the constraint equation \(E_{r}^{\theta}=0\) requires \(\hat{F}_{0}-\hat{F}_{1}=\)const. for \(a\leqslant r\leqslant b\) (\(i.e.\) a constant value of the Hawking temperature). For \(r>b\) one finds another supplementary condition, \(\hat{F}_{2}=\hat{F}_{1}\), thus the absence of a conical singularity on the outer \(z\)-axis, while for \(m\neq 0\) one imposes \(\phi|_{\theta=0}=0\) instead of a Neumann boundary condition. Similar boundary conditions are found for \(\theta=\pi.\) At infinity, one imposes the conditions \[\hat{F}_{i}|_{r=\infty}=\phi|_{r=\infty}=0\,\] (B.11) Moreover, the problem still possesses a \(Z_{2}\)-symmetry, which allows one to solve the equations for \(0\leqslant\theta\leqslant\pi/2\), only. The following boundary conditions are imposed at \(\theta=\pi/2\) \[\partial_{\theta}\hat{F}_{i}|_{\theta=\pi/2}=\phi|_{\theta=\pi/2}=0\.\] (B.12) All numerical calculations are performed by using a professional finite difference solver, which uses a Newton-Raphson method. A detailed presentation of this code is given in [47]. In our approach, one introduces the new radial variable \(x=(r-a)/(c+r)\) (with \(c\) a properly chosen constant) which maps the semi-infinite region \([a,\infty)\) to the compact region \([0,1]\). The equations for \(\hat{F}_{i}\) are discretized on a non-equidistant grid in \(x\) and \(\theta\).
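For concreteness, a minimal sketch of how such a compactified, non-equidistant grid can be set up is given below. The values of \(a\) and \(c\), the grid size, and the particular clustering of points are arbitrary choices made only for illustration; they do not correspond to the actual grids used for the solutions reported here.

```cpp
#include <cstdio>
#include <vector>

// Illustrative construction of a non-equidistant grid in the compactified
// radial coordinate x = (r - a)/(c + r), which maps r in [a, infinity)
// to x in [0, 1]; the inverse map is r = (a + c*x)/(1 - x).
int main() {
    const double a = 0.1, c = 1.0;   // illustrative values only
    const int nx = 201;
    std::vector<double> x(nx), r(nx);
    for (int i = 0; i < nx; ++i) {
        double s = static_cast<double>(i) / (nx - 1);
        x[i] = s * s * (3.0 - 2.0 * s);        // cluster points near x=0 and x=1
        r[i] = (a + c * x[i]) / (1.0 - x[i]);  // r diverges as x -> 1 (infinity)
    }
    // print a few representative grid points
    std::printf("x[0]=%g r[0]=%g, x[100]=%g r[100]=%g, x[199]=%g r[199]=%g\n",
                x[0], r[0], x[100], r[100], x[199], r[199]);
    return 0;
}
```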
Typical grids used have sizes around \(200\times 50\) points, covering the integration region \([0,1]\times[0,\pi/2]\). The numerical error for the solutions reported in this work is estimated to be typically \(<10^{-3}\). However, the errors increase when studying solutions with small \(a\) and for \(b\) much larger than \(a\) (\(i.e.\) a large separation of the BHs).

## Appendix C Numerical construction of the 2RNBHs solution

The simplest application of the proposed formalism consists in recovering the double Reissner-Nordstrom solution by solving numerically the Einstein-Maxwell equations \[R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}R=2G\left(F_{\alpha\gamma}F_{\beta}^{\gamma}-\frac{1}{4}g_{\alpha\beta}F^{2}\right)\,\quad\nabla_{\alpha}F^{\alpha\beta}=0\,\] (C.1) with \(F=dA\) the Maxwell field strength tensor. Restricting again to a \(\mathbb{Z}_{2}\)-symmetric solution, we consider the metric ansatz (B.2), (B.4) and a purely electric Maxwell field, \[A=V(r,\theta)dt\,\] (C.2) and solve numerically the equations for \(\hat{F}_{i}\), \(V\) by using the approach described above. The boundary conditions satisfied by the electrostatic potential \(V\) are \[\partial_{r}V|_{r=a}=0,\ \ V|_{r=\infty}=V_{0},\ \ \partial_{\theta}V|_{\theta=\pi/2}=0\,\] (C.3) together with \[V|_{\theta=0}=0\ \ \text{for}\ \ a\leqslant r\leqslant b\ \ \text{and}\ \ \partial_{\theta}V|_{\theta=0}=0\ \ \text{for}\ \ r>b\.\] (C.4) The numerical approach is similar to that employed for 2BHs with scalar hair, the input parameters being the rod coordinates \(\{a,b\}\) together with \(V_{0}\). However, the corresponding exact solution is known in this case16, and can be written in the form (B.2), (B.4) with \[e^{2\hat{F}_{0}}=\frac{1}{P^{2}},\ \ e^{2\hat{F}_{1}}=e^{2\hat{F}_{2}}=P^{2},\ \ V=\frac{\tanh\gamma}{P}\hat{f}_{0}^{0},\ \ \text{with}\ \ P=\cosh^{2}\gamma-\sinh^{2}\gamma\hat{f}_{0}^{0}\,\] (C.5) where \(\gamma\) is an (arbitrary) real parameter corresponding to the electrostatic potential of the configurations, \(\tanh\gamma=V_{0}\).

Footnote 16: We recall that the 2RNBHs solution can be generated from the BW vacuum one by using a suitable Harrison transformation (see _e.g._ [4]).

Then a straightforward computation leads to the following expressions for several quantities of interest \[M=\frac{(a-b)^{2}}{b}\frac{1+V_{0}^{2}}{1-V_{0}^{2}}\,\qquad Q_{e}=\frac{2(a-b)^{2}}{b}\frac{V_{0}}{1-V_{0}^{2}}\,\] \[A_{H}=\frac{8\pi(a-b)^{4}(a^{2}+b^{2})}{b^{2}(a+b)^{2}}\frac{1}{(1-V_{0}^{2})^{2}}\,\qquad T_{H}=\frac{b(a+b)^{2}}{8\pi(a-b)^{2}(a^{2}+b^{2})}(1-V_{0}^{2})^{2}\,\] (C.6) with \(Q_{e}\) the total electric charge17.

Figure 8: The mass \(M\) and the Hawking temperature \(T_{H}\) of the 2RNBHs system are shown as a function of the electrostatic potential \(V_{0}\) for both theory (red line) and numerical results (dots). The insets show the relative difference. The geometric input parameters are \(a=0.1\), \(b=0.221\).

In Fig. 8 we show a comparison between the theory and numerical results for the mass and Hawking temperature as a function of \(V_{0}\) for some fixed \((a,b)\). The insets there give an overall estimate for the numerical accuracy of the solutions, which is consistent with other diagnostics provided by the solver (similar results hold for \(Q_{e}\) and \(A_{H}\); also a similar picture is found when varying \(a\) or \(b\) at fixed \(V_{0}\)). This supports the use of the proposed numerical scheme in the construction of multi-BH solutions in the presence of matter fields.
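As a simple cross-check of the kind shown in Fig. 8, the closed-form expressions (C.6) can be evaluated directly; the sketch below does this for the rod coordinates quoted there. It is, of course, not the elliptic solver itself, and the loop over \(V_{0}\) is an arbitrary choice made only for illustration.

```cpp
#include <cmath>
#include <cstdio>

// Evaluate the exact 2RNBHs quantities of Eq. (C.6) for given rod
// coordinates (a, b) and a range of electrostatic potentials V0.
// These are the reference ("theory") values against which the numerical
// solutions are compared in Fig. 8.
int main() {
    const double a = 0.1, b = 0.221;          // rod coordinates as in Fig. 8
    const double pi = std::acos(-1.0);
    for (int k = 0; k <= 8; ++k) {
        const double V0 = 0.1 * k;
        const double d2 = (a - b) * (a - b);
        const double M  = d2 / b * (1.0 + V0 * V0) / (1.0 - V0 * V0);
        const double Qe = 2.0 * d2 / b * V0 / (1.0 - V0 * V0);
        const double AH = 8.0 * pi * d2 * d2 * (a * a + b * b)
                          / (b * b * (a + b) * (a + b))
                          / std::pow(1.0 - V0 * V0, 2);
        const double TH = b * (a + b) * (a + b)
                          / (8.0 * pi * d2 * (a * a + b * b))
                          * std::pow(1.0 - V0 * V0, 2);
        std::printf("V0=%.1f  M=%.5f  Qe=%.5f  AH=%.5f  TH=%.5f\n",
                    V0, M, Qe, AH, TH);
    }
    return 0;
}
```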
2309.14311
Parallelizing a 1-Dim Nagel-Schreckenberg Traffic Model
The Nagel-Schreckenberg model is a stochastic one-dimensional traffic model. In this assignment, we guide students through the process of implementing a shared-memory parallel and reproducible version of an existing serial code that implements this model, and to analyze its scaling behavior. One of the key elements in this traffic model is the presence of randomness, without which it would lack realistic phenomena such as traffic jams. Its implementation thus requires techniques associated with Monte Carlo simulations and pseudo-random number generation (PRNG). PRNGs are notoriously tricky to deal with in parallel when combined with the requirement of reproducibility. This assignment was created for the graduate course PHY1610 Scientific Computing for Physicists at the University of Toronto, which had its origin in the training program of the SciNet HPC Consortium, and is also very suitable for other scientific disciplines. Several variations of the assignment have been used over the years.
Ramses van Zon, Marcelo Ponce
2023-09-25T17:30:47Z
http://arxiv.org/abs/2309.14311v1
# Parallelizing a 1-Dim Nagel-Schreckenberg Traffic Model

###### Abstract.

The Nagel-Schreckenberg model is a stochastic one-dimensional traffic model (Bogorst et al., 2010). In this assignment, we guide students through the process of implementing a shared-memory parallel and reproducible version of an existing serial code that implements this model, and of analyzing its scaling behavior. One of the key elements in this traffic model is the presence of randomness, without which it would lack realistic phenomena such as traffic jams. Its implementation thus requires techniques associated with _Monte Carlo_ simulations and _pseudo-random number generation_ (PRNG). PRNGs are notoriously tricky to deal with in parallel when combined with the requirement of reproducibility. This assignment was created for the graduate course _PHY1610 Scientific Computing for Physicists_ at the University of Toronto, which had its origin in the training program of the SciNet HPC Consortium, and is also very suitable for other scientific disciplines. Several variations of the assignment have been used over the years.

Keywords: parallel programming, random numbers, reproducibility, simulation
## 1. Rationale

The main rationale of this assignment is to present students with a time-stepping, stochastic simulation and guide them through the process of creating a parallel implementation. In this case, the system to simulate is the one-dimensional Nagel-Schreckenberg traffic model (Bogorst et al., 2010). Simulating this model requires using pseudo-random number generators (Bogorst et al., 2010) in parallel, a tricky and often overlooked topic in scientific computing courses. We created this assignment for the "Scientific Computing" Physics course taught to graduate students at the University of Toronto, Canada. This course aims to teach students programming skills to develop scientific applications, using C/C++, best practices in software engineering, use of well-established libraries, and to train them in parallel computing techniques such as shared-memory programming (i.e. OpenMP) and distributed-memory programming (i.e. MPI). The course, which consistently gets positive evaluations, is highly practical and applied, requiring students to develop code on our teaching cluster1. This course originated in the training program of the SciNet HPC Consortium2. Because of this, the course is also suitable for other scientific disciplines and many of its topics also fit in an undergraduate curriculum.

## 2. Concepts

The implementation of the Nagel-Schreckenberg traffic model requires _Monte Carlo_ techniques, which in turn require a _pseudo-random number generator_ (PRNG). Both of these are topics covered in our course, and we use this model because it is an excellent and easily relatable example of a stochastic simulation. For the assignment, a starter code in C++ is given, and _OpenMP_ should be used to parallelize the code on _shared-memory multi-core computers_. One of the nice features of this problem is that it can be solved using either a _grid_ representation or an _agent-based_ one. The grid representation assigns a value to every point on the circular road of length \(L\), while the agent-based implementation keeps track of the positions and velocities of the \(N\) cars as (two) vectors of length \(N\) on that circular road. Each implementation has its advantages and disadvantages, but in particular the agent-based approach significantly simplifies the parallelization of PRNG. Pseudo-random numbers are generated by sequentially deriving a number from an internal state that gets updated with every next number. Before drawing the first number, the state is initialized from a 'seed' value, which is often a single integer. The state update algorithm is deterministic, and therefore the sequence is reproducible if the same seed is used.
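To fix ideas, a minimal serial sketch of one update sweep in the agent-based representation might look as follows. The function name, the data layout, and the choice of drawing exactly one random number per car per sweep are our own illustrative assumptions and need not match the starter code; the four update rules (acceleration, slowing down to the gap, random braking with probability \(p\), and movement) are the standard Nagel-Schreckenberg rules, with parameters taken from Fig. 1.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// One serial sweep of the Nagel-Schreckenberg model, agent-based:
// x[i] and v[i] are the position and velocity of car i on a ring of length L,
// with cars stored in cyclic order along the ring. Drawing one random number
// per car per sweep keeps the consumption of the PRNG sequence predictable,
// which is what makes "fast-forwarding" useful when the loop over cars is
// later distributed over OpenMP threads.
void nasch_sweep(std::vector<int>& x, std::vector<int>& v,
                 int L, int vmax, double p, std::minstd_rand& rng)
{
    const int n = static_cast<int>(x.size());
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (int i = 0; i < n; ++i) {
        const int gap = (x[(i + 1) % n] - x[i] + L) % L - 1; // empty cells ahead
        const double r = u(rng);                             // one draw per car
        v[i] = std::min(v[i] + 1, vmax);                     // 1. accelerate
        v[i] = std::min(v[i], gap);                          // 2. avoid collision
        if (r < p) v[i] = std::max(v[i] - 1, 0);             // 3. random braking
    }
    for (int i = 0; i < n; ++i)
        x[i] = (x[i] + v[i]) % L;                            // 4. move
}

int main() {
    const int L = 1000, N = 200, vmax = 5;
    const double p = 0.13;                    // parameters as in Fig. 1
    std::vector<int> x(N), v(N, 0);
    for (int i = 0; i < N; ++i) x[i] = i * (L / N);  // evenly spaced start
    std::minstd_rand rng(13);                 // fixed seed -> reproducible run
    for (int t = 0; t < 100; ++t) nasch_sweep(x, v, L, vmax, p, rng);
    return 0;
}
```

In this sketch every braking decision is drawn from one shared generator, i.e. from a single random number sequence.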
The resulting sequence of numbers should nonetheless be nearly indistinguishable from being statistically independent and evenly distributed. The circular road is also an example of _periodic boundary conditions_, which have a wider applicability in computer simulations. In the course, students are made familiar with programming in C++, best practices in software development such as modularity, version control, unit testing, documentation, use of external libraries, make, file formats such as ASCII, binary, self-describing formats, etc. To be able to do this assignment, students should already have good working knowledge of C++ and how to use the C++11 standard random library. The starting code is moderately modular, so familiarity with the concepts of C++ headers and implementation files is helpful. Knowledge of OpenMP is required to do the assignment, including the parallel, for, and threadprivate compiler directives. The assignment was designed and tested using the _make_ utility and a simple makefile for building and running the software. While we strongly recommend using this approach, it is possible to use e.g. cmake as well, or to compile and run the code manually. The code does not require external libraries. One of the trickiest parts in the parallel implementation of this model, and the one highlighted in this assignment, is dealing with the PRNG in parallel in such a way that the output of the parallel code _exactly reproduces_ that of the serial code. Scientific **reproducibility** is a very urgent and critical topic nowadays in many scientific disciplines that heavily rely on computational technologies. Without this requirement, one possible solution to parallelize the code and its PRNG function would be to have each of the threads sample from its own random number generator, starting from different seeds, thus having a different random number sequence in each thread. However, this leads to different results when the number of threads used changes. Although this may be tolerable in some situations, reproducibility between runs using various numbers of threads is a requirement of this assignment. Reproducibility requires there to be only one sequence of random numbers from which to sample, shared among the different threads, so that one gets the same results on the same hardware, independent of the number of threads. While generating a random number sequence is generally a serial process and therefore not parallelizable, for several random number generators there are algorithms for quickly "moving ahead". Because these are not yet implemented in the C++ standard random library, the starting code of this assignment provides an implementation of this fast-forward algorithm for one of the C++ linear congruential generators.

## 3. Limitations

Depending on the parameters, software implementation and characteristics of the hardware, the amount of computation can end up similar to or even smaller than the cost of I/O operations - e.g. if we decide to save data at higher rates. We provide examples of parameter files that help emphasize the relevance of the computational part of the simulation, and it is possible to switch off output completely as well. The scaling behavior that students may observe depends highly on the level to which they managed to reduce the cost of fast-forwarding the random number generators and other serial parts of the code. Scaling beyond a single socket can be less than ideal due to NUMA effects.
Finally, using more virtual cores than physical ones on CPUs which support "simultaneous multithreading" (also known as "hyperthreading") should be avoided; even if there is a small benefit, the timing results would be hard to interpret.

## 4. Variations

In this assignment, we focus on the parallelization of the algorithm, in particular the PRNG and parallelization aspects of its implementation using a shared-memory approach such as OpenMP. In other variations that we have used in the past, we have asked students to create their own serial implementation from scratch, or to adapt the output to use the NetCDF library. This problem offers many other opportunities for variation that address other HPC aspects. One could ask students to link to another PRNG library, to implement a distributed-memory parallel code using the Message Passing Interface, to port the code to use Graphics Processing Units, to run a series of parameter study cases and take advantage of embarrassingly parallel jobs, to perform scaling analysis, to do a performance analysis by profiling the code, to change boundary conditions, etc.

Figure 1: Visualization of the 1-dim simulation of the Nagel-Schreckenberg traffic model – with 200 cars, length of 1000, probability \(p=0.13\) and maximum velocity \(v_{max}=5\). The figure shows the emergence of irregularities (“traffic jams”) in the flow of the vehicles, and how they propagate backwards in position and forward in time. Without the random contributions to the model, these irregularities would not occur.
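To make the "moving ahead" idea of Section 2 concrete, the following is a minimal illustrative sketch of an \(O(\log k)\) jump-ahead for a linear congruential generator, written here for the parameters of std::minstd_rand (multiplier 48271, increment 0, modulus \(2^{31}-1\)); the starter code's actual fast-forward routine may be organized differently.

```cpp
#include <cstdint>
#include <iostream>
#include <random>

// Advance an LCG x -> (a*x + c) mod m by k steps in O(log k) operations,
// by binary exponentiation of the affine update map. Keeping c in the
// formulas makes the sketch valid for any linear congruential generator,
// even though std::minstd_rand itself has c = 0.
struct LcgJump {
    std::uint64_t a = 48271, c = 0, m = 2147483647ULL;  // minstd_rand parameters
    std::uint64_t jump(std::uint64_t state, std::uint64_t k) const {
        std::uint64_t A = 1, C = 0;   // accumulated map: x -> A*x + C (mod m)
        std::uint64_t h = a, f = c;   // current binary power of the one-step map
        while (k > 0) {
            if (k & 1) {              // fold this power into the result
                A = (A * h) % m;
                C = (C * h + f) % m;
            }
            f = (f * h + f) % m;      // square the power map
            h = (h * h) % m;
            k >>= 1;
        }
        return (A * state + C) % m;
    }
};

int main() {
    const std::uint64_t seed = 12345, k = 1000;
    std::minstd_rand gen(seed);
    std::uint64_t last = 0;
    for (std::uint64_t i = 0; i < k; ++i) last = gen();   // k serial draws
    const std::uint64_t jumped = LcgJump{}.jump(seed, k); // same point, O(log k)
    std::cout << (last == jumped ? "match" : "mismatch") << "\n";
    return 0;
}
```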
2309.04102
Bounds on tower mass scales in the presence of throats of different warping
In Type IIB flux compactification realizing the metastable de Sitter (dS) vacuum, the uplift potential can be generated by $\overline{\rm D3}$-branes at the tip of Klebanov-Strassler throat. Then the uplift potential obeys the scaling law with respect to the tower mass scale $m_{\rm sc}$, which can be the Kaluza-Klein (KK) mass scale associated with the throat containing $\overline{\rm D3}$-branes or the bulk tower mass scales, depending on the warping of the throat. On the other hand, in the presence of another throat of stronger warping, the KK mass scale associated with this throat is lower than $m_{\rm sc}$. Nevertheless, the Higuchi bound and the condition that the tower mass scale is higher than the gravitino mass provide the upper bound on $m_{\rm sc}$ determined by the lowest tower mass scale (or gravitino mass). This bound also can be interpreted as the lower bound on the lowest tower mass scale determined by $m_{\rm sc}$. We investigate this bound in detail when the throat containing $\overline{\rm D3}$-branes is strongly and weakly warped, respectively.
Min-Seok Seo
2023-09-08T03:44:37Z
http://arxiv.org/abs/2309.04102v2
# Bounds on tower mass scales in the presence of throats of different warping

###### Abstract

In Type IIB flux compactification realizing the metastable de Sitter (dS) vacuum, the uplift potential can be generated by \(\overline{\rm D3}\)-branes at the tip of Klebanov-Strassler throat. Then the uplift potential obeys the scaling law with respect to the tower mass scale \(m_{\rm sc}\), which can be the Kaluza-Klein (KK) mass scale associated with the throat containing \(\overline{\rm D3}\)-branes or the bulk tower mass scales, depending on the warping of the throat. On the other hand, in the presence of another throat of stronger warping, the KK mass scale associated with this throat is lower than \(m_{\rm sc}\). Nevertheless, the Higuchi bound and the condition that the tower mass scale is higher than the gravitino mass provide the upper bound on \(m_{\rm sc}\) determined by the lowest tower mass scale (or gravitino mass). This bound also can be interpreted as the lower bound on the lowest tower mass scale determined by \(m_{\rm sc}\). We investigate this bound in detail when the throat containing \(\overline{\rm D3}\)-branes is strongly and weakly warped, respectively.

## 1 Introduction

The swampland program [1] has provided a set of conjectured constraints that the low energy effective field theory (EFT) must satisfy in order to have a UV completion in quantum gravity (for reviews, see, e.g., [2, 3, 4, 5, 6, 7]). Among various proposals in the program, the instability of de Sitter (dS) space formulated by the dS swampland conjecture is of particular interest [8, 9, 10] (see also [11, 12, 13]) as string realization of the metastable dS vacuum [14, 15] requires a tuning between several ingredients such as flux compactification, non-perturbative effect, and uplift, which has led to the debate on the consistency of the model. In the justification of the dS swampland conjecture the distance conjecture plays the crucial role [13]. It states that the infinite distance limit of the moduli space corresponds to the corner of the landscape, at which the EFT becomes invalid as the mass scale of a tower of states decreases rapidly [16]. Such a descent of a tower of states implies the rapid increase in the number of low energy degrees of freedom, hence their production in dS space can violate the covariant entropy bound [17] given by (horizon area)/4 (see also [18, 19, 20, 21, 22]). The dS swampland conjecture raised the suspicion that our universe, well described by a positive cosmological constant, \(\Lambda=3m_{\rm Pl}^{2}H^{2}\), where \(H\) is the Hubble parameter (the inverse of the horizon radius), is close to the swampland in the moduli space. In this regard, there have been attempts to formulate the closeness of our universe to the swampland in the form of the scaling law, reflecting the distance conjecture. The anti-dS (AdS)/dS distance conjecture focuses on the smallness of \(\Lambda\sim 10^{-123}m_{\rm Pl}^{4}\) and suggests the relation \(m\sim|\Lambda|^{\alpha}\), where \(m\) is some tower mass scale [23]. If \(m\) is the Kaluza-Klein (KK) mass scale, the lower bound on \(\alpha\) is given by \(1/4\) [24], which can be obtained from the observational bound on the size of extra dimensions [25, 26]. Moreover, any nonzero mass of the state in dS space is required to be larger than the Higuchi bound [27] (see [28] for a review) given by \(\sqrt{s(s-1)}H\sim\Lambda^{1/2}\), where \(s\) is the spin of the state, indicating that \(\alpha\) is smaller than \(1/2\).
On the other hand, in the effective supergravity description of string theory, \(|\Lambda|\) of the AdS vacuum (for the metastable dS vacuum, the size of the AdS vacuum energy density before uplift, which will be denoted by \(V_{\rm AdS}\)) is smaller than \(3m_{\rm Pl}^{2}m_{3/2}^{2}\), where \(m_{3/2}\) is the gravitino mass and the inequality is saturated in the supersymmetric case. The gravitino distance conjecture claims that it may be \(m_{3/2}\) rather than \(|\Lambda|\) which obeys the scaling law [29, 30, 31]. In the string model realizing the metastable dS vacuum based on Type IIB flux compactification, when the uplift potential \(V_{\rm up}\) is generated by \(\overline{\rm D3}\)-branes at the tip of the Klebanov-Strassler throat [32], it turns out that the size of \(V_{\rm up}\) obeys the scaling law with respect to the tower mass scale [33]. If the throat containing \(\overline{\rm D3}\)-branes is strongly warped, the mass scale of the KK modes localized in this throat region satisfies \(m_{\rm KK}^{\rm throat}\sim V_{\rm up}^{1/4}\) [34]. We note that in the presence of a number of throats [35], the KK mass scale associated with the throat of the strongest warping is the lowest tower mass scale. Thus, if the warping of the throat containing \(\overline{\rm D3}\)-branes is the strongest, the scaling law relates \(V_{\rm up}\) and the lowest tower mass scale. Moreover, the exponent \(1/4\) is nothing more than the inverse of the number of noncompact dimensions over which \(\overline{\rm D3}\)-branes are extended. In contrast, when the warping is extremely weak, both the bulk tower mass scales and \(V_{\rm up}\) scale with respect to the size of the internal volume, so we can find the scaling law between them [33]. More concretely, the string scale \(m_{s}\) satisfies \(m_{s}\sim V_{\rm up}^{1/4}\), where the exponent is the inverse of the number of noncompact dimensions. In addition, the bulk KK mass scale, the lowest tower mass scale in the absence of the throat of stronger warping, also obeys the scaling law, but the exponent in this case is given by \(1/3\): \(m_{\rm KK}^{\rm bulk}\sim V_{\rm up}^{1/3}\).
We expect that our results may be useful in the phenomenological study on the structure of extra dimensions. This article is organized as follows. Section 2 is devoted to the brief reviews on the Higuchi bound and the connection between tower mass scales and \(V_{\rm up}\), which provide the background for our discussion. Based on them, in Section 3, we obtain the upper bound on \(m_{\rm sc}\) in terms of the lower tower mass scale when the throat containing \(\overline{\rm D3}\)-branes is strongly and weakly warped, respectively. Then we conclude. ## 2 Reviews on Higuchi bound, tower mass scale and uplift ### Higuchi bound We begin our review on the Higuchi bound with the discussions on the unitary irreducible representations (UIRs) of SO(1,4) dS isometry group and their masses. For details, we refer the reader to [28] and references therein. The'mass' of the state in the UIR is determined by the quadratic Casimir, \[Q=-\frac{1}{2}L_{AB}L^{AB}, \tag{1}\] where \(L_{AB}\)\((A,B=0,1,\cdots,4)\) are SO(1,4) generators, as the field equation of motion is reduced to its eigenvalue equation, \[\Big{(}Q-\langle Q\rangle\Big{)}{\cal K}(x)=0. \tag{2}\] Here the field \({\cal K}(x)\) carries the tensor or spinor indices corresponding to the UIR to which the field belongs. The eigenvalue of \(Q\) is given by [36] (see also Section 12 of [28]) \[\langle Q\rangle=-s(s+1)-(q+1)(q-2), \tag{3}\] where \(s\) is interpreted as a spin in the \(H\to 0\) limit and \(q\in\mathbb{C}\) is determined by the type of the representation. 1 The types of SO(1,4) UIRs are classified in [36], and it turns out that under the Poincare contraction, i.e., in the \(H\to 0\) limit, a particular set of UIRs can be reduced to the wavefunction in Minkowski space in a sensible way (the positive and negative frequency modes are well separated) [37] (see also Section 14 of [28]) : Footnote 1: The \(H\to 0\) limit is also given by \(\langle Q\rangle=-s(s+1)+\frac{9}{4}+\nu^{2}\), where \(\nu\geq 0\) for an integer \(s\) and \(j_{r}(j_{r}+1)\) (\(j_{\ell},j_{r}\in\mathbb{Z}/2\)), respectively, the irreducible representation of SO(4) is characterized by \((j_{\ell},j_{r})\). Then \(s\) is the infimum (greatest lower bound) of \(j_{\ell}+j_{r}\), which is interpreted as a spin in the \(H\to 0\) limit. On the other hand, SO(1,4)/SO(4) generators raise/lower the \(j_{\ell}\) and \(j_{r}\) values, and their contributions to the SO(1,4) quadratic Casimir \(Q\) determine the value of \(q\). For example, in the discrete series of representations, we can find two dual representations in which the values of \((\max(j_{r}-j_{\ell}),\min(j_{r}-j_{\ell}))\) for the states satisfying \(j_{r}+j_{\ell}=s\) are given by \((q,s)\) and \((-s,-q)\), respectively. * The principal series of representations : In this case, \(q=\frac{1}{2}+i\nu\), where \(\nu\geq 0\) for an integer \(s\) and \(\nu>0\) for a half-integer \(s\), such that \[\langle Q\rangle=-s(s+1)+\frac{9}{4}+\nu^{2}.\] (4) The representation of this type is also called the massive representation since in the \(H\to 0\) limit, \(\nu H\) is reduced to the mass of the field. * The complementary series of representations : The value of \(s\) in this case is only an integer. 
Moreover, \(q=\frac{1}{2}+\nu\), where \(\nu\in\mathbb{R}\) and \[0<|\nu|<\frac{3}{2}\qquad\mbox{for}\quad s=0,\] (5) \[0<|\nu|<\frac{1}{2}\qquad\mbox{for}\quad s=1,2,3,\cdots,\] giving \[\langle Q\rangle=-s(s+1)+\frac{9}{4}-\nu^{2}.\] (6) The \(H\to 0\) limit of the representation of this type is sensible when \(s=0\) and \(\nu=1/2\) (thus \(q=1\)), which corresponds to the conformally coupled massless spin-0 field. * The discrete series of representations : In this case, \(q=0,1,\cdots,s-1,s\) for an integer \(s\) and \(q=\frac{1}{2},\cdots,s-1,s\) for a half-integer \(s\). The sensible representation in the \(H\to 0\) limit requires that \(s=q>0\), which is reduced to the massless spin-\(s\) field. Moreover, the quadratic Casimir of dS isometry is not exactly identified with the Laplacian : the quadratic Casimir eigenvalue equation given by (2) is written as [38] \[\big{(}\Box-H^{2}s(s+2)-H^{2}\langle Q\rangle\big{)}{\cal K}(x)=0. \tag{7}\] On the other hand, from the fact that the representation belonging to the discrete series satisfying \(s=q>0\) becomes the massless spin-\(s\) field in the \(H\to 0\) limit, it was suggested to define the'mass' of the state in dS space by [38] \[m^{2}=H^{2}\big{(}\langle Q\rangle-\langle Q_{s=q}\rangle\big{)}=H^{2}\big{(} \langle Q\rangle+2(s^{2}-1)\big{)}=(s-q)(s+q-1)H^{2}, \tag{8}\] such that \(m^{2}=0\) for \(s=q>0\). For the representations in the principal series, the mass defined in this way can be written as \(m^{2}=\left(s-\frac{1}{2}\right)^{2}H^{2}+\nu^{2}H^{2}\) (note that \(q=\frac{1}{2}+i\nu\)), so for a finite value of \(s\), \(m^{2}\) is reduced to \(\nu^{2}H^{2}\) in the \(H\to 0\) limit. In terms of \(m\), (2) becomes \[\left(\Box-(2-s(s-2))H^{2}-m^{2}\right)\!{\cal K}(x)=0, \tag{9}\] which indeed coincides with the equation of motion used in the previous literatures, e.g., [27, 39, 40, 41]. It is remarkable that apart from the issue of the sensible \(H\to 0\) limit, regarding (8) as a mass of any state in the UIR, one finds that the nonzero value of mass has a lower bound called the Higuchi bound. More concretely, the value of \(m^{2}\) turns out to be either zero or larger than \(s(s-1)H^{2}\), which is meaningful for \(s>1\). This bound is saturated by the representations belonging to the complementary series with \(\nu=\pm\frac{1}{2}\) (\(q=1,0\)) and the discrete series with \(q=0\). For instance, for \(s=2\), the lower bound on the nonzero value of \(m^{2}\) is given by \(2H^{2}\)[27] (see also [39] for \(s>2\)). ### Tower mass scales in the presence of throat and uplift potential Throughout this article, we consider Type IIB Calabi-Yau orientifold compactifications containing a number of Klebanov-Strassler throats, in which the dilaton and complex structure moduli are stabilized by fluxes [42]. The Kahler moduli are stabilized by non-perturbative effect [14] and possibly the additional \(\alpha^{\prime}\) corrections [15], and the potential stabilizing all the moduli is uplifted by \(\overline{\rm D3}\)-branes at the tip of one of throats. The string scale \(m_{s}\), the mass scale of a tower of string excitations is given by \(1/(2\pi\sqrt{\alpha^{\prime}})\). Since the ten-dimensional gravitational coupling is given by \(\kappa_{10}^{2}=g_{s}^{2}/(4\pi m_{s}^{8})\), denoting the volume of the internal manifold by \({\cal V}/m_{s}^{6}\), we obtain the relation between \(m_{s}\) and the four-dimensional Planck scale \(m_{\rm Pl}\) : \[m_{s}=\frac{g_{s}}{\sqrt{4\pi{\cal V}}}m_{\rm Pl}. 
\tag{10}\] Moreover, under the compactification, there can be various KK mass scales depending on where the KK modes are localized. The mass scale of the KK modes in the bulk is given by \[m_{\rm KK}^{\rm bulk}=\frac{2\pi m_{s}}{{\cal V}^{1/6}}=\sqrt{ \pi}\frac{g_{s}}{{\cal V}^{2/3}}m_{\rm Pl}. \tag{11}\] On the other hand, the mass scale of the KK modes localized in the throat region is determined by how strong the warping of the throat is. To see this, we note that the metric near the tip of the throat is given by \[ds^{2}=e^{2\Omega_{4}(x,y)}g_{\mu\nu}dx^{\mu}dx^{\nu}+e^{2\Omega _{6}(x,y)}g_{mn}dx^{m}dx^{n}, \tag{12}\] where \[e^{2\Omega_{4}(x,y)}=e^{2A(y)}e^{2\Omega(x)},\qquad e^{2\Omega_{ 6}(x,y)}=e^{-2A(y)}\sigma(x)^{1/2}. \tag{13}\] This throat geometry is supported by fluxes of \(F_{3}\) and \(H_{3}\), the flux quanta of which in string units will be denoted by \(M\) and \(K\), respectively. In the metric, \(\sigma(x)\) is the scalar part of the volume modulus, the vacuum expectation value of which satisfies \({\cal V}=\langle\sigma^{3/2}\rangle\) under the normalization \(\int d^{6}y\sqrt{g_{6}}e^{-4A}=m_{s}^{-6}\). Then the Weyl factor \(e^{2\Omega(x)}\) can be written as \[e^{2\Omega(x)}=\frac{{\cal V}\ell_{s}^{6}}{\sigma(x)^{3/2}\int d^{6}y\sqrt{g_{6 }}e^{-4A}}=\frac{\langle\sigma^{3/2}\rangle}{\sigma(x)^{3/2}}, \tag{14}\] such that \(\langle e^{2\Omega(x)}\rangle=1\). The warping of the throat is typically parametrized by the 'warp factor' defined by \(e^{-4A}\), which can be written as \[e^{-4A(y)}=1+\frac{e^{-4A_{0}(y)}}{\sigma(x)}, \tag{15}\] where \[\begin{split}& e^{-4A_{0}(y)}=2^{2/3}\frac{(g_{s}M)^{2}}{(2\pi)^{ 4}|z|^{4/3}}I(\eta),\\ & I(\eta)=\int_{\eta}^{\infty}dx\frac{x\coth x-1}{\sinh^{2}x}( \sinh(2x)-2x)^{1/3}.\end{split} \tag{16}\] Then the throat KK mass scale is given by \(m_{\rm KK}^{\rm throat}=\langle e^{\Omega_{4}}\rangle/(\langle e^{\Omega_{6}} \rangle R)=\langle e^{2A}\rangle/(R{\cal V}^{1/6})\), where \(R\) is the typical length scale of the throat : \(R\sim|z|^{1/3}/m_{s}\) for the A-cycle and \(R\sim\eta_{\rm UV}|z|^{1/3}/m_{s}\) for the B-cycle, where \(\eta_{\rm UV}\sim\log\left(\frac{1}{|z|}\right)=\frac{2\pi}{g_{s}}\frac{K}{M }(>1)\) is the length of the throat. When the throat is strongly warped, \(e^{-4A}\gg 1\) is satisfied, and the throat KK mass scale is highly redshifted by the warp factor [43] (see also [44, 45] for recent discussions) : \[m_{\rm KK}^{\rm throat}=\frac{2^{1/2}3^{1/6}\pi^{3/2}}{I(0)^{1/2}}\frac{|z|^{1/ 3}}{M{\cal V}^{1/3}}m_{\rm Pl}. \tag{17}\] Moreover, for the KK modes localized along the B-cycle of the throat, the corresponding KK mass scale is additionally suppressed by \(\eta_{\rm UV}\)[44]. Therefore, the KK mass scale associated with the throat of the strongest warping (that is, the smallest \(|z|\) and the largest \(\eta_{\rm UV}\)) is typically the lowest tower mass scale. In contrast, when the throat is extremely weakly warped, i.e., \(e^{-4A}\simeq 1\), the throat KK mass scale is given by \[m_{\rm KK}^{\rm throat}=\frac{g_{s}}{|z|^{1/3}{\cal V}^{2/3}}m_{\rm Pl}. \tag{18}\] Comparing this with (11), one finds that for the extremely weakly warped throat, the bulk KK mass scale \(m_{\rm KK}^{\rm bulk}\) is the lowest tower mass scale. Now we move onto the uplift potential \(V_{\rm up}\). 
When \(\overline{\rm D3}\)-branes at the tip of the throat are extended over the noncompact four-dimensional spacetime, the induced metric is given by \[ds_{\overline{\rm D3}}^{2}=e^{2\Omega_{4}(x,y)}g_{\mu\nu}dx^{\mu}dx^{\nu}=e^{2 A(y)}e^{2\Omega(x)}g_{\mu\nu}dx^{\mu}dx^{\nu}, \tag{19}\] from which \(V_{\rm up}\) is written as \[V_{\rm up}=2p\frac{T_{3}}{g_{s}}e^{4\Omega_{4}(x,y)}=4\pi p\frac{m_{s}^{4}}{g_ {s}}e^{4A(y)}e^{4\Omega(x)}, \tag{20}\] where \(T_{3}=2\pi m_{s}^{4}\) is the brane tension and \(p\) is the number of \(\overline{\rm D3}\)-branes. For the strongly warped throat (\(e^{-4A}\gg 1\)), we obtain \[V_{\rm up}=\frac{2^{4/3}\pi^{3}}{I(0)}\frac{g_{s}p}{M^{2}}\frac{|z|^{4/3}}{{ \cal V}^{4/3}}m_{\rm Pl}^{4}. \tag{21}\] Comparing this with (17), one finds the scaling law, \[m_{\rm KK}^{\rm throat}\sim\frac{1}{g_{s}^{1/4}M^{1/2}p^{1/4}}\langle V_{\rm up }\rangle^{1/4}, \tag{22}\] where the exponent \(1/4\) is the inverse of the number of noncompact dimensions over which \(\overline{\rm D3}\)-branes are extended. Meanwhile, when the throat is weakly warped (\(e^{-4A}\simeq 1\)), the uplift potential is given by \[V_{\rm up}=\frac{g_{s}^{3}}{4\pi}\frac{p}{{\cal V}^{2}}m_{\rm Pl}^{4}, \tag{23}\] from which one finds two scaling laws with respect to \(m_{s}\) and \(m_{\rm KK}^{\rm bulk}\) given by (10) and (11), respectively : \[\begin{split}& m_{s}\sim\Big{(}\frac{g_{s}}{4\pi p}\Big{)}^{1/4} \langle V_{\rm up}\rangle^{1/4},\\ & m_{\rm KK}^{\rm bulk}\sim\frac{1}{p^{1/3}}\Big{\langle}\frac{ V_{\rm up}}{m_{\rm Pl}^{4}}\Big{\rangle}^{1/3}m_{\rm Pl}.\end{split} \tag{24}\] We note that the exponent in the scaling law with respect to \(m_{s}\) is \(1/4\), the inverse of the number of noncompact dimensions. Moreover, \(m_{\rm KK}^{\rm bulk}\) is the lowest tower mass scale. ## 3 Relation between two tower mass scales In string model, the metastable dS vacuum is realized by uplift of the AdS vacuum, indicating that the positive cosmological constant \(\Lambda\) can be written as \[\Lambda=3m_{\rm Pl}^{2}H^{2}=-|V_{\rm AdS}|+V_{\rm up}. \tag{25}\] As reviewed in the previous section, \(V_{\rm up}\) obeys the scaling law with respect to some tower mass scale which will be denoted by \(m_{\rm sc}\), \[V_{\rm up}=v_{0}m_{\rm Pl}^{4}\Big{(}\frac{m_{\rm sc}}{m_{\rm Pl}}\Big{)}^{1/ \alpha}. \tag{26}\] For the strongly warped throat, \(m_{\rm sc}\) is identified with \(m_{\rm KK}^{\rm throat}\) given by (17) and \(\alpha=1/4\). For the extremely weakly warped throat, \(m_{\rm sc}\) corresponds to \(m_{s}\) given by (10) (\(\alpha=1/4\)) or \(m_{\rm KK}^{\rm bulk}\) given by (11) (\(\alpha=1/3\)). We now consider another throat of stronger warping such that the mass scale \(m_{0}\) of KK modes localized in this throat is lower than \(m_{\rm sc}\). Denoting the conifold modulus and the flux quanta of \(F_{3}\) associated with the throat of the stronger warping by \(z_{\ell}\) and \(M_{\ell}\), respectively, we obtain \[m_{0}\simeq\frac{|z_{\ell}|^{1/3}}{M_{\ell}{\cal V}^{1/3}}m_{\rm Pl}, \tag{27}\] just like (17). The Higuchi bound imposes that the nonzero \(m_{0}\) is larger than \(\sqrt{s(s-1)}H\), or equivalently, \(H<m_{0}/\sqrt{s(s-1)}\). 
Combining this with the fact that \(|V_{\rm AdS}|\) is smaller than \(3m_{\rm Pl}^{2}m_{3/2}^{2}\), (25) provides the inequality \[3m_{\rm Pl}^{2}m_{3/2}^{2}>|V_{\rm AdS}|>v_{0}m_{\rm Pl}^{4}\Big{(}\frac{m_{\rm sc}}{m_{\rm Pl}}\Big{)}^{1/\alpha}-\frac{3}{s(s-1)}m_{\rm Pl}^{2}m_{0}^{2}, \tag{28}\] which relates three scales, \(m_{\rm sc}\), \(m_{0}\), and \(m_{3/2}\). Noting that \(m_{3/2}\) is the characteristic mass scale of the four-dimensional supergravity formalism, we expect that this is lower than \(m_{0}\), the KK mass scale implying the existence of extra dimensions, hence \(3m_{\rm Pl}^{2}m_{0}^{2}>3m_{\rm Pl}^{2}m_{3/2}^{2}\). Then we obtain the inequality \[3\Big{(}1+\frac{1}{s(s-1)}\Big{)}m_{\rm Pl}^{2}m_{0}^{2}>v_{0}m_{\rm Pl}^{4}\Big{(}\frac{m_{\rm sc}}{m_{\rm Pl}}\Big{)}^{1/\alpha}, \tag{29}\] which indicates that \(m_{\rm sc}\) (\(m_{0}\)) has the upper (lower) bound determined by \(m_{0}\) (\(m_{\rm sc}\)). We investigate this inequality in detail when the throat containing \(\overline{\rm D3}\)-branes is strongly and weakly warped, respectively. ### Strong warping case (\(e^{-4A}\gg 1\)) When the throat containing \(\overline{\rm D3}\)-branes is strongly warped, the scaling law (22) is satisfied, indicating \(m_{\rm sc}=m_{\rm KK}^{\rm throat}\) (given by (17)), \(\alpha=1/4\), and \(v_{0}\simeq g_{s}pM^{2}\). Then (28) is written as \[3m_{\rm Pl}^{2}m_{3/2}^{2}>|V_{\rm AdS}|>(g_{s}pM^{2})(m_{\rm KK}^{\rm throat})^{4}-\frac{3}{s(s-1)}m_{\rm Pl}^{2}m_{0}^{2}, \tag{30}\] and (29) becomes \[m_{\rm KK}^{\rm throat}<\Big{(}\frac{3(s^{2}-s+1)}{(g_{s}pM^{2})s(s-1)}\Big{)}^{1/4}m_{\rm Pl}^{1/2}m_{0}^{1/2}, \tag{31}\] or equivalently, \[\Big{(}\frac{m_{0}}{m_{\rm KK}^{\rm throat}}\Big{)}^{2}>(g_{s}pM^{2})\frac{s(s-1)}{3(s^{2}-s+1)}\Big{(}\frac{m_{\rm KK}^{\rm throat}}{m_{\rm Pl}}\Big{)}^{2}. \tag{32}\] We may rewrite this bound in terms of the explicit expressions (17) for \(m_{\rm KK}^{\rm throat}\) and (27) for \(m_{0}\) to obtain the constraint on the warping of throats : \[\Big{(}\frac{|z_{\ell}|}{|z|}\Big{)}^{1/3}\frac{M}{M_{\ell}}>\Big{(}\frac{(g_{s}pM^{2})s(s-1)}{3(s^{2}-s+1)}\Big{)}^{1/2}\frac{|z|^{1/3}}{M{\cal V}^{1/3}}. \tag{33}\] The bound (31) shows that even if the throat containing \(\overline{\rm D3}\)-branes is not of the strongest warping, so that the associated KK mass scale is not the lowest tower mass scale, it cannot be arbitrarily higher than the lowest KK mass scale \(m_{0}\), but is bounded by \(\lesssim m_{\rm Pl}^{1/2}m_{0}^{1/2}\). This in turn means that \(m_{0}\) cannot be arbitrarily small, but has the lower bound determined by \(m_{\rm KK}^{\rm throat}\). Indeed, the size of deformation at the tip of the throat is given by \((g_{s}M)^{1/2}/(2\pi m_{s})\), 2 which is required to be much larger than the string length \(m_{s}^{-1}\) for the effective supergravity description to be valid. This indicates that \(g_{s}M\gg 1\). Moreover, we can further impose \(g_{s}M^{2}\gg p\) because otherwise the conifold modulus \(z\) is stabilized at \(0\)[47] (see, however, [48] for a recent counterargument). Both of these constraints impose that \(g_{s}pM^{2}\gg 1\), so our bound is noticeably stronger than the simple inequality \(m_{\rm KK}^{\rm throat}<{\cal O}(1)m_{\rm Pl}^{1/2}m_{0}^{1/2}\). Footnote 2: This comes from the fact that for the strongly warped throat, \(e^{-2A}\) is approximated by \(e^{-2A_{0}}/\sigma^{1/2}\) (see (15)).
As can be inferred from (16), it depends on \(|z|\) and \(\sigma\) through the combination \(1/(|z|^{2/3}\sigma^{1/2})\), which is cancelled by the prefactor \(\sigma^{1/2}\) in \(G_{mn}=e^{-2A}\sigma^{1/2}g_{mn}\) and the overall factor \(|z|^{2/3}\) in \(g_{mn}\)[46]. As a result, the overall factor in \(G_{mn}\) is given by \((g_{s}M)/(2\pi m_{s})^{2}\). In fact, (31) as the upper bound on \(m_{\rm KK}^{\rm throat}\) is useful when the size of \(\Lambda\) is not negligibly small, which may be realized in the inflationary cosmology. In contrast, \(\Lambda\) in our universe is as small as \(10^{-123}m_{\rm Pl}^{4}\) so it is reasonable to take \(|V_{\rm up}|\) to be much larger than \(\Lambda\). Since \(V_{\rm up}\simeq|V_{\rm AdS}|\) in this case, the condition \(|V_{\rm AdS}|<3m_{\rm Pl}^{2}m_{3/2}^{2}\) gives a more stringent bound, \[m_{\rm KK}^{\rm throat}\lesssim\Big{(}\frac{3}{g_{s}pM^{2}}\Big{)}^{1/4}m_{\rm Pl }^{1/2}m_{3/2}^{1/2}. \tag{34}\] But still, the bound (31) is valid as the lower bound on \(m_{0}\) determined by \(m_{\rm KK}^{\rm throat}\). In any case, it is remarkable that some intermediate new physics scale has the upper bound determined by another much lower scale. We can also compare the bound (31) with the bound obtained from the species scale \(\Lambda_{\rm sp}\)[49, 50], \[\Lambda_{\rm sp}=\frac{m_{\rm Pl}}{\sqrt{N_{\rm tot}}}, \tag{35}\] above which gravity is no longer weakly coupled to matter. Here \(N_{\rm tot}\) is the number of low energy degrees of freedom below \(\Lambda_{\rm sp}\), hence given by \[\begin{split}& N_{\rm tot}=N_{0}+N_{\rm KK},\\ & N_{0}=\frac{\Lambda_{\rm sp}}{m_{0}},\qquad N_{\rm KK}=\frac{ \Lambda_{\rm sp}}{m_{\rm KK}}.\end{split} \tag{36}\] For \(m_{0}\ll m_{\rm KK}\), we have \(N_{0}\gg N_{\rm KK}\) then \(\Lambda_{\rm sp}\simeq m_{\rm Pl}/\sqrt{N_{0}}\), from which we obtain \(\Lambda_{\rm sp}\simeq m_{\rm Pl}^{2/3}m_{0}^{1/3}\). Since \(m_{\rm KK}<\Lambda_{\rm sp}\), a condition \(m_{\rm KK}<m_{\rm Pl}^{2/3}m_{0}^{1/3}\) is satisfied, but this is less stringent than (31). ### Weak warping case (\(e^{-4A}\simeq 1\)) When the throat containing \(\overline{\rm D3}\)-branes is extremely weakly warped, two scaling laws given by (24) are satisfied : for \(m_{\rm sc}=m_{\rm KK}^{\rm bulk}\) (given by (11)) \(v_{0}\simeq p\) and \(\alpha=1/3\) while for \(m_{\rm sc}=m_{s}\) (given by (10)) \(v_{0}\simeq(4\pi p)/g_{s}\) and \(\alpha=1/4\). We first consider the case \(m_{\rm sc}=m_{\rm KK}^{\rm bulk}\), in which the inequality (28) is written as \[3m_{\rm Pl}^{2}m_{3/2}^{2}>|V_{\rm AdS}|>pm_{\rm Pl}(m_{\rm KK}^{ \rm bulk})^{3}-\frac{3}{s(s-1)}m_{\rm Pl}^{2}m_{0}^{2}. \tag{37}\] In the presence of an additional throat which is strongly warped, the KK modes localized in this throat provide the lowest tower mass scale given by (27). Requiring \(m_{0}>m_{3/2}\), we obtain \[3\Big{(}1+\frac{1}{s(s-1)}\Big{)}m_{\rm Pl}^{2}m_{0}^{2}>pm_{\rm Pl }(m_{\rm KK}^{\rm bulk})^{3}, \tag{38}\] or equivalently, \[m_{\rm KK}^{\rm bulk}<\Big{(}\frac{3(s^{2}-s+1)}{ps(s-1)}\Big{)} ^{1/3}m_{\rm Pl}^{1/3}m_{0}^{2/3}. \tag{39}\] Thus, \(m_{\rm KK}^{\rm bulk}\) has the upper bound depending on \(m_{0}\), with the exponent given by \(2/3\). At the same time, this inequality may be interpreted as the lower bound on \(m_{0}\) as well. 
Putting the explicit expressions for \(m_{0}\) and \(m_{\rm KK}^{\rm bulk}\) into the inequality gives the constraint on the strongest warp factor : \[\frac{3(s^{2}-s+1)}{ps(s-1)}\Big{(}\frac{|z_{\ell}|^{1/3}}{M_{\ell}}\Big{)}^{2}>\frac{g_{s}^{3}}{{\cal V}^{4/3}}. \tag{40}\] Just like the previous case in which the throat containing \(\overline{\rm D3}\)-branes is strongly warped, we have a more stringent upper bound on \(m_{\rm KK}^{\rm bulk}\) determined by \(m_{3/2}\) if \(|V_{\rm AdS}|\simeq V_{\rm up}\gg\Lambda\), \[m_{\rm KK}^{\rm bulk}<\frac{3}{p}m_{\rm Pl}^{1/3}m_{3/2}^{2/3}. \tag{41}\] On the other hand, when \(m_{\rm sc}=m_{s}\), the inequality (28) is written as \[3m_{\rm Pl}^{2}m_{3/2}^{2}>|V_{\rm AdS}|>\frac{4\pi p}{g_{s}}m_{s}^{4}-\frac{3}{s(s-1)}m_{\rm Pl}^{2}m_{0}^{2}, \tag{42}\] and (29) reads \[m_{s}<\Big{(}\frac{3g_{s}(s^{2}-s+1)}{(4\pi p)s(s-1)}\Big{)}^{1/4}m_{\rm Pl}^{1/2}m_{0}^{1/2}. \tag{43}\] Putting an explicit expression for \(m_{0}\) given by (27) into the inequality, we obtain the bound on the strongest warp factor : \[\frac{m_{s}}{m_{\rm Pl}}<\Big{(}\frac{3g_{s}(s^{2}-s+1)}{(4\pi p)s(s-1)}\Big{)}^{1/4}\Big{(}\frac{m_{0}}{m_{\rm Pl}}\Big{)}^{1/2}=\Big{(}\frac{3g_{s}(s^{2}-s+1)}{(4\pi p)s(s-1)}\Big{)}^{1/4}\Big{(}\frac{|z_{\ell}|^{1/6}}{M_{\ell}^{1/2}{\cal V}^{1/6}}\Big{)}. \tag{44}\] For \(|V_{\rm AdS}|\simeq V_{\rm up}\gg\Lambda\), we have a more stringent bound on \(m_{s}\), \[m_{s}<\Big{(}\frac{3g_{s}}{4\pi p}\Big{)}^{1/4}m_{\rm Pl}^{1/2}m_{3/2}^{1/2}. \tag{45}\] We can also compare our upper bound on \(m_{s}\) given by (43) with the lower bound on \(m_{s}\) considered in [51, 52]. This comes from the observation that the mass and spin of string excitations satisfy the Regge trajectory relation, \[m^{2}=(s-1)m_{s}^{2}, \tag{46}\] or \(m^{2}\simeq sm_{s}^{2}\) for large \(s\). This relation, however, violates the Higuchi bound \(m^{2}>s(s-1)H^{2}\simeq s^{2}H^{2}\) when the spin is larger than \(s_{\rm max}=(m_{s}/H)^{2}\), which implies that the cutoff scale is lower than \(\sqrt{s_{\rm max}}m_{s}=m_{s}^{2}/H\). If we identify the cutoff scale with \(m_{\rm Pl}\), we obtain the inequality \(m_{s}>H^{1/2}m_{\rm Pl}^{1/2}\). Combining this with (44), we obtain the bound \[\frac{H}{m_{\rm Pl}}<\Big{(}\frac{3g_{s}(s^{2}-s+1)}{(4\pi p)s(s-1)}\Big{)}^{1/2}\frac{m_{0}}{m_{\rm Pl}}=\Big{(}\frac{3g_{s}(s^{2}-s+1)}{(4\pi p)s(s-1)}\Big{)}^{1/2}\Big{(}\frac{|z_{\ell}|^{1/3}}{M_{\ell}{\cal V}^{1/3}}\Big{)}, \tag{47}\] or roughly, \(H<m_{0}\), which is more or less equivalent to the Higuchi bound. A similar conclusion can be drawn by combining \[3m_{\rm Pl}^{2}m_{0}^{2}>3m_{\rm Pl}^{2}m_{3/2}^{2}>|V_{\rm AdS}|=\frac{4\pi p}{g_{s}}m_{s}^{4}-3m_{\rm Pl}^{2}H^{2} \tag{48}\] with \(m_{s}>H^{1/2}m_{\rm Pl}^{1/2}\) : \[3m_{\rm Pl}^{2}m_{0}^{2}>\Big{(}\frac{4\pi p}{g_{s}}-3\Big{)}m_{\rm Pl}^{2}H^{2}. \tag{49}\] Meanwhile, if the cutoff scale is given by the species scale \(\Lambda_{\rm sp}\simeq m_{\rm Pl}^{2/3}m_{0}^{1/3}\), we obtain \(m_{s}>H^{1/2}\Lambda_{\rm sp}^{1/2}>H^{1/2}m_{\rm Pl}^{1/3}m_{0}^{1/6}\).
Combining this with (44) gives \[\frac{Hm_{0}^{1/3}}{m_{\rm Pl}^{4/3}}<\Big{(}\frac{3g_{s}(s^{2}-s+1)}{(4\pi p)s(s-1)}\Big{)}^{1/2}\Big{(}\frac{m_{0}}{m_{\rm Pl}}\Big{)}, \tag{50}\] or roughly, \(m_{0}>(H/m_{\rm Pl})^{3/2}m_{\rm Pl}\), while combining it with (48) gives the trivial bound \[3m_{\rm Pl}^{2}m_{0}^{2}>3m_{\rm Pl}^{2}m_{3/2}^{2}>\frac{4\pi p}{g_{s}}m_{s}^{4}-3m_{\rm Pl}^{2}H^{2}>\frac{4\pi p}{g_{s}}H^{2}m_{\rm Pl}^{4/3}m_{0}^{2/3}-3m_{\rm Pl}^{2}H^{2}, \tag{51}\] since the rightmost term is negative. We close this section by pointing out that the spin \(s\) in the Regge trajectory relation (46) is identified with the level of string excitations in the Minkowski background. Moreover, the massive field in Minkowski space is obtained by the Poincare contraction (taking the \(H\to 0\) limit) of the representation in the principal series, in which the squared dS mass is given by \(\big{(}s-\frac{1}{2}\big{)}^{2}H^{2}+\nu^{2}H^{2}\). Since \(\nu/H\) is the mass of the field in Minkowski space, one may be tempted to identify \(\nu/H\), rather than the dS mass, with the mass in (46), \(\sqrt{s-1}m_{s}\). In this case, the additional term \((s-\frac{1}{2})^{2}H^{2}\) in the squared dS mass is regarded as the effect of interaction with the background geometry. Then the condition \(sm_{s}^{2}>s^{2}H^{2}\) can be interpreted as follows. So far as the model for dS space based on the four-dimensional particle description is concerned, \(m_{s}\) as well as \(m_{\rm KK}\) is larger than \(H\). That is, for the string excitations, \(\nu^{2}H^{2}\simeq sm_{s}^{2}\) in the squared dS mass is typically dominant over \((s-\frac{1}{2})^{2}H^{2}\) such that if \(H\ll m_{s}\), the dS mass is approximated by the mass in the Minkowski background. This approximation breaks down when \(s>s_{\rm max}\simeq(m_{s}/H)^{2}\), i.e., the condition \(sm_{s}^{2}>s^{2}H^{2}\) is violated. In this case, the dS mass can be approximated by \((s-\frac{1}{2})H\), implying that neglecting \(H\) is no longer a good approximation even if \(H\ll m_{s}\). ## 4 Conclusions In this article, we investigate the particular case of Type IIB orientifold compactification with fluxes in which the internal manifold contains a number of throats and the warping of the throat containing \(\overline{\rm D3}\)-branes is not the strongest. Then the tower mass scale \(m_{\rm sc}\) satisfying the scaling law with respect to the uplift potential generated by \(\overline{\rm D3}\)-branes is not the lowest, but has the upper bound determined by the lowest tower mass scale \(m_{0}\), typically given by the KK mass scale associated with the throat of the strongest warping. This may also be interpreted as the lower bound on \(m_{0}\) determined by \(m_{\rm sc}\). When the exponent \(\alpha\) in the scaling law \(m_{\rm sc}\sim V_{\rm up}^{\alpha}\) is \(1/4\), the inverse of the number of noncompact spacetime dimensions over which \(\overline{\rm D3}\)-branes are extended, the upper bound on \(m_{\rm sc}\) is given by \(\sim m_{\rm Pl}^{1/2}m_{0}^{1/2}\). This shows that if \(m_{0}\) is about 10 TeV, just above the scale accessible to LHC searches, \(m_{\rm sc}\) cannot be higher than the intermediate scale, \(\sim 10^{11}\) GeV. This bound applies to \(m_{\rm sc}=m_{\rm KK}^{\rm throat}\) when the throat containing \(\overline{\rm D3}\)-branes is strongly warped and to \(m_{\rm sc}=m_{s}\) when the throat containing \(\overline{\rm D3}\)-branes is extremely weakly warped.
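As a quick numerical check of the quoted benchmarks, the two scaling bounds can be evaluated for \(m_{0}\simeq 10\) TeV; the snippet below is only an illustration, it drops all \({\cal O}(1)\) prefactors and assumes the reduced Planck mass \(m_{\rm Pl}\simeq 2.4\times 10^{18}\) GeV as the normalization.

```python
# Illustrative check of the scaling bounds quoted in the text.
# Assumptions: m_Pl is the reduced Planck mass and O(1) prefactors are dropped.
m_pl = 2.4e18   # reduced Planck mass in GeV (assumed normalization)
m_0 = 1.0e4     # lowest tower (throat KK) mass scale in GeV, i.e. ~10 TeV

# alpha = 1/4 case: m_sc <~ (m_Pl * m_0)^(1/2)
bound_quarter = (m_pl * m_0) ** 0.5
print(f"alpha = 1/4 bound: m_sc <~ {bound_quarter:.1e} GeV")  # ~1.5e11 GeV

# alpha = 1/3 case: m_sc <~ m_Pl^(1/3) * m_0^(2/3)
bound_third = m_pl ** (1 / 3) * m_0 ** (2 / 3)
print(f"alpha = 1/3 bound: m_sc <~ {bound_third:.1e} GeV")    # ~6e8 GeV
```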
On the other hand, when the throat containing \(\overline{\rm D3}\)-branes is extremely weakly warped, \(\alpha=1/3\) is allowed for \(m_{\rm sc}=m_{\rm KK}^{\rm bulk}\). In this case, the upper bound on \(m_{\rm sc}\) is given by \(m_{\rm Pl}^{1/3}m_{0}^{2/3}\), which is about \(5\times 10^{8}\) GeV when \(m_{0}\simeq 10\) TeV. We also point out that the cosmological constant in our universe can be much smaller than the uplift potential, which allows a stronger upper bound on \(m_{\rm sc}\), in which \(m_{0}\) is replaced by the lower scale \(m_{3/2}\). These bounds tell us how the structure of the internal manifold is reflected in the relations between different tower mass scales. In particular, our setup, in which the internal manifold contains a number of throats and the warping of the throat associated with the uplift is not the strongest, predicts that evidence of the extra dimensions, as well as of the string, may be found below an intermediate scale depending on the value of \(m_{0}\). ### Acknowledgements This work is motivated by Eran Palti's question about the author's parallel session talk at the SUSY 2023 conference.
2309.10177
Self-Sustaining Multiple Access with Continual Deep Reinforcement Learning for Dynamic Metaverse Applications
The Metaverse is a new paradigm that aims to create a virtual environment consisting of numerous worlds, each of which will offer a different set of services. To deal with such a dynamic and complex scenario, considering the stringent quality of service requirements aimed at the 6th generation of communication systems (6G), one potential approach is to adopt self-sustaining strategies, which can be realized by employing Adaptive Artificial Intelligence (Adaptive AI) where models are continually re-trained with new data and conditions. One aspect of self-sustainability is the management of multiple access to the frequency spectrum. Although several innovative methods have been proposed to address this challenge, mostly using Deep Reinforcement Learning (DRL), the problem of adapting agents to a non-stationary environment has not yet been precisely addressed. This paper fills in the gap in the current literature by investigating the problem of multiple access in multi-channel environments to maximize the throughput of the intelligent agent when the number of active User Equipments (UEs) may fluctuate over time. To solve the problem, a Double Deep Q-Learning (DDQL) technique empowered by Continual Learning (CL) is proposed to overcome the non-stationary situation, while the environment is unknown. Numerical simulations demonstrate that, compared to other well-known methods, the CL-DDQL algorithm achieves significantly higher throughputs with a considerably shorter convergence time in highly dynamic scenarios.
Hamidreza Mazandarani, Masoud Shokrnezhad, Tarik Taleb, Richard Li
2023-09-18T22:02:47Z
http://arxiv.org/abs/2309.10177v1
Self-Sustaining Multiple Access with Continual Deep Reinforcement Learning for Dynamic Metaverse Applications ###### Abstract The Metaverse is a new paradigm that aims to create a virtual environment consisting of numerous worlds, each of which will offer a different set of services. To deal with such a dynamic and complex scenario, considering the stringent quality of service requirements aimed at the 6th generation of communication systems (6G), one potential approach is to adopt self-sustaining strategies, which can be realized by employing Adaptive Artificial Intelligence (Adaptive AI) where models are continually re-trained with new data and conditions. One aspect of self-sustainability is the management of multiple access to the frequency spectrum. Although several innovative methods have been proposed to address this challenge, mostly using Deep Reinforcement Learning (DRL), the problem of adapting agents to a non-stationary environment has not yet been precisely addressed. This paper fills in the gap in the current literature by investigating the problem of multiple access in multi-channel environments to maximize the throughput of the intelligent agent when the number of active User Equipments (UEs) may fluctuate over time. To solve the problem, a Double Deep Q-Learning (DDQL) technique empowered by Continual Learning (CL) is proposed to overcome the non-stationary situation, while the environment is unknown. Numerical simulations demonstrate that, compared to other well-known methods, the CL-DDQL algorithm achieves significantly higher throughputs with a considerably shorter convergence time in highly dynamic scenarios. Metaverse, 6G, Self-Sustainability, Non-Stationary, Multiple Access, Media Access Control (MAC), Adaptive AI, Continual Learning (CL), Deep Reinforcement Learning (DRL), Double Deep Q-Learning (DDQL). ## I Introduction The Metaverse is regarded as an advanced stage and the long-term vision of digital transformation that promises the creation of a 3-dimensional online virtual environment similar to the physical world [1]. This paradigm is expected to succeed the Internet in revolutionizing novel ecosystems of service provisioning in all walks of life (e.g., in extended reality, teleportation, unmanned mobility, and e-commerce), bringing even more challenges to the development of future wireless networks, which are already aimed at providing microsecond-level latency, bounded jitter, multi-gigabit-level throughput, extremely high reliability, and extremely low energy consumption [2]. Given that the Metaverse environment will be comprised of a variety of worlds, each of which will provide different types of services, such quality standards need to be maintained in light of the fact that the Metaverse environment is constantly subject to change. To face such highly dynamic environments where effective decisions must be made on a microsecond basis, various new paradigms have been introduced [3, 4]. As a potential strategy, one such paradigm is to employ mechanisms aiming to deliver "self-sustainability" as one of the driving factors toward the 6th generation of wireless communication systems (6G) [5]. A self-sustaining network maintains its efficiency and effectiveness despite variable conditions. 
Unsurprisingly, a solution that fits well with the concept of self-sustaining networks is Adaptive Artificial Intelligence (Adaptive AI), where the mindset of _once-in-a-lifetime train models_ has been transformed into a new mindset in which models are _continually re-trained_ with new data and conditions. It is expected that Adaptive AI will be one of the most important enablers to facilitate the provision of emerging services, including Metaverse applications [5], and Gartner refers to it as one of the strategic technology trends in 2023 [6]. Controlling multiple access to the frequency spectrum is one of the aspects of the self-sustaining feature that exists in 6G. In this scenario, a set of ever-fluctuating User Equipments (UEs) compete with one another for access to one or multiple frequency channels. Because these UEs are mobile and can be moved constantly from one access point to another at high speeds and frequencies, the number and type of them that have data to transmit over the frequency spectrum may vary over time. In addition, the traffic pattern might shift, either within a single UE from one moment to the next or across multiple UEs in terms of the active services seeking connection. The conditions of channels can change as well, influenced by a wide variety of noise sources and other environmental circumstances. Therefore, in order to realize future self-sustaining wireless networks, adaptive multiple access algorithms are essential. In recent years, Deep Reinforcement Learning (DRL) has been leveraged for adaptive multiple access to the frequency spectrum. For instance, Yu _et al._[7] adopted DRL to design a Media Access Control (MAC) protocol without assuming the protocol of other coexisting UEs. They considered a heterogeneous environment with a slotted uplink channel. The same authors extended their work to non-uniform scenarios, in which channel sensing requires one time slot but information packet transmission requires multiple time slots [8]. Jadoon _et al._[9] utilized DRL to optimize both throughput and packet age. Their research is compatible with machine-type communications on the assumption that the UEs are not saturated. Doshi _et al._[10] formulated the coexistence of multiple base stations over a shared channel, optimizing the signal-to-interference-plus-noise ratio of UEs. Besides, Guo _et al._[11] developed a solution for multi-agent scenarios to support delay-sensitive requests. Although innovative techniques have thus far been proposed, the problem of adapting agents to a non-stationary environment has not been addressed. Since DRL cannot reuse previously learned knowledge, adapting to every change could be time-consuming, depending on the distance between context transitions. Therefore, the aforementioned approaches cannot be used in Metaverse scenarios considering their highly dynamic nature. This paper fills in the gap in the current literature by investigating the problem of multiple access in non-stationary, multi-channel, unknown environments in order to maximize the throughput of the intelligent agent by avoiding collisions with incumbent users. The non-stationarity is caused by intermittent changes in the set of active UEs. To solve the problem, a Double Deep Q-Learning (DDQL) technique empowered by Continual Learning (CL) is proposed, exploiting prior knowledge acquired throughout the agent's lifetime. 
Although a number of tools have been proposed to overcome non-stationary situations [12], CL is the approach concerned with the adaptation of DRL-based agents [13]. The remainder of this paper is organized as follows: Section II introduces the background of DRL and CL. The system model and proposed approach are presented in Section III. Finally, numerical results are illustrated and analyzed in Section IV, followed by concluding remarks in Section V. ## II Background ### _Double Deep Q-Learning (DDQL)_ In Reinforcement Learning (RL), as a subset of machine learning techniques, an agent learns through trial and error how to optimize a given decision-making problem. The designer of the system specifies the reward function according to the predefined design goals, and by learning and following the optimal strategy, the agent will maximize cumulative discounted rewards starting from any initial state. Q-Learning is probably the most recognized among the different algorithms introduced for model-free RL problems [14]. Each state-action pair is assigned a numeric value in Q-Learning, known as the Q value, and this value is gradually updated by the following equation, which is the weighted average of the old value and the new information, that is \[Q(s_{\tau},a_{\tau})\leftarrow Q(s_{\tau},a_{\tau})+\sigma[Y_{\tau}^{QL}-Q(s_{\tau},a_{\tau})], \tag{1}\] where \(s_{\tau}\) and \(a_{\tau}\) are the agent's state and action at time slot \(\tau\) respectively, \(\sigma\) is a scalar step size, and \(Y_{\tau}^{QL}\) is the target, defined by \[Y_{\tau}^{QL}=r_{\tau+1}+\gamma\ \text{max}_{a\in\mathcal{A}}Q(s_{\tau+1},a), \tag{2}\] where \(r_{\tau+1}\) is the reward at time slot \(\tau+1\), \(\gamma\in[0,1]\) is a discount factor that balances the importance of immediate and future rewards, and \(\mathcal{A}\) is the set of actions. Since the majority of worthwhile problems are too large to discover all possible combinations of states and actions and learn all state-action values, Double Deep Q-Learning (DDQL) is a ground-breaking alternative to approximate them, wherein 1) Deep Neural Networks (DNNs) are used to approximate Q values, and 2) the selection and evaluation of actions are decoupled [15]. In DDQL, the state is provided as the input, and the \(Q\) function of all possible actions, denoted by \(Q(s,.;\mathbf{\mathcal{W}})\), is generated as the output, where \(\mathbf{\mathcal{W}}\) is the set of DNN parameters. The target of DDQL is as follows: \[Y_{\tau}^{DDQL}=r_{\tau+1}+\gamma\ \widehat{Q}(s_{\tau+1},a^{\prime},\mathbf{\mathcal{W}}_{\tau}^{-}), \tag{3}\] and the update function of \(\mathbf{\mathcal{W}}\) is \[\mathbf{\mathcal{W}}_{\tau+1}=\mathbf{\mathcal{W}}_{\tau}+\sigma[Y_{\tau}^{DDQL}-Q(s_{\tau},a_{\tau};\mathbf{\mathcal{W}}_{\tau})]\nabla_{\mathbf{\mathcal{W}}_{\tau}}Q(s_{\tau},a_{\tau};\mathbf{\mathcal{W}}_{\tau}), \tag{4}\] where \(a^{\prime}=\text{argmax}_{a\in\mathcal{A}}Q(s_{\tau+1},a,\mathbf{\mathcal{W}}_{\tau})\). In this model, \(\mathbf{\mathcal{W}}\) represents the set of weights for the main (or evaluation) \(Q\) and is updated in each step, whereas \(\mathbf{\mathcal{W}}^{-}\) is for the target \(\widehat{Q}\) and is replaced with the weights of the main network every \(t\) steps. In other words, \(\widehat{Q}\) remains a periodic copy of \(Q\). The DDQL agent is represented in Fig. 1. To improve the efficiency, the observed transitions are stored in a memory bank known as the experience memory, and the neural network is updated by randomly sampling from this pool. Fig. 1: DDQL agent.
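To make the update rules (1)–(4) concrete, the following minimal PyTorch sketch computes the DDQL target of (3) and applies one gradient step on the evaluation network; the network object, optimizer, and replay batch are illustrative placeholders and not the exact architecture or hyper-parameters used later in the paper.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the DDQL target (3) and update (4); q_net, q_target_net,
# the optimizer, and the sampled batch are placeholders for this illustration.
def ddql_step(q_net, q_target_net, optimizer, batch, gamma=0.99):
    s, a, r, s_next = batch                       # tensors sampled from the experience memory
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; W)

    with torch.no_grad():
        # action selection with the online network, evaluation with the target network
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)
        q_eval = q_target_net(s_next).gather(1, a_star).squeeze(1)
        y = r + gamma * q_eval                    # DDQL target, Eq. (3)

    # squared-error loss whose gradient step matches the semi-gradient update (4)
    # up to a constant factor
    loss = F.mse_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# every t steps the target network is refreshed as a copy of the online network
def sync_target(q_net, q_target_net):
    q_target_net.load_state_dict(q_net.state_dict())
```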
### _Continual Learning (CL)_ In real-world settings, especially in the ever-changing Metaverse ecosystem, it is anticipated that the probability transition function or reward function will change over the lifetime of the agent. This non-stationarity necessitates a distinction between training and testing periods. Recent advances in DRL have demonstrated impressive efficiency in a variety of tasks, but they frequently pivot around an agent that focuses on mastering a narrow task of interest. Besides, after any significant change, RL agents frequently require additional training to adapt to the new environment, and even after this training, they lack the ability to generalize to new variations, even for simple problems. Therefore, dynamic environments necessitate novel learning mechanisms distinct from other types of learning (such as meta or multi-task learning). CL is concerned with the adaptation of the RL agent to the evolution of these environments over time [13]. In CL, the system contains \(\mathcal{M}\) contexts (or tasks) \(\mathcal{T}_{m}\) sequentially, where \(m\in\{1,2,...,\mathcal{M}\}\). While _catastrophic forgetting_ (i.e., losing performance on old tasks after learning new tasks) is a critical issue that CL seeks to address, _interference_ is another issue that has yet to be handled. Interference occurs when two tasks have incompatible (or even contradictory) optimal actions for the same observation. To effectively manage these challenges, Kessler _et al._[16] proposed an algorithm, named OWL. This algorithm 1) employs a single network with a shared feature extractor but multiple heads, parameterized by linear layers to fit individual tasks; and 2) flushes the experience replay buffer prior to beginning learning for a new task. At the time of testing, task selection is approached as a multi-armed bandit problem in order to adaptively choose the optimal policy. Additionally, the authors employed the Elastic Weight Consolidation (EWC) mechanism to prevent forgetting between tasks. This algorithm slows down learning on specific weights based on their significance to previously observed tasks. The OWL algorithm is the foundation of our method, which is described in the following section. ## III Proposed Approach ### _System Model_ We consider a single small cell covered by a Small Base Station (SBS) with \(n\in\{0,\ldots,\mathcal{N}\}\) User Equipments (UEs) competing over \(C\) time-slotted channels (see Fig. 2). Except for one (i.e., the CL-DDQL agent, or simply _the agent_), all UEs periodically transmit their packets using the Time-Division Multiple Access (TDMA) protocol. For example, a headset may transmit visual recordings to its control center every second. The environment is non-stationary due to the fluctuating number of active TDMA users. Changes in the number of active Metaverse users can be attributed to a variety of factors, including users' mobility and bandwidth-saving strategies. In the headset example, if the user is inactive, data may be transmitted every 10 seconds. A context (or task) is defined as a collection of active UEs with unique identifiers on specific channels (e.g., UE \(0\) on channel \(1\) and UE \(1\) on channel \(2\) would constitute a simple context). Consequently, context transitions occur when a UE enters or leaves a channel. It is assumed that the agent is informed of the arrivals and departures of other UEs via SBS. However, the agent is unaware of the transmission profiles, so it must learn to coexist with these UEs.
The agent's transmissions are independent of SBS to avoid unnecessary signaling overhead in scheduling grant decoding. However, it relies on the SBS's ACK signals issued at the end of each packet transmission (or channel sensing) to indicate successful transmission (or channel idleness). The transmission of control messages is assumed to occur over a separate, collision-free channel. Similar to Yu _et al._[8], we assume UEs with variable-length packets, where \(k\in\{1,...,\mathcal{K}\}\) represents the packet length, as this is more practical than fixed-size packets [7]. Unlike Yu _et al._[8], however, our approach takes multiple channels into account, making it even more applicable in high-bandwidth Metaverse environments. To coexist successfully with other UEs, the objective function of the agent is to maximize its throughput by utilizing idle time slots in the channels. Fig. 2: System model. ### _Agent Customization_ The first step in exploiting an RL agent for a particular problem is to define the agent's action, reward, and state space. We define the action space as the set \(\mathcal{A}=\{a:(k,c)|k\in\{0,...,\mathcal{K}\},c\in\{1,...,\mathcal{C}\}\}\), where \(a:(0,c)\) points to sensing channel \(c\) for one time-slot, and \(a:(k>0,c)\) denotes the transmission of a packet with length \(k\) on channel \(c\). Since the agent is designed to maximize its throughput, the reward is equal to the length of successfully transmitted packets. In the case of sensing channel \(c\), the observation set would be \(\boldsymbol{O}\) = {_Busy, Idle_}, whereas it would be \(\boldsymbol{O}\) = {_Success, Collision_} in the case of packet transmission. The state of the agent is the sequence of the most recent \(\mathcal{H}\) (observation, packet length, channel) tuples. To further enhance the Q function, we employ the dueling mechanism in the DDQL agent's evaluation network (Fig. 1). Two estimators are utilized in this mechanism: one for the state value function and one for the state-dependent action advantage function. The primary advantage is the ability to generalize learning across actions without modifying the learning algorithm, which improves policy evaluation in the presence of numerous actions with similar values. The evaluation network mechanism of the DDQL agent is detailed in Fig. 3. In this module, the state is fed to a Long Short-Term Memory (LSTM) feature extractor in order to discover patterns that are consistent across all contexts. Afterwards, two sequences (or streams) of fully interconnected layers are utilized. The streams are designed to provide separate estimates of the state value function and the state-dependent action advantage function, denoted \(\mathcal{V}\) and \(\mathcal{V}^{\prime}\), respectively. The two streams are combined to produce Q values as the final step. Additionally, to update the Q function, the target function in (3) must be transformed due to the non-uniformity of action lengths: \[Y_{\tau}^{\star}=\frac{(1-\gamma^{d_{\tau}})}{(1-\gamma)\ d_{\tau}}\ r_{\tau+1}+\gamma^{d_{\tau}}\ \widehat{Q}(s_{\tau+1},a^{\prime},\mathbf{\mathcal{W}}_{\tau}^{-}), \tag{5}\] where \(d_{\tau}\) is the length of the action. For actions of length one (sensing the channel or sending a single time-slot packet), (3) and (5) are obviously equivalent. However, future time slots are discounted for longer packets.
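A minimal sketch of the dueling evaluation network and of the variable-length target (5) is given below; the layer sizes, the flat action encoding, and the way the two streams are aggregated are assumptions made for illustration, not the configuration of Table I.

```python
import torch
import torch.nn as nn

# Illustrative dueling evaluation network: LSTM feature extractor followed by
# separate value and advantage streams (sizes are assumed, not taken from Table I).
class DuelingLSTMQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, state_seq):                 # state_seq: (batch, H, obs_dim)
        feats, _ = self.lstm(state_seq)
        h = feats[:, -1, :]                       # last hidden state summarizes the history
        v, adv = self.value(h), self.advantage(h)
        return v + adv - adv.mean(dim=1, keepdim=True)   # combined Q values

# Target of Eq. (5): the reward of a d-slot action is spread over the discounted slots.
def variable_length_target(r_next, q_eval_next, d, gamma=0.99):
    scale = (1 - gamma ** d) / ((1 - gamma) * d)
    return scale * r_next + (gamma ** d) * q_eval_next
```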
### _CL Mechanism_ To accommodate the non-stationary nature of the environment, the proposed DDQL agent should be modified to remember previously learned contexts and rerun the training procedure for new contexts. In order to accomplish this, a CL mechanism is proposed and detailed in Algorithm 1. In this algorithm, \(\mathcal{T}\) represents the lifetime of the agent, whereas \(\epsilon^{\prime}\) and \(\widetilde{\epsilon}\) are small positive constants used to control the \(\epsilon\)-greedy mechanism. At each step, if the agent is informed of a new context (\(\phi\)) by the SBS, it saves the current experience memory and weights before examining the recorded contexts (\(\mathbf{\Omega}\)). If \(\phi\) has been viewed previously, its experience memory and weights are loaded again. Otherwise, these parameters and \(\epsilon\) are reset as in step 1. Following this, the reward and observation are collected and used to update the weights via the experience memory. Note that the action in each iteration is chosen by the \(\epsilon\)-greedy policy, which follows the evaluation function of the corresponding agent with probability \((1-\epsilon)\) and chooses a random action with probability \(\epsilon\). During the training process, \(\epsilon\) decreases linearly to \(\widetilde{\epsilon}\). A Python sketch of this context-management loop is given below, after the evaluation setup.
```
Input: \(\mathcal{T}\), \(\epsilon^{\prime}\), and \(\widetilde{\epsilon}\)
 1  \(\Omega\leftarrow\emptyset\), \(\mathbf{\mathcal{W}}\leftarrow\mathbf{0}\), \(\mathbf{\mathcal{W}}^{-}\leftarrow\mathbf{0}\), \(\epsilon\leftarrow 1\), \(memory\leftarrow\{\}\)
 2  foreach \(\tau\) in \([0:\mathcal{T}]\) do
 3    if new context \(\phi\) is announced then
 4      save the current context memory and weights
 5      if \(\phi\notin\mathbf{\Omega}\) then
 6        \(\mathbf{\Omega}\leftarrow\mathbf{\Omega}\cup\{\phi\}\)
 7        reset \(\mathbf{\mathcal{W}},\mathbf{\mathcal{W}}^{-},memory\), and \(\epsilon\)
 8      else if \(\phi\in\mathbf{\Omega}\) then
 9        reload \(\mathbf{\mathcal{W}},\mathbf{\mathcal{W}}^{-}\), and \(memory\) of \(\phi\)
10    \(\zeta\leftarrow\) generate a random number from \([0:1]\)
11    if \(\zeta>\epsilon\) then
12      \((k,c)\leftarrow argmax_{a\in\mathbf{\mathcal{A}}}Q(s_{\tau},a,\mathbf{\mathcal{W}})\)
13    else
14      select a random \((k,c)\) from \(\mathbf{\mathcal{A}}\)
15    transmit the packet, and get \(O_{\tau}\) and \(r_{\tau+1}\)
16    calculate \(s_{\tau+1}\)
17    \(memory\leftarrow memory\cup\{(s_{\tau},(k,c),r_{\tau+1},s_{\tau+1})\}\)
18    choose a sample from \(memory\), and train the agent
19    if \(\epsilon>\widetilde{\epsilon}\) then
20      \(\epsilon\leftarrow\epsilon-\epsilon^{\prime}\)
```
**Algorithm 1** CL-DDQL ## IV Evaluation Within this section, a numerical analysis of the effectiveness of the proposed CL-DDQL method is conducted. The hyper-parameters and configurations are listed in Table I. In order to test the efficacy of our strategy, we carried out a series of experiments on a computer running a 64-bit operating system that was equipped with 16 NVIDIA Tesla V100 Graphics Processing Units (GPUs) and 10 gigabytes of Non-Volatile Memory express (NVMe) storage. PyTorch was utilized to effectively implement both the evaluation and target networks. In each experiment, comparisons are made between the CL-DDQL, DDQL, and Random algorithms. The only difference between DDQL and CL-DDQL is that the CL-DDQL agent has a context management mechanism, whereas the DDQL algorithm lacks remembrance, so each announced context appears to be new to it. Finally, the Random agent transmits a packet that has a random length over a random channel.
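The following sketch illustrates how the context management of Algorithm 1 could be organized around the DDQL agent evaluated here; the `agent` and `env` interfaces (e.g., `act_epsilon_greedy`, `context_changed`, `reset_networks`) are illustrative placeholders rather than the exact implementation used in the experiments.

```python
import copy

# Illustrative sketch of Algorithm 1's context management: per-context weights,
# replay memory, and exploration rate are saved on a context switch and reloaded
# when a previously seen context is announced again by the SBS.
class ContextStore:
    def __init__(self):
        self.known = {}                          # context id -> (weights, memory, epsilon)

    def save(self, ctx, agent):
        self.known[ctx] = (copy.deepcopy(agent.q_net.state_dict()),
                           list(agent.memory), agent.epsilon)

    def switch(self, new_ctx, agent):
        if new_ctx in self.known:                # previously seen: reload its state
            weights, memory, eps = self.known[new_ctx]
            agent.q_net.load_state_dict(weights)
            agent.memory = list(memory)
            agent.epsilon = eps
        else:                                    # novel context: start from scratch
            agent.reset_networks()
            agent.memory = []
            agent.epsilon = 1.0

def run_lifetime(agent, env, store, T):
    ctx = env.current_context()
    for tau in range(T):
        if env.context_changed():                # context announcement from the SBS
            store.save(ctx, agent)
            ctx = env.current_context()
            store.switch(ctx, agent)
        action = agent.act_epsilon_greedy(env.state())
        reward, obs = env.step(action)           # transmit or sense, observe the ACK
        agent.remember(obs, action, reward)
        agent.train_on_sample()                  # one DDQL update from the replay memory
        agent.decay_epsilon()
```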
This will be accomplished without any prior knowledge or any specific adjustments being made to the configuration. To compare algorithms, we use three metrics: normalized agent throughput, collision rate, and convergence time. The normalized agent throughput is computed by summing the length of the packets successfully transmitted over the last 1000 time slots (excluding headers) and dividing it by the maximum achievable throughput sum within the same window. The collision rate is the ratio of collision observations to total observations in the last 1000 time slots. Time between the occurrence of a context change and when the agent's throughput reaches a steady state is the convergence time. All metrics are averaged over 10 simulation rounds. In the first scenario, we establish fixed context transition points and fixed context specifications in order to better illustrate the efficacy of our strategy. Then, in the second scenario, we evaluate our scheme in a more realistic setting by assuming stochastic transition points and context specifications. Fig. 3: Evaluation network of DDQL (Fig. 1) ### _Scenario 1: Fixed Change Points_ In this scenario, it is assumed that context transitions occur at specific times, as outlined in Table II. (\(k,\tau,f,c\)) identifies a TDMA UE that transmits a packet of size \(k\) beginning on the \(\tau\)-th time slot of each frame of size \(f\) on channel \(c\). Clearly, the first and final quarters of the simulation take place in the same context; therefore, the CL-DDQL agent should utilize its prior knowledge of the first context when encountering it again. Fig. 4 verifies that the CL-DDQL agent possesses the required backward transfer capability for non-stationary environments. In addition, the agent utilizes its forward transfer capability when confronted with novel contexts. Despite the fact that the second and third contexts are distinct from the first, the pre-trained feature extractor enables the CL-DDQL algorithm to converge significantly more quickly than the conventional DDQL algorithm. In addition, the figures reveal that DDQL has greater variations in all metrics, which is highly undesirable in wireless networks. Evidently, Random, the method with the lowest complexity, is also inefficient. ### _Scenario 2: Stochastic Change Points_ In this scenario, context shifts occur intermittently. When a UE arrives on a channel, it remains active at a rate of \(1/\beta\) according to an exponential distribution. After its departure, a new UE will replace it. The new UE has a novel (i.e., previously unseen) profile with probability \(P\) and a repetitive profile with probability \(1-P\). In our simulations, we set \(P\) to 0.5. Moreover, the parameters of UE profiles are sampled from a set of distributions, namely \(\{\mathcal{U}_{1,4},\mathcal{U}_{4,8},\mathcal{U}_{8,12},\mathcal{U}_{1,C}\}\) respectively, where \(\mathcal{U}\) represents the Uniform distribution. Two experiments are defined by hyper-parameters \(C\) (number of channels) and \(\beta\) (mean duration of UE existence in the network). In the first experiment, the number of channels is set to 2, but \(\beta\) varies from 20 to 100 percent of simulation time (and so the duration of contexts varies). In the second experiment, \(\beta\) remains constant while the number of channels ranges from 1 to 5. As Fig. 5 demonstrates, the more frequent the context transitions (lower values for \(\beta\)), the more continual learning improves the performance. 
This is due to the increased likelihood of encountering repetitive contexts. In addition, the performance of CL-DDQL is hardly impacted by an increase in the rate at which contexts are transited, making it suitable for the highly dynamic environments of the Metaverse. Nonetheless, both algorithms perform better in environments with less variability. For the second experiment, Fig. 6 illustrates that as the number of channels increases, the CL-DDQL algorithm becomes marginally more advantageous than DDQL. Notwithstanding, the performance of the two algorithms is not significantly impacted by the number of channels, leading us to conclude that while a greater number of channels provides more idle time slots for the agent, it also increases problem dimensions and thus the number of novel contexts to be explored. ## V Conclusion In this paper, the multi-channel multiple access problem was investigated while taking into account a non-stationary scenario in which the number of active UEs might shift over the course of time. The primary objective was to achieve maximum throughput while avoiding collisions with existing users. Initially, we introduced DRL and CL as two Adaptive AI mechanisms that could aid in the realization of self-sustaining networks. Afterward, a DDQL-based agent that is empowered by CL is designed. This agent is in charge of making decisions regarding spectrum access, such as adjusting a channel and modifying the length of the packet that needs to be transmitted. The effectiveness of the suggested agent was proved by the numerical results. Compared to other well-known methods, the CL-DDQL algorithm was shown to achieve significantly higher throughputs with a considerably shorter convergence time in highly dynamic unknown environments. As a potential future work, we intend to tackle the problem by incorporating non-stationary channels with varying state probability distribution functions. In addition, we plan to enhance the CL-enabled DDQL-based method for accessing the spectrum for semantically-aware scenarios in which transmitting a subset of active UEs is sufficient to construct the parallel near-real-world experience, which could be a game-changer for bringing the Metaverse into existence by filtering out redundant data and maximizing the utilization of scarce communication resources. ## Acknowledgment This research work is partially supported by the European Union's Horizon 2020 Research and Innovation Program through the Charity project under Grant No. 101016509, the Academy of Finland 6G Flagship program under Grant No. 346208, and the Academy of Finland IDEA-MILL project under Grant No. 352428.
2309.16506
On the local linearization of the one-dimensional stochastic wave equation with a multiplicative space-time white noise forcing
In this note, we establish a bi-parameter linear localization of the one-dimensional stochastic wave equation with a multiplicative space-time white noise forcing.
Jingyu Huang, Tadahiro Oh, Mamoru Okamoto
2023-09-28T15:12:41Z
http://arxiv.org/abs/2309.16506v3
On the linear localization of the one-dimensional stochastic wave equation with a multiplicative space-time white noise forcing ###### Abstract. In this note, we establish a bi-parameter linear localization of the one-dimensional stochastic wave equation with a multiplicative space-time white noise forcing. Key words and phrases:linear localization; stochastic wave equation; multiplicative noise; null coordinates 2020 Mathematics Subject Classification: 35R60, 35L05, 60H15 ## 1. Introduction We consider the following stochastic wave equation (SNLW) on \(\mathbb{R}\times\mathbb{R}\): \[\begin{cases}\partial_{t}^{2}u-\partial_{x}^{2}u=F(u)\xi\\ (u,\partial_{t}u)|_{t=0}=(u_{0},u_{1}),\end{cases}\qquad(t,x)\in\mathbb{R} \times\mathbb{R}, \tag{1.1}\] where \(F:\mathbb{R}\to\mathbb{R}\) is a Lipschitz continuous function and \(\xi\) denotes the (Gaussian) space-time white noise on \(\mathbb{R}\times\mathbb{R}\) whose space-time covariance is formally given by \[\mathbb{E}\big{[}\xi(t_{1},x_{1})\xi(t_{2},x_{2})\big{]}=\delta(t_{1}-t_{2}) \delta(x_{1}-x_{2}). \tag{1.2}\] The expression (1.2) is merely formal but we can make it rigorous by testing it against a test function. **Definition 1.1**.: A two-parameter white noise \(\xi\) on \(\mathbb{R}^{2}\) is a family of centered Gaussian random variables \(\{\xi(\varphi):\varphi\in L^{2}(\mathbb{R}^{2})\}\) such that \[\mathbb{E}\big{[}\xi(\varphi)^{2}\big{]}=\|\varphi\|_{L^{2}(\mathbb{R}^{2})}^{ 2}\quad\text{and}\qquad\mathbb{E}\big{[}\xi(\varphi_{1})\xi(\varphi_{2})\big{]} =\langle\varphi_{1},\varphi_{2}\rangle_{L^{2}(\mathbb{R}^{2})}.\] In [16], Walsh studied the Ito solution theory for (1.1) and proved its well-posedness. See, for example, [16, p.323, Exercise 3.7] and [5, p.45], where the fundamental properties of solutions to (1.1) are stated (implicitly). For readers' convenience, we state and prove basic properties of solutions to (1.1) in Appendix A. Our main goal in this note is to study the local fluctuation property of solutions to (1.1). Let us first consider the following stochastic heat equation: \[\begin{cases}\partial_{t}u-\partial_{x}^{2}u=F(u)\xi\\ u|_{t=0}=u_{0},\end{cases}\qquad(t,x)\in\mathbb{R}\times\mathbb{R}. \tag{1.3}\] It is well known that, under suitable assumptions on \(F\) and \(u_{0}\), the solution to (1.3) _locally linearizes_; namely by letting \(Z_{\text{heat}}\) denote the linear solution satisfying \(\partial_{t}Z_{\text{heat}}-\partial_{x}^{2}Z_{\text{heat}}=\xi\) with \(Z_{\text{heat}}|_{t=0}=0\), the solution \(u\) to (1.3) satisfies \[u(t,x+\varepsilon)-u(t,x)=F(u(t,x))\big{\{}Z_{\text{heat}}(t,x+\varepsilon)-Z _{\text{heat}}(t,x)\big{\}}+R_{\varepsilon}(t,x), \tag{1.4}\]
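A sample path of (1.1) can also be explored numerically; the sketch below (not part of the note) is a naive finite-difference discretization on a bounded interval with a Lipschitz choice of \(F\), approximating the space-time white noise on each grid cell by a centered Gaussian of variance \(1/(\Delta t\,\Delta x)\). The domain, boundary condition, and zero initial data are assumptions made only for this illustration.

```python
import numpy as np

# Illustrative leapfrog scheme for u_tt - u_xx = F(u) * xi on [0, 1] with zero data;
# xi is approximated cell-wise by N(0, 1/(dt*dx)) variables. This is a numerical
# sketch only, not a scheme analyzed in the note.
rng = np.random.default_rng(0)
L, T = 1.0, 1.0
nx, nt = 200, 4000
dx, dt = L / nx, T / nt            # dt < dx so the CFL condition holds
lam2 = (dt / dx) ** 2

F = lambda u: np.sin(u)            # a Lipschitz continuous nonlinearity (assumed)

u_prev = np.zeros(nx + 1)          # u(0, x) = u_0 = 0 (assumed data)
u_curr = np.zeros(nx + 1)          # also encodes u_1 = 0

for n in range(1, nt):
    xi = rng.normal(0.0, 1.0 / np.sqrt(dt * dx), size=nx + 1)   # white-noise increments
    lap = np.zeros(nx + 1)
    lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
    u_next = 2 * u_curr - u_prev + lam2 * lap + dt ** 2 * F(u_curr) * xi
    u_next[0] = u_next[-1] = 0.0   # Dirichlet boundary, an assumption for the sketch
    u_prev, u_curr = u_curr, u_next

print("sample-path sup norm at t = T:", np.abs(u_curr).max())
```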
2309.09426
Joint Demosaicing and Denoising with Double Deep Image Priors
Demosaicing and denoising of RAW images are crucial steps in the processing pipeline of modern digital cameras. As only a third of the color information required to produce a digital image is captured by the camera sensor, the process of demosaicing is inherently ill-posed. The presence of noise further exacerbates this problem. Performing these two steps sequentially may distort the content of the captured RAW images and accumulate errors from one step to another. Recent deep neural-network-based approaches have shown the effectiveness of joint demosaicing and denoising to mitigate such challenges. However, these methods typically require a large number of training samples and do not generalize well to different types and intensities of noise. In this paper, we propose a novel joint demosaicing and denoising method, dubbed JDD-DoubleDIP, which operates directly on a single RAW image without requiring any training data. We validate the effectiveness of our method on two popular datasets -- Kodak and McMaster -- with various noises and noise intensities. The experimental results show that our method consistently outperforms other compared methods in terms of PSNR, SSIM, and qualitative visual perception.
Taihui Li, Anish Lahiri, Yutong Dai, Owen Mayer
2023-09-18T01:53:10Z
http://arxiv.org/abs/2309.09426v1
# Joint Demosaicing and Denoising With Double Deep Image Priors ###### Abstract Demosaicing and denoising of RAW images are crucial steps in the processing pipeline of modern digital cameras. As only a third of the color information required to produce a digital image is captured by the camera sensor, the process of demosaicing is inherently ill-posed. The presence of noise further exacerbates this problem. Performing these two steps sequentially may distort the content of the captured RAW images and accumulate errors from one step to another. Recent deep neural-network-based approaches have shown the effectiveness of joint demosaicing and denoising to mitigate such challenges. However, these methods typically require a large number of training samples and do not generalize well to different types and intensities of noise. In this paper, we propose a novel joint demosaicing and denoising method, dubbed JDD-DoubleDIP, which operates directly on a single RAW image without requiring any training data. We validate the effectiveness of our method on two popular datasets--Kodak and McMaster--with various noises and noise intensities. The experimental results show that our method consistently outperforms other compared methods in terms of PSNR, SSIM, and qualitative visual perception. Taihui Li\({}^{1}\)1 Anish Lahiri \({}^{2}\) Yutong Dai \({}^{2}\) Owen Mayer \({}^{2}\)\({}^{1}\) Computer Science and Engineering, University of Minnesota, Minneapolis, USA \({}^{2}\) Sony Corporation of America, R&D US Laboratory, San Jose, USA Image Signal Processing, Deep Image Prior, RAW Images, Demosaicing, Denoising Footnote 1: Work performed while at Sony Corporation of America. ## 1 Introduction A RAW image (a.k.a. mosaic image) is sensor data directly captured by digital cameras. In RAW images, only a third of the color information required to produce a high-quality full-color RGB image is available. Hence, demosaicing is necessary to interpolate the missing color components. However, this is inherently an ill-posed problem [1, 2] and the presence of noise in RAW images due to various factors (e.g., Poisson noise due to lighting conditions, Gaussian noise from electronics, etc.) further exacerbates this problem [3]. Traditionally, demosaicing and denoising are handled sequentially in the camera processing pipeline, but this may lead to content distortion in images and the accumulation of errors from one processing step to another. Furthermore, determining the optimal processing order also becomes a challenge [4, 5]. In contrast, adopting a joint demosaicing and denoising (JDD) strategy can naturally overcome the aforementioned challenges [6, 1]. Classical methods for JDD include the specialized design of filters with constraints [6] and the use of certain heuristics such as total variation minimization [7], self-similarity [8], learned nonparametric random fields [9], and sequential energy minimization [10]. Recently, with the resurgence of deep neural networks (DNNs), several studies [1, 11, 2, 12] have shown the benefits of DNN-based JDD over traditional methods. One main stream among these methods is data-driven JDD. Gharbi et al. [1] cast JDD as a supervised learning problem and use a convolutional neural network to directly learn a mapping between noisy RAW images and full-color RGB images. Ehret et al. [11] propose a mosaic-to-mosaic training strategy that learns demosaicing and denoising on RAW images only, without full-color ground-truth RGB images. Liu et al. 
[2] design an additional branch to estimate the green channel and then use the estimated green channel as a guide to recover all missing values. Though these methods have significantly improved the demosaicing and denoising performance, they require massive training data to learn the models well. It is not only expensive to acquire these data sets, but there are no ground-truth data in practice. Worse, once a DNN model has been trained on a particular noise and noise intensity, it does not generalize well to other noise and noise intensities. This motivates the need for developing single-instance JDD methods. The recent deep image prior (DIP) method [13] demonstrates that the neural network itself can serve as an implicit prior for natural images to solve image restoration problems by fitting a single noisy observation. Although DIP has garnered growing interest recently, its application in RAW image demosaicing remains relatively scarce. Park et al. [12] for the first time present a DIP-based approach (V-DIP) for RAW image demosaicing and denoising, obviating the need for training data. However, V-DIP has an additional optimization objective to update targets and formulates the problem as image inpainting without explicitly considering denoising. To address the limitations of V-DIP, we propose a dual-branch model, dubbed _JDD-DoubleDIP_, which utilizes information from a denoising DIP branch to guide the joint demosaicing and denoising objective. To do this, we first adopt a DIP dedicated to noise removal (_denoising_). Then, we employ a second DIP which jointly performs denoising and missing-value interpolation (_demosaicing_), and is guided by the output from the denoising DIP. We join these two DIPs by amalgamating their respective loss functions into a single optimization objective and update their parameters simultaneously. By doing so, we not only explicitly induce denoising guidance information into the model, but also enable these two DIPs to collaborate with each other for better performance. Fig. 1 shows the superiority of our model over other models. Figure 1: The PSNR comparisons of demosaicing and denoising on RAW images—Kodim07 (first column) and McMaster06 (second column)—with Gaussian noise \(\sigma=30\) (first row) and Poisson noise \(\lambda=25\) (second row). We provide comprehensive comparisons in Table 1 and Table 2. ## 2 Our Method ### Background: Deep Image Prior Given a single noisy observation \(\mathbf{y}\), deep image prior (DIP) [13] uses a structured deep neural network \(G_{\mathbf{\theta}}(\cdot)\) parameterized by \(\mathbf{\theta}\) to fit this noisy observation and restore its corresponding clean image \(\mathbf{x}\). Specifically, DIP solves the following optimization problem: \[\min_{\mathbf{\theta}}\ \ell(\mathbf{y},f\circ G_{\mathbf{\theta}}(\mathbf{z})) \tag{1}\] where \(\circ\) denotes function composition, \(f\) represents the forward measurement operator (e.g., \(f\) is an identity operator for image denoising), \(\mathbf{z}\) is a random input seed sampled from a uniform distribution, and \(\ell\) is a loss function (e.g., mean squared error). After obtaining the optimal solution \(\mathbf{\theta}^{*}\) of Eq. (1), the clean image \(\mathbf{x}\) can be easily restored with a forward pass \(G_{\mathbf{\theta}^{*}}(\mathbf{z})\).
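As a concrete illustration of (1) with \(f\) the identity, the fitting loop below optimizes a small convolutional generator against a single noisy observation; the tiny generator, the sigmoid output (which assumes intensities in \([0,1]\)), and the iteration count are placeholders for illustration and not the networks used in JDD-DoubleDIP.

```python
import torch
import torch.nn as nn

# Minimal DIP fitting loop for Eq. (1) with f = identity (denoising); the small
# generator architecture and hyper-parameters are illustrative assumptions only.
def fit_dip(y, n_iter=2000, lr=1e-3):
    g = nn.Sequential(                      # G_theta: a placeholder conv net
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, y.shape[1], 3, padding=1), nn.Sigmoid(),
    )
    z = torch.rand(1, 32, y.shape[2], y.shape[3])   # fixed random input seed z
    opt = torch.optim.Adam(g.parameters(), lr=lr)
    for _ in range(n_iter):
        out = g(z)
        loss = ((out - y) ** 2).mean()      # mean-squared-error data term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g(z).detach()                    # restored image G_{theta*}(z)

# usage sketch: x_hat = fit_dip(noisy_image) with noisy_image of shape (1, C, H, W)
```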
Despite the fact that DIP is learned without any training dataset, it has shown tremendous promise in a variety of tasks ranging from classical image restoration [13, 14, 15], to advanced computational imaging problems [16, 17, 18], and even beyond (e.g., time series [19]). ### Problem Formulation We represent the clean RAW image as \(\mathbf{x}^{1ch}\in\mathbb{R}^{H\times W}\), noise as \(\mathbf{n}\in\mathbb{R}^{H\times W}\), and the noisy RAW image as \(\widehat{\mathbf{x}}^{1ch}=\mathbf{x}^{1ch}+\mathbf{n}\), where \(H\) and \(W\) are the height and width of the image. We further define a mask \(\mathbf{m}\in\mathbb{R}^{H\times W\times 3}\), where each spatial location has \(1\) in the channel corresponding to the color acquired at that position in the RGGB Bayer pattern and \(0\) in the other channels. We also introduce an operation \(\mathcal{T}\), which maps the single-channel RAW data into a full-color image, where the non-Bayer components are \(0\) (representing the missing pixels) and the originally sampled RAW data are placed at the respective Bayer locations and channels. The goal of JDD is to reconstruct a high-quality full-color RGB image \(\mathbf{x}^{3ch}\in\mathbb{R}^{H\times W\times 3}\) from a single noisy observation \(\widehat{\mathbf{x}}^{3ch}=\mathcal{T}\widehat{\mathbf{x}}^{1ch}\). To do so, we need to fill in the missing pixels in the RAW image \(\widehat{\mathbf{x}}^{3ch}\) (_demosaicing_), as well as to remove the noisy components \(\mathbf{n}\) (_denoising_). In a manner similar to V-DIP [12], we also formulate the demosaicing procedure as an image-inpainting problem, since both attempt to reconstruct the absent pixels. ### Double DIPs for Joint Demosaicing and Denoising DIP can restore high-quality images for inpainting when the observation is clean [13, 14]; however, its performance deteriorates dramatically if additional noise is present [20]. Thus, we posit that conceptualizing the demosaicing procedure as an inpainting task and employing a DIP to directly restore absent values, without meticulous consideration of noise removal, may lead to suboptimal solutions. To overcome this limitation, we introduce a novel dual-branch model for demosaicing (DM) and denoising (DN), dubbed _JDD-DoubleDIP_, which consists of two DIPs (see Fig. 2): _DIP-Denoising_ \(G_{\mathbf{\theta}}\), parameterized by \(\mathbf{\theta}\), and _DIP-Demosaicing_ \(G_{\mathbf{\phi}}\), parameterized by \(\mathbf{\phi}\). Figure 2: Framework of JDD-DoubleDIP, consisting of a _DIP-Denoising_ \(G_{\mathbf{\theta}}\) and a _DIP-Demosaicing_ \(G_{\mathbf{\phi}}\). \(G_{\mathbf{\theta}}\) is used to denoise the given noisy RAW image and provide “clean” guidance to \(G_{\mathbf{\phi}}\); while \(G_{\mathbf{\phi}}\) attempts to recover the desired high-quality full-color RGB image based on 1) the given noisy RAW image \(\widehat{\mathbf{x}}^{3ch}\) and 2) the denoised output of \(G_{\mathbf{\theta}}\). We join these two branches by combining their respective losses \(\ell_{\text{JDD-DoubleDIP}}=\sqrt{\ell_{\text{DN}}}+\sqrt{\ell_{\text{DM}}}\) and train them simultaneously. By doing so, these two branches collaborate with each other, yielding better demosaicing and denoising results. Specifically, _DIP-Denoising_ \(G_{\mathbf{\theta}}\) is dedicated to removing the noisy components in the given noisy single-channel RAW image \(\widehat{\mathbf{x}}^{1ch}\). Its loss function thus can be formulated as Eq. (2).
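In the same spirit as the DIP loop above, one joint update of the two branches under the combined objective \(\ell_{\text{JDD-DoubleDIP}}=\sqrt{\ell_{\text{DN}}}+\sqrt{\ell_{\text{DM}}}\) from Fig. 2 can be sketched as follows. The squared-error forms assumed here for \(\ell_{\text{DN}}\) and \(\ell_{\text{DM}}\) (a fit of the denoiser to the noisy RAW, plus a masked data term and a guidance term for the demosaicer) are an illustrative reading of the figure, not necessarily the exact Eq. (2); the channels-first tensor layout is also an assumption.

```python
import torch

# Sketch of one joint update of G_theta (denoising) and G_phi (demosaicing); the
# loss terms are assumed (fit to noisy RAW + masked data fidelity + guidance),
# since the exact Eq. (2) is not reproduced in this text. `opt` is assumed to be
# an optimizer over the parameters of both generators.
def joint_step(g_dn, g_dm, z_dn, z_dm, x_noisy_1ch, x_noisy_3ch, mask, opt):
    denoised = g_dn(z_dn)                                 # 1-channel denoised RAW estimate
    rgb = g_dm(z_dm)                                      # full-color RGB estimate

    loss_dn = ((denoised - x_noisy_1ch) ** 2).mean()                   # assumed form of l_DN
    data_term = ((mask * rgb - x_noisy_3ch) ** 2).mean()               # masked data fidelity
    # mask * denoised broadcasts the 1-channel output onto the Bayer positions;
    # stopping gradients here (or not) is a design choice not fixed by the figure.
    guide_term = ((mask * rgb - mask * denoised) ** 2).mean()          # "clean" guidance
    loss_dm = data_term + guide_term                                    # assumed form of l_DM

    loss = torch.sqrt(loss_dn) + torch.sqrt(loss_dm)      # combined objective from Fig. 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```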
(2): Footnote 1: [https://github.com/](https://github.com/) in the distribution of the pretrained model (e.g., \(\sigma\in\{10,20\}\)), Deepjoint performs slightly better than our method and Deepjoint\({}^{*}\) further improves the performance of Deepjoint; while when the test noise intensity is out of the distribution of the pretrained model (e.g., \(\sigma\in\{30,50,70\}\)), the superiority of our approach becomes apparent, even compared with the Deepjoint\({}^{*}\) setting, thereby validating the distribution shift issue of data-driven methods; 3) applying a running average smoothing on the outputs of our method, which is indicated as \(\text{Ours}^{+}\), further improves the performance. ### RAW Images with Poisson Noise Now, we test our method on RAW images with Poisson noise, which is another dominant noise in camera sensors, especially in scenarios with low-light conditions. We simulate the pixel-wise independent Poisson noise as follows: for each pixel \(x\in[0,1]\), the noisy pixel is Poisson distributed with rate \(\lambda x\) and we test different intensities of noise by varying \(\lambda\in\{65,45,25,15,5\}\). We report the experimental results in Table 2, which is consistent with our observations in Section 3.2, reassuring the effectiveness of our method. We also depict a visual comparison in Fig. 3. It is evident that the reconstructions by other methods introduce some artifacts (e.g., residual noise and distortion artifacts), while our reconstruction preserves more texture details and is more perceptually pleasing, reinforcing the benefits of our method. ### JDD-DoubleDIP vs. Over-Parameterization To verify that the benefits of our method are not a consequence of over-parameterization owing to the use of two DIPs in conjunction, we design a counterpart for our method named _DM-DM_ in which we replace _DIP-Denoising_ with another _DIP-Demosaicing_, resulting in a network parameterized similarly to ours. For simplicity, here we only experiment with McMaster on Gaussian noise (\(\sigma=30\)) and Poisson noise (\(\lambda=25\)). The experimental results are reported in Table 3. It is evident that our method yields higher PSNR and SSIM on both noises, and our method is much more stable ( our method has \(>10\times\) lower standard deviation compared with _DM-DM_), suggesting that the benefits of our method are likely not due to over-parameterization. ## 4 Conclusion In this paper, we propose a novel joint demosaicing and denoising method, dubbed _JDD-DoubleDIP_, which consists of a denoising DIP to explicitly account for noise removal and a demosaicing DIP for high-quality full-color RGB image generation. By training these two DIPs jointly, our method yields better PSNR, SSIM, and perceived visual quality on Kodak and McMaster datasets under different noise types and intensities compared to other methods. In the future, we would like to investigate the explicit incorporation of prior information from the green channel into our method to boost performance, as it has twice as much information as the red or blue channels in a RAW image. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \multicolumn{13}{c}{} & \multicolumn{6}{c}{PSNR \(\uparrow\)} & \multicolumn{6}{c}{SSIM \(\uparrow\)} \\ \hline \hline Dataset & Method & \(\lambda\)=65 & \(\lambda\)=45 & \(\lambda\)=25 & \(\lambda\)=15 & \(\lambda\)=5 & \(\lambda\)=65 & \(\lambda\)=45 & \(\lambda\)=25 & \(\lambda\)=15 & \(\lambda\)=5 \\ \hline \multirow{6}{*}{\begin{tabular}{} \end{tabular} } & DIP (u.) 
& 26.184 (0.015) & 25.429 (0.004) & 24.181 (0.02) & 23.175 (0.025) & 21.069 (0.031) & 0.675 (0.001) & 0.637 (0.0015) & 0.5766 (0.0011) & 0.5301 (0.0015) & 0.4506 (0.0022) \\ & DIP (u.) & 27.349 (0.026) & 26.628 (0.028) & 25.537 (0.016) & 24.584 (0.021) & 22.297 (0.016) & 0.745 (0.0005) & 0.716 (0.0010) & 0.668 (0.0005) & 0.625 (0.0014) & 0.519 (0.0016) \\ & V-DIP & 27.131 (0.049) & 26.382 (0.036) & 25.253 (0.038) & 24.174 (0.009) & 21.910 (0.029) & 0.756 (0.0014) & 0.657 (0.0022) & 0.667 (0.0022) & 0.645 (0.0021) & 0.547 (0.0017) \\ & Deepjoint & 28.883 (NA) & 26.850 (NA) & 25.254 (NA) & 19.366 (NA) & 14.728 (NA) & 0.770 (NA) & 0.682 (NA) & 0.487 (NA) & 0.347 (NA) & 0.168 (NA) \\ & Ours & 28.116 (0.029) & 27.436 (0.036) & 26.257 (0.041) & 25.428 (0.029) & 22.750 (0.013) & 0.780 (0.0015) & 0.754 (0.0005) & 0.711 (0.0009) & 0.676 (0.0009) & 0.593 (0.0018) \\ & Ours\({}^{+}\) & 28.515 (0.049) & 27.820 (0.035) & 26.593 (0.038) & 25.536 (0.028) & 22.844 (0.020) & 0.794 (0.0009) & 0.769 (0.0005) & 0.726 (0.0007) & 0.690 (0.0013) & 0.602 (0.0012) \\ \hline \multirow{6}{*}{ \begin{tabular}{} \end{tabular} } & DIP (u.) & 26.610 (0.015) & 25.774 (0.004) & 24.431 (0.004) & 23.412 (0.002) & 21.422 (0.0023) & 0.722 (0.0012) & 0.686 (0.0013) & 0.627 (0.0028) & 0.583 (0.0021) & 0.501 (0.0046) \\ & DIP (u.) & 28.074 (0.015) & 27.311 (0.042) & 26.064 (0.018) & 24.940 (0.008) & 22.329 (0.011) & 0.791 (0.0007) & 0.766 (0.0009) & 0.720 (0.0012) & 0.676 (0.0009) & 0.570 (0.0009) \\ \cline{1-1} & V-DIP & 27.908 (0.040) & 26.990 (0.028) & 25.664 (0.05) & 24.535 (0.009) & 22.004 (0.015) & 0.801 (0.0012) & 0.776 (0.0002) & 0.733 (0.017) & 0.692 (0.0009) & 0.594 (0.0019) \\ \cline{1-1} & Deepjoint & 29.149 (NA) & 27.210 (NA) & 23.472 (NA) & 20.590 (NA) & 15.748 (NA) & 0.788 (NA) & 0.715 (NA) & 0.595 (NA) & 0.497 (NA) & 0.299 (NA) \\ \cline{1-1} & Ours & 28.805 (0.017) & 28.056 (0.022) & 26.836 (0.022) & 25.698 (0.018) & 25.698 (0.018) & 23.029 (0.011) & 0.813 (0.0014) & 0.792 (0.0106) & 0.753 (0.012) & 0.717 (0.0119) & 0.632 (0.0307) \\ \cline{1-1} & Ours\({}^{+}\) & 29.286 (0.015) & 28.510 (0.019) & 27.222 (0.017) & 26.019 (0.004) & 23.123 (0.016) & 0.825 (0.0014) & 0.804 (0.0100) & 0.766 (0.0013) & 0.730 (0.021) & 0.641 (0.0300) \\ \hline \end{tabular} \end{table} Table 2: The quantitative results of demosaicing and denoising of RAW images with Poisson noise. **Deepjoint**: we use the maximum noise intensity of the pretrained model as input. \(\text{Ours}^{+}\): we smooth the output of our model by using running average: \(\mathbf{x}_{\text{smooth}}^{3ch}=0.99*\mathbf{x}_{\text{smooth}}^{3ch}+0.01*G_{ \phi}(\boldsymbol{z}_{\text{DM}})\). Figure 3: The visual comparisons of demosaicing and denoising on the RAW image (McMaster18) with Poisson noise (\(\lambda=45\)).
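As a side note on the post-processing used for Ours\({}^{+}\) (see the caption of Table 2): it is a plain exponential moving average of the demosaicing branch's outputs over iterations. A minimal sketch with random stand-in arrays (in the actual method the iterates would be \(G_{\mathbf{\phi}}(\mathbf{z}_{\text{DM}})\)):

```python
import numpy as np

# Running-average smoothing for "Ours+" (Table 2):
# x_smooth = 0.99 * x_smooth + 0.01 * G_phi(z_DM), applied across optimization iterations.
# The per-iteration outputs below are random stand-ins for G_phi(z_DM).
rng = np.random.default_rng(0)
outputs = [rng.random((64, 64, 3)) for _ in range(200)]   # hypothetical iterates

x_smooth = outputs[0].copy()
for out in outputs[1:]:
    x_smooth = 0.99 * x_smooth + 0.01 * out
```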
2309.17397
Analytic and Gevrey class regularity for parametric semilinear reaction-diffusion problems and applications in uncertainty quantification
We investigate a class of parametric elliptic semilinear partial differential equations of second order with homogeneous essential boundary conditions, where the coefficients and the right-hand side (and hence the solution) may depend on a parameter. This model can be seen as a reaction-diffusion problem with a polynomial nonlinearity in the reaction term. The efficiency of various numerical approximations across the entire parameter space is closely related to the regularity of the solution with respect to the parameter. We show that if the coefficients and the right-hand side are analytic or Gevrey class regular with respect to the parameter, the same type of parametric regularity is valid for the solution. The key ingredient of the proof is the combination of the alternative-to-factorial technique from our previous work [1] with a novel argument for the treatment of the power-type nonlinearity in the reaction term. As an application of this abstract result, we obtain rigorous convergence estimates for numerical integration of semilinear reaction-diffusion problems with random coefficients using Gaussian and Quasi-Monte Carlo quadrature. Our theoretical findings are confirmed in numerical experiments.
Alexey Chernov, Tung Le
2023-09-29T16:59:39Z
http://arxiv.org/abs/2309.17397v1
# Analytic and Gevrey class regularity ###### Abstract We investigate a class of parametric elliptic semilinear partial differential equations of second order with homogeneous essential boundary conditions, where the coefficients and the right-hand side (and hence the solution) may depend on a parameter. This model can be seen as a reaction-diffusion problem with a polynomial nonlinearity in the reaction term. The efficiency of various numerical approximations across the entire parameter space is closely related to the regularity of the solution with respect to the parameter. We show that if the coefficients and the right-hand side are analytic or Gevrey class regular with respect to the parameter, the same type of parametric regularity is valid for the solution. The key ingredient of the proof is the combination of the alternative-to-factorial technique from our previous work [1] with a novel argument for the treatment of the power-type nonlinearity in the reaction term. As an application of this abstract result, we obtain rigorous convergence estimates for numerical integration of semilinear reaction-diffusion problems with random coefficients using Gaussian and Quasi-Monte Carlo quadrature. Our theoretical findings are confirmed in numerical experiments. keywords: semilinear problems, reaction-diffusion, parametric regularity analysis, numerical integration, Quasi-Monte Carlo methods Msc: [2023] 65N25, 65C30, 65D30, 65D32, 65N30 + Footnote †: journal: CAMWA ## 1 Introduction Elliptic semilinear problems arise in numerous applications in natural sciences and engineering. Prominent examples are reaction-diffusion-type problems with nonlinear reaction (reproduction) terms for modelling of various processes such as phase separation, combustion, soil-moisture-physics, biological population genetics, etc. For analysis of parametric semilinear problems we refer to works by Hansen and Schwab [2], where the particular case of a stochastic parameter perturbation has been addressed, see also the recent work [3] for semilinear eigenvalue problems under uncertainty. The regularity of the solution of the problem with respect to the parameter is important for construction of efficient numerical approximations in the parameter domain. For example, if the quantity of interest is solution's average value over a prescribed parameter domain, Monte Carlo and Quasi-Monte Carlo methods can be applied for the numerical integration. However, the use of Quasi-Monte Carlo integration does only pay off if the solution features certain higher regularity properties. In this paper we first theoretically address these regularity considerations and then demonstrate their implications in a series of numerical experiments. Let us consider a prototypic real second-order elliptic semilinear partial differential equation of the general form \[\begin{split}-C_{m}^{2}\nabla\cdot(\mathbf{a}(\mathbf{x},\mathbf{y})\nabla u (\mathbf{x},\mathbf{y}))+b(\mathbf{x},\mathbf{y})\begin{bmatrix}u(\mathbf{x},\mathbf{y})\end{bmatrix}^ {m}&=C_{m}\,f(\mathbf{x},\mathbf{y})&(\mathbf{x},\mathbf{y})\in D\times U,\\ u(\mathbf{x},\mathbf{y})&=0&(\mathbf{x},\mathbf{y})\in\partial D\times U,\end{split} \tag{1}\] where the derivative operator \(\nabla\) acts in the physical variable \(\mathbf{x}\in D\), where \(D\) is a bounded Lipschitz domain in \(\mathbb{R}^{d}\). The above semilinear problem could be reduced to linear by choosing \(b\equiv 0\) or \(m=1\). 
The vector of parameters \(\mathbf{y}=(y_{1},y_{2},\dots)\in U\) has either finitely many or countably many components. For example, if \(\mathbf{y}\) is a random parameter, the model with \(U:=[-\frac{1}{2},\frac{1}{2}]^{\mathbb{N}}\) and \(\mathbf{y}\in U\) being a countably-dimensional vector of independently and identically distributed uniform random variables has been frequently used in the literature [4; 5; 6; 7]. The following restrictions on the power \(m\in\mathbb{N}\) and on the dimension \(d\) of the domain \(D\) are assumed throughout this paper; we denote the corresponding set of admissible parameter pairs \((d,m)\) by \(\mathcal{M}\), cf. [2]:

\[d=1\text{ or }d=2: m\in\mathbb{N} \tag{2}\] \[d=3: H_{0}^{1}(D)\hookrightarrow L^{6}(D) \text{ hence }1\leq m\leq 5\] \[d=4: H_{0}^{1}(D)\hookrightarrow L^{4}(D) \text{ hence }1\leq m\leq 3\] \[d=5: H_{0}^{1}(D)\hookrightarrow L^{10/3}(D) \text{ hence }1\leq m\leq 2\] \[d=6: H_{0}^{1}(D)\hookrightarrow L^{3}(D) \text{ hence }1\leq m\leq 2\] \[d\geq 7: m=1,\]

and \(C_{m}\) is the constant of the Sobolev embedding \(H_{0}^{1}(D)\hookrightarrow L^{m+1}(D)\), see e.g. [8]. Without loss of generality, in the following we assume that the coefficients \(a(\cdot,\mathbf{y}),b(\cdot,\mathbf{y})\) and \(\left\|f(\cdot,\mathbf{y})\right\|_{H^{-1}(D)}\) admit the uniform bounds

\[1\leq a(\mathbf{x},\mathbf{y})\leq\frac{\overline{a}}{2},\qquad\left|b(\mathbf{x},\mathbf{y})\right|\leq\frac{\overline{b}}{2},\qquad\left\|f(\cdot,\mathbf{y})\right\|_{H^{-1}(D)}\leq\frac{\overline{f}}{2} \tag{3}\]

for all \(\mathbf{y}\in U\) and almost all \(\mathbf{x}\in D\). Thus, for every fixed \(\mathbf{y}\in U\) and under reasonable assumptions, the problem (1) is well-posed, see e.g. [2]. Since the coefficients depend on the parameters \(\mathbf{y}\), the solution \(u\) will depend on \(\mathbf{y}\) as well. In particular, if \(\mathbf{y}\) is random, then \(u(\mathbf{x},\mathbf{y})\) will be random too. In this paper we present a rigorous regularity analysis for the solution \(u\) with respect to the parameter \(\mathbf{y}\) in the general case where the given data \(a,b\) and \(f\) are infinitely differentiable functions of \(\mathbf{y}\) belonging to the Gevrey class \(G^{\delta}\) for some fixed \(\delta\geq 1\). The scale of Gevrey classes is a nested scale in the parameter \(\delta\) that fills the gap between analytic and \(C^{\infty}\) functions

\[G^{\delta}\subset G^{\delta^{\prime}}\subset C^{\infty},\qquad 1\leq\delta<\delta^{\prime}. \tag{4}\]

The case of analytic functions (i.e. Gevrey-\(\delta\)-class functions with \(\delta=1\)) is the simplest and arguably most important of this scale and has been addressed for parametric/stochastic semilinear problems before, see [2]. The above-mentioned work considers coefficients with affine parameter dependence and proves the analyticity of the solution using elegant complex analysis arguments; see also [3] for the related semilinear eigenvalue problems. Besides the complex argument, the real-variable argument has also been used as a powerful tool to achieve analytic regularity. For the analysis of the eigenvalue problems we refer to [5; 9]. However, the direct application of the real-variable argument typically leads to suboptimal estimates. To overcome this, in [1] we suggest a modified argument, namely the _alternative-to-factorial technique_, and obtain optimal regularity estimates for the eigenvalue problem. The aim of this paper is to further develop this approach and enhance it in the example of parametric semilinear problems.
The structure of the paper is as follows. In Section 2.1 we introduce the falling factorial notation, which is the main tool for our _alternative-to-factorial technique_ first introduced in [1]. In Section 2.3 we introduce Gevrey-class functions and formulate the regularity assumptions on the coefficients of the semilinear problem (1). In Section 3 we summarize the properties of elliptic semilinear problems needed for the forthcoming regularity analysis. In Section 4 we present the proof of the main result. The meaning and validity of the main regularity result are illustrated by the applications and numerical experiments in Section 5.

## 2 Preliminaries

### The falling factorial estimates

The deficiency of the real-variable inductive argument for nonlinear problems is a consequence of the Leibniz product rule and the triangle inequality. It can be seen already in the one-dimensional case and for the eigenvalue problem, see [1, Section 2.1] and [10, Chapter 1]. To overcome these difficulties, we use the _alternative-to-factorial technique_ introduced in [1, Section 2.2]. To this end, we collect some elementary results on the falling factorial. For a given \(q\in\mathbb{R}\) and a non-negative integer \(n\in\mathbb{N}_{0}\) the _falling factorial_ is defined as

\[(q)_{n}:=\left\{\begin{array}{cc}1,&n=0,\\ q(q-1)\ldots(q-n+1),&n\geq 1.\end{array}\right. \tag{5}\]

For \(q<1\) the falling factorial \((q)_{n}\) is a sign-alternating sequence in \(n\). To further simplify the notation and avoid keeping track of the sign alternation, we denote the _absolute value of the falling factorial of \(\frac{1}{2}\)_ by

\[\left[\tfrac{1}{2}\right]_{n}:=\left|\left(\tfrac{1}{2}\right)_{n}\right|.\]

This notation appears somewhat non-standard, but quite convenient, as we will see in the forthcoming analysis. The two-sided estimate

\[\left[\tfrac{1}{2}\right]_{n}\leq n!\leq 2\cdot 2^{n}\left[\tfrac{1}{2}\right]_{n}, \tag{6}\]

is rather crude but sufficient for our analysis, see [1, Section 2.1] for a refined version. The following combinatorial identities are remarkable properties of the falling factorial. The first and the second identities in (7) are stated here for a shifted summation range, cf. [1, Lemma 2.3], and thus require a new proof, given below.

**Lemma 2.1**.: _For all integers \(n\geq 1\) and \(k\geq 2\) the following identities hold_

\[\sum_{i=1}^{n}\binom{n}{i}\left[\tfrac{1}{2}\right]_{i}\left[\tfrac{1}{2}\right]_{n+1-i}=\left[\tfrac{1}{2}\right]_{n+1},\quad\sum_{i=0}^{n}\binom{n}{i}\left[\tfrac{1}{2}\right]_{i}\left[\tfrac{1}{2}\right]_{n+1-i}=2\left[\tfrac{1}{2}\right]_{n+1},\quad\sum_{i=0}^{k}\binom{k}{i}\left[\tfrac{1}{2}\right]_{i}\left[\tfrac{1}{2}\right]_{k-i}=4\left[\tfrac{1}{2}\right]_{k}. \tag{7}\]

Proof.: We choose the function \(f(x)=\frac{1}{2}(1-\sqrt{1-x})\) and

\[g(x)=f(x)f^{\prime}(x)=\left(\frac{1}{2}(1-\sqrt{1-x})\right)\cdot\left(\frac{1}{4}\frac{1}{\sqrt{1-x}}\right)=\frac{1}{8}\left(\frac{1}{\sqrt{1-x}}-1\right)=\frac{1}{2}f^{\prime}(x)-\frac{1}{8}. \tag{8}\]

From [1, Section 2.2], we know that \(f^{(n)}(0)=\frac{1}{2}\left[\tfrac{1}{2}\right]_{n}\) for all \(n\in\mathbb{N}\).
Thus, on the one hand, for all \(n\geq 1\) we have \[g^{(n)}(0)=\frac{1}{2}f^{(n+1)}(0)=\frac{1}{4}\left[\tfrac{1}{2} \right]_{n+1}.\] On the other hand, by Leibniz product rule and since \(f(0)=0\), we have \[g^{(n)}(0)=\sum_{i=1}^{n}\binom{n}{i}\,f^{(i)}(0)f^{(n+1-i)}(0)=\frac{1}{4} \sum_{i=1}^{n}\binom{n}{i}\,\left[\tfrac{1}{2}\right]_{i}\left[\tfrac{1}{2} \right]_{n+1-i}.\] This shows the first identity in (7). Increasing both sides of this identity by \(\left[\tfrac{1}{2}\right]_{n+1}\), we observe that the second identity in (7) is also valid. The third identity follows for \(f(x)=\frac{1}{2}(1-\sqrt{1-x})\) and \(g=f^{2}\), see e.g. [1, Lemma 2.3]. **Corollary 2.2**.: With the convention that the empty sum equals zero, the Lemma 2.1 extends to all non-negative integers \(n\in\mathbb{N}_{0}\) as \[\sum_{i=1}^{n}\binom{n}{i}\left[\tfrac{1}{2}\right]_{i}\left[ \tfrac{1}{2}\right]_{n+1-i}\leq\left[\tfrac{1}{2}\right]_{n+1}, \tag{9}\] \[\sum_{i=0}^{n}\binom{n}{i}\left[\tfrac{1}{2}\right]_{i}\left[ \tfrac{1}{2}\right]_{n+1-i}\leq 2\left[\tfrac{1}{2}\right]_{n+1},\] (10) \[\sum_{i=0}^{n}\binom{n}{i}\left[\tfrac{1}{2}\right]_{i}\left[ \tfrac{1}{2}\right]_{n-i}\leq 4\left[\tfrac{1}{2}\right]_{n}. \tag{11}\] ### Multiindex notation The following standard multiindex notations will be used in what follows, see e.g. [11; 4]. We denote the countable set of finitely supported sequences of nonnegative integers by \[\mathcal{F}:=\big{\{}\mathbf{\nu}=(\nu_{1},\nu_{2},\dots)\ :\ \nu_{j}\in\mathbb{N}_{0}, \ \text{and}\ \nu_{j}\neq 0\ \text{for only a finite number of}\ j\big{\}}\subset\mathbb{N}^{\mathbb{N}}, \tag{12}\] where the summation \(\mathbf{\alpha}+\mathbf{\beta}\) and the partial order relations \(\mathbf{\alpha}<\mathbf{\beta}\) and \(\mathbf{\alpha}\leq\mathbf{\beta}\) of elements in \(\mathbf{\alpha},\mathbf{\beta}\in\mathcal{F}\) are understood componentwise. We write \[|\mathbf{\nu}|:=\sum_{j\geq 1}\nu_{j}, \mathbf{\nu}!:=\prod_{j\geq 1}\nu_{j}!, \mathbf{R}^{\mathbf{\nu}}=\prod_{j\geq 1}R_{j}^{\nu_{j}}\] for the absolute value, the multifactorial and the power with the multi-index \(\mathbf{\nu}\) and a sequence \(\mathbf{R}=\{R_{j}\}_{j\geq 1}\) of positive real numbers. Notice that \(|\mathbf{\nu}|\) is finite if and only if \(\mathbf{\nu}\in\mathcal{F}\). For \(\mathbf{\nu}\in\mathcal{F}\) supported in \(\{1,2,\dots,n\}\), we define the partial derivative with respect to the variables \(\mathbf{y}\) \[\partial^{\mathbf{\nu}}u=\frac{\partial^{|\mathbf{\nu}|}u}{\partial y_{1}^{\nu_{1}} \partial y_{2}^{\nu_{2}}\dots\partial y_{n}^{\nu_{n}}}.\] For two multiindices \(\mathbf{\nu},\mathbf{\eta}\in\mathcal{F}\) we define the binomial coefficient by \[\begin{pmatrix}\mathbf{\nu}\\ \mathbf{\eta}\end{pmatrix}=\prod_{j\geq 1}\begin{pmatrix}\nu_{j}\\ \eta_{j}\end{pmatrix}.\] The above multiindex notations are handy for treatment of multiparametric objects. The following technical Lemma is instrumental for the forthcoming analysis. 
**Lemma 2.3**.: _For two multiindices \(\mathbf{\nu},\mathbf{\eta}\in\mathcal{F}\) satisfying \(\mathbf{\eta}\leq\mathbf{\nu}\), a unit multi-index \(\mathbf{e}\) and \(\delta\geq 1\) we have_

\[(|\mathbf{\nu}-\mathbf{\eta}|!)^{\delta-1}(|\mathbf{\eta}|!)^{\delta-1}\leq(|\mathbf{\nu}|!)^{\delta-1}, \tag{13}\]
\[\sum_{\mathbf{0}\leq\mathbf{\eta}\leq\mathbf{\nu}}\binom{\mathbf{\nu}}{\mathbf{\eta}}\big[\tfrac{1}{2}\big]_{|\mathbf{\eta}|}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}-\mathbf{\eta}|}\leq 4\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}|}, \tag{14}\]
\[\sum_{\mathbf{0}<\mathbf{\eta}\leq\mathbf{\nu}}\binom{\mathbf{\nu}}{\mathbf{\eta}}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|}\big[\tfrac{1}{2}\big]_{|\mathbf{\eta}|}\leq\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}+\mathbf{e}|}, \tag{15}\]
\[\sum_{\mathbf{0}\leq\mathbf{\eta}\leq\mathbf{\nu}}\binom{\mathbf{\nu}}{\mathbf{\eta}}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|}\big[\tfrac{1}{2}\big]_{|\mathbf{\eta}|}\leq 2\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}+\mathbf{e}|}, \tag{16}\]
\[\sum_{\mathbf{0}<\mathbf{\eta}\leq\mathbf{\nu}}\sum_{\mathbf{0}\leq\mathbf{\ell}\leq\mathbf{\eta}}\binom{\mathbf{\nu}}{\mathbf{\eta}}\binom{\mathbf{\eta}}{\mathbf{\ell}}\big[\tfrac{1}{2}\big]_{|\mathbf{\eta}-\mathbf{\ell}|}\big[\tfrac{1}{2}\big]_{|\mathbf{\ell}|}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|}\leq 4\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}+\mathbf{e}|}. \tag{17}\]

Proof.: Notice that for two non-negative integers \(n\) and \(m\) we have \(n!\cdot m!\leq(n+m)!\) and therefore

\[|\mathbf{\nu}-\mathbf{\eta}|!\,|\mathbf{\eta}|!\leq(|\mathbf{\nu}-\mathbf{\eta}|+|\mathbf{\eta}|)!=|\mathbf{\nu}|!.\]

Since \((\cdot)^{\delta-1}\) is an increasing function for \(\delta\geq 1\), the estimate (13) follows. According to [1, Lemma 7.1], for every \(0\leq r\leq|\mathbf{\nu}|\) we have

\[\sum_{\begin{subarray}{c}|\mathbf{\eta}|=r\\ \mathbf{\eta}\leq\mathbf{\nu}\end{subarray}}\binom{\mathbf{\nu}}{\mathbf{\eta}}=\binom{|\mathbf{\nu}|}{r}, \tag{18}\]

which is sometimes called the generalized Vandermonde or Chu-Vandermonde identity. This together with (11) implies the estimate

\[\sum_{\mathbf{0}\leq\mathbf{\eta}\leq\mathbf{\nu}}\binom{\mathbf{\nu}}{\mathbf{\eta}}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}-\mathbf{\eta}|}\big[\tfrac{1}{2}\big]_{|\mathbf{\eta}|}=\sum_{r=0}^{|\mathbf{\nu}|}\sum_{\begin{subarray}{c}|\mathbf{\eta}|=r\\ \mathbf{\eta}\leq\mathbf{\nu}\end{subarray}}\binom{\mathbf{\nu}}{\mathbf{\eta}}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}|-r}\big[\tfrac{1}{2}\big]_{r}=\sum_{r=0}^{|\mathbf{\nu}|}\binom{|\mathbf{\nu}|}{r}\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}|-r}\big[\tfrac{1}{2}\big]_{r}\leq 4\big[\tfrac{1}{2}\big]_{|\mathbf{\nu}|}.\]

This shows inequality (14). Similarly, we derive the bounds (15) and (16) by applying (18) to (9) and (10), respectively. The final estimate (17) follows by consecutive application of (14) and (15).

### Gevrey-class and analytic functions

The following definition of Gevrey-\(\delta\) functions with countably many parameters will be used in our regularity analysis in Section 4.
**Definition 2.4**.: Let \(\delta\geq 1\), \(B\) be a Banach space, \(I\subset\mathbb{R}^{\mathbb{N}}\) be an open domain and a function \(f:I\to B\) be such that its \(\mathbf{y}\)-derivatives \(\partial^{\mathbf{v}}f:I\to B\) are continuous for all \(\mathbf{v}\in\mathcal{F}\). We say that the function \(f\) is of class Gevrey-\(\delta\) if for each \(y_{0}\in I\) there exist an open set \(J\subseteq I\), and strictly positive constants \(\mathbf{R}=(R_{1},R_{2},\dots)\subset\mathbb{R}^{\mathbb{N}}_{>0}\) and \(C\in\mathbb{R}_{>0}\) that the derivatives of \(f\) satisfy the bounds \[\|\partial^{\mathbf{v}}f(\mathbf{y})\|_{B}\leq\frac{C}{\mathbf{R}^{\mathbf{v}}}(|\mathbf{v}|!)^{ \delta},\qquad\forall\mathbf{y}\in J,\quad\forall\mathbf{v}\in\mathcal{F}. \tag{19}\] In this case we write \(f\in G^{\delta}(U,B)\). Definition 2.4 is also suitable for the case of finitely many parameters \(\mathbf{y}\). In particular, when \(\mathbf{y}=(y_{1},\dots,y_{M})\), \(B=\mathbb{R}\) or \(\mathbb{C}\) and \(\delta=1\), the bound (19) guarantees convergence of the power series of \(f\) and therefore characterizes the class of analytic functions of \(M\) variables, see e.g. (10, Section 2.2) and (1, Remark 2.6). This property follows from the bound \(|\mathbf{v}|!\leq M^{|\mathbf{v}|}\mathbf{v}|!\) that is valid for a multiindex \(\mathbf{v}\) with \(M\) nonzero components. Notice that otherwise estimate (19) does not guarantee convergence of the power series of \(f\). Moreover, the scale \(G^{\delta}\) grows monotonously with \(\delta\) in the sense of (4). We now make an assumption on the coefficients, which, in particular, ensure that the solution of the semilinear problem (1) is Gevrey-class regular. **Assumption 2.5**.: _For all fixed values \(\mathbf{y}\in U\in\mathbb{R}^{m}\) with \(m<\infty\), the coefficients \(a(\mathbf{y}),b(\mathbf{y})\in L^{\infty}(D)\) and \(f(\mathbf{y})\in V^{*}\). The functions \(a(\mathbf{y}),b(\mathbf{y})\) are of Gevrey class \(G^{\delta}(U,L^{\infty}(D))\) and \(f(\mathbf{y})\) is of Gevrey class \(G^{\delta}(U,V^{*})\), i.e. for all \(\mathbf{v}\in\mathbb{N}^{*}\) there exist \(\mathbf{R}\) independent of \(s\) such that_ \[\big{\|}\partial^{\mathbf{v}}a(\mathbf{y})\big{\|}_{L^{\infty}(D)}\leq\frac{\overline {a}}{2}\frac{(|\mathbf{v}|!)^{\delta}}{(2\mathbf{R})^{\mathbf{v}}},\qquad\big{\|}\partial ^{\mathbf{v}}b(\mathbf{y})\big{\|}_{L^{\infty}(D)}\leq\frac{\overline{b}}{2}\frac{(| \mathbf{v}|!)^{\delta}}{(2\mathbf{R})^{\mathbf{v}}},\qquad\big{\|}\partial^{\mathbf{v}}f(\mathbf{ y})\big{\|}_{V^{*}}\leq\frac{\overline{f}}{2}\frac{(|\mathbf{v}|!)^{\delta}}{(2 \mathbf{R})^{\mathbf{v}}}.\] Notice that for \(\mathbf{v}=0\) Assumption 2.5 agrees with the upper bounds in (3). Notice also that the components of \(\mathbf{R}\) are readily scaled by the factor of \(2\). This leads to no loss of generality, but helps to shorten the forthcoming expressions. For example, in view of (6) Assumption 2.5 immediately implies \[\big{\|}\partial^{\mathbf{v}}a(\mathbf{y})\big{\|}_{L^{\infty}(D)}\leq\frac{\overline {a}\,\big{[}\frac{1}{2}\big{]}_{|\mathbf{v}|}}{\mathbf{R}^{\mathbf{v}}}(|\mathbf{v}|!)^{\delta -1},\qquad\big{\|}\partial^{\mathbf{v}}b(\mathbf{y})\big{\|}_{L^{\infty}(D)}\leq\frac {\overline{b}\,\big{[}\frac{1}{2}\big{]}_{|\mathbf{v}|}}{\mathbf{R}^{\mathbf{v}}}(|\mathbf{v}|! )^{\delta-1},\qquad\big{\|}\partial^{\mathbf{v}}f(\mathbf{y})\big{\|}_{V^{*}}\leq\frac {\overline{f}\,\big{[}\frac{1}{2}\big{]}_{|\mathbf{v}|}}{\mathbf{R}^{\mathbf{v}}}(|\mathbf{v}|! )^{\delta-1}. 
\tag{20}\]

The definitions of the norms used above are standard and will be recalled at the beginning of the next section.

## 3 Elliptic Semilinear PDEs with countably many parameters

For a fixed \(\mathbf{y}\in U\) the variational formulation of (1) reads

\[C_{m}^{2}\int_{D}a(\mathbf{x},\mathbf{y})\nabla u(\mathbf{x},\mathbf{y})\cdot\nabla v(\mathbf{x})+\int_{D}b(\mathbf{x},\mathbf{y})[u(\mathbf{x},\mathbf{y})]^{m}\,v(\mathbf{x})=C_{m}\int_{D}f(\mathbf{x},\mathbf{y})\,v(\mathbf{x}). \tag{21}\]

The Holder inequality implies that the second integral is well-defined for \(u(\cdot,\mathbf{y}),v(\cdot)\in L^{m+1}(D)\). By the Sobolev embedding theorem this is guaranteed for \(H^{1}_{0}(D)\) functions under the restrictions on the range of \(m\) already announced in (2). We now collect the required notations and facts from the theory of variational semilinear problems. By \(L^{p}(D)\) and \(L^{\infty}(D)\) we denote the spaces of \(p\)-power integrable and bounded functions equipped with standard norms. Throughout the paper, when it is unambiguous, we will drop the \(\mathbf{x}\)-dependence when referring to a function defined on \(D\) at a parameter value \(\mathbf{y}\). We introduce the Sobolev space \(V:=H^{1}_{0}(D)\) and its dual \(V^{*}:=H^{-1}(D)\), equipped with the following norms

\[\big{\|}u\big{\|}_{V}:=C_{m}\,\big{\|}u\big{\|}_{H^{1}_{0}(D)}\,,\qquad\big{\|}f\big{\|}_{V^{*}}:=\big{\|}f\big{\|}_{H^{-1}(D)}=\sup_{\begin{subarray}{c}v\in V\\ v\neq 0\end{subarray}}\frac{\int_{D}f\,v}{\|v\|_{H^{1}_{0}(D)}}=\sup_{\begin{subarray}{c}v\in V\\ v\neq 0\end{subarray}}\frac{\langle f,v\rangle}{\|v\|_{V}},\]

where the duality pairing on \(V\times V^{*}\) is denoted by \(\langle\cdot,\cdot\rangle\) as

\[\langle g,v\rangle:=C_{m}\int_{D}g\,v,\qquad\forall g\in V^{*}\text{ and }\forall v\in V. \tag{22}\]

To simplify the subsequent calculations, we introduce the rescaled Lebesgue spaces \(\mathcal{L}_{k}=L^{\frac{m+1}{k}}(D)\) for a fixed \(m\) and \(1\leq k\leq m+1\), equipped with the norm

\[\|u\|_{\mathcal{L}_{k}}:=\|u\|_{L^{\frac{m+1}{k}}(D)}\,.\]

The following Lemma shows Holder-type inequalities for the Lebesgue spaces \(\mathcal{L}_{k}\), which play an important role in the proof of regularity in Section 4.

**Lemma 3.1**.: _For a fixed \(m\in\mathbb{N}\) recall that \(\mathcal{L}_{k}=L^{\frac{m+1}{k}}(D)\) and let \(p,q\geq 1\). Then for any \(w\in\mathcal{L}_{p}\) and \(v\in\mathcal{L}_{q}\) with \(p+q\leq m+1\) it holds that_

\[\|w\,v\|_{\mathcal{L}_{p+q}}\leq\|w\|_{\mathcal{L}_{p}}\,\|v\|_{\mathcal{L}_{q}}\,. \tag{23}\]

_Moreover, for all \(k\leq m+1\) and \(u\in\mathcal{L}_{1}\), we have_

\[\left\|u^{k}\right\|_{\mathcal{L}_{k}}\leq\left\|u\right\|_{\mathcal{L}_{1}}^{k}. \tag{24}\]

Proof.: The definition of \(\mathcal{L}_{p+q}\) and the Holder inequality imply

\[\left\|w\,v\right\|_{\mathcal{L}_{p+q}}=\bigg{(}\int_{D}(wv)^{\frac{m+1}{p+q}}\bigg{)}^{\frac{p+q}{m+1}}\leq\left(\left(\int_{D}\left(w^{\frac{m+1}{p+q}}\right)^{\frac{p+q}{p}}\right)^{\frac{p}{p+q}}\left(\int_{D}\left(v^{\frac{m+1}{p+q}}\right)^{\frac{p+q}{q}}\right)^{\frac{q}{p+q}}\right)^{\frac{p+q}{m+1}}=\|w\|_{L^{\frac{m+1}{p}}(D)}\,\|v\|_{L^{\frac{m+1}{q}}(D)}=\|w\|_{\mathcal{L}_{p}}\,\|v\|_{\mathcal{L}_{q}}\,.\]

This shows the first inequality (23).
The second inequality (24) follows from (23) by induction, since for any \(n\in\mathbb{N}\)

\[\left\|u^{n+1}\right\|_{\mathcal{L}_{n+1}}=\left\|u^{n}\,u\right\|_{\mathcal{L}_{n+1}}\leq\left\|u^{n}\right\|_{\mathcal{L}_{n}}\,\left\|u\right\|_{\mathcal{L}_{1}}\leq\left\|u\right\|_{\mathcal{L}_{1}}^{n}\,\left\|u\right\|_{\mathcal{L}_{1}}=\left\|u\right\|_{\mathcal{L}_{1}}^{n+1}.\]

This finishes the proof.

By the Sobolev embedding theorem, for every \(u,v\in V\) and \(f\in V^{*}\) we have

\[\|u\|_{\mathcal{L}_{1}}=\|u\|_{L^{m+1}(D)}\leq C_{m}\,\|u\|_{H^{1}_{0}(D)}=\|u\|_{V}\,, \tag{25}\]

where the Sobolev embedding constant \(C_{m}\) can be calculated explicitly as in [8]. Moreover, we have

\[\langle f,v\rangle\leq C_{m}\,\|f\|_{H^{-1}(D)}\,\|v\|_{H^{1}_{0}(D)}=\|f\|_{V^{*}}\,\|v\|_{V}\,. \tag{26}\]

For a fixed \(\mathbf{y}\) we define the bilinear form \(A_{\mathbf{y}}:V\times V\to\mathbb{R}\) and the nonlinear form \(T_{\mathbf{y}}:V\times V\to\mathbb{R}\) by

\[A_{\mathbf{y}}(w,v):=C_{m}^{2}\int_{D}a(\mathbf{y})\nabla w\cdot\nabla v,\quad T_{\mathbf{y}}(w,v):=\int_{D}b(\mathbf{y})w^{m}v. \tag{27}\]

In view of the bounds (3) and Lemma 3.1, we have

\[A_{\mathbf{y}}(w,w)\geq\|w\|_{V}^{2},\qquad A_{\mathbf{y}}(w,v)\leq\frac{\overline{a}}{2}\|w\|_{V}\|v\|_{V},\qquad w,v\in V, \tag{28}\]
\[T_{\mathbf{y}}(w,v)\leq\frac{\overline{b}}{2}\|w\|_{V}^{m}\|v\|_{V},\qquad w,v\in V. \tag{29}\]

Thus, for every \(\mathbf{y}\in U\), the variational equivalent of (1) is the problem of finding a solution \(u\in V\) such that

\[A_{\mathbf{y}}(u(\mathbf{y}),v)+T_{\mathbf{y}}(u(\mathbf{y}),v)=\langle f(\mathbf{y}),v\rangle\quad\forall v\in V. \tag{30}\]

Note that the uniqueness of the solution of (30) generally fails. For instance, the (real-valued) problem for \(a\equiv b\equiv 1\) under certain restrictions on \(m\) even has infinitely many solutions with arbitrarily large norms, see [12, Theorem 7.2 and Remark 7.3] and [2]. In the following we introduce two different assumptions (Assumption 3.2 and Assumption 3.4) that are sufficient to guarantee the existence and uniqueness of the solution of (21). Roughly speaking, Assumption 3.2 admits an indefinite reaction term \(T_{\mathbf{y}}\), but requires that \(\overline{b}\) and \(\overline{f}\) cannot be large simultaneously. If, however, \(T_{\mathbf{y}}\) is nonnegative, no further restrictions are required, see Assumption 3.4. In both cases the unique solution is bounded,

\[\|u\|_{V}\leq\overline{u}, \tag{31}\]

where the upper bound \(\overline{u}\) will be determined below. We now give the details of the argument. The following assumption naturally extends the result in [2].

**Assumption 3.2**.: _For a fixed integer \(m\geq 1\), there exists a positive constant \(\gamma<1\) such that \(\overline{b}\) and \(\overline{f}\) satisfy_

\[\frac{\overline{b}}{2}=\frac{\gamma}{m\,\overline{f}^{m-1}}.\]

For the case \(m=1\), the problem (21) turns into a linear reaction-diffusion problem. The bilinear form for this problem is \(V\)-coercive for \(\frac{\overline{b}}{2}<1\),

\[C_{m}^{2}\int_{D}a(\mathbf{y})|\nabla w|^{2}+\int_{D}b(\mathbf{y})w^{2}\geq\|w\|_{V}^{2}-\frac{\overline{b}}{2}\|w\|_{L^{2}(D)}^{2}\geq\bigg{(}1-\frac{\overline{b}}{2}\bigg{)}\|w\|_{V}^{2}. \tag{32}\]

This property is guaranteed by Assumption 3.2 for \(m=1\). We now show that Assumption 3.2 also guarantees that (21) has a unique solution, by means of the Banach fixed-point theorem.
Let \(u_{0}(\mathbf{y})=0\) and define \(u_{n+1}(\mathbf{y})\) as the unique solution of \[A_{\mathbf{y}}(u_{n+1}(\mathbf{y}),v)=\langle f(\mathbf{y}),v\rangle-T_{\mathbf{y}}(u_{n}(\bm {y}),v)\qquad\forall v\in V. \tag{33}\] The following Lemma shows that the sequence \(\{u_{n}\}\) never leaves the closed set \[\mathcal{B}(0,\overline{f}):=\Big{\{}v\in V:\|v\|_{V}\leq\overline{f}\Big{\}}\] and converges a limit in \(\mathcal{B}(0,\overline{f})\). Obviously, when the sequence \(\{u_{n}(\mathbf{y})\}\) admits a limit point, it would be a solution of (21). Indeed, the following Lemma proves the above statement. **Lemma 3.3**.: _For every \(\mathbf{y}\in U\) and \(m\geq 2\) be an even or odd integer, the sequence \(\{\|u_{n}(\mathbf{y})\|_{V}\}\) is bounded by \(\overline{f}\) and converges to a fixed point in \(\mathcal{B}(0,\overline{f})\)._ Proof.: We will prove boundedness of the sequence by induction with respect to \(n\). The Lax-Milgram Lemma and (3) imply \[\|u_{1}\|_{V}\leq\|f\|_{V^{*}}\leq\frac{\overline{f}}{2}\] and hence \(u_{1}\in\mathcal{B}(0,\overline{f})\). Assume now that all \(u_{1},\ldots,u_{n}\) belong to this neighbourhood and prove that the same holds for \(u_{n+1}\). For this, we substitute \(v=u_{n+1}(\mathbf{y})\) into (33) and recall (26), (3), (29) and Assumption 3.2 to obtain \[\|u_{n+1}(\mathbf{y})\|_{V}\leq\frac{\overline{f}}{2}+\frac{\overline{b}}{2}\,\|u _{n}(\mathbf{y})\|_{V}^{m}\leq\frac{\overline{f}}{2}+\frac{\gamma}{m\,\overline{ f}^{m-1}}\,\overline{f}^{m}\leq\overline{f},\] where in the last step we have used that \(\gamma<1\) and \(m\geq 2\). This shows that \(u_{n}\in\mathcal{B}(0,\overline{f})\) for all \(n\). We now prove that the sequence converges to a limit in \(V\). We have that \(u_{n}(\mathbf{y})\) is the solution of \[A_{\mathbf{y}}(u_{n}(\mathbf{y}),v)=\langle f(\mathbf{y}),v\rangle-T_{\mathbf{y}}(u_{n-1}(\bm {y}),v)\quad\forall v\in V.\] Subtract both sides of the above equation from (33) and set \(v=u_{n+1}(\mathbf{y})-u_{n}(\mathbf{y})\) to obtain \[C_{m}^{2}\int_{D}a(\mathbf{y})\left|\nabla u_{n+1}(\mathbf{y})-\nabla u_{n}(\mathbf{y}) \right|^{2}=\int_{D}b(\mathbf{y})\left(\left[u_{n-1}(\mathbf{y})\right]^{m}-\left[u_{n }(\mathbf{y})\right]^{m}\right)(u_{n+1}(\mathbf{y})-u_{n}(\mathbf{y})).\] The left-hand side is bounded by \(\left\|u_{n+1}-u_{n}\right\|_{V}^{2}\) from below. To obtain an upper bound for the right-hand side, recall the elementary identity \(a^{m}-b^{m}=(a-b)\sum_{j=0}^{m-1}a^{m-1-j}b^{j}\) and Lemma 3.1. This implies \[\left\|u_{n+1}(\mathbf{y})-u_{n}(\mathbf{y})\right\|_{V}^{2} \leq\frac{\overline{b}}{2}\left\|u_{n+1}(\mathbf{y})-u_{n}(\mathbf{y}) \right\|_{\mathcal{L}_{1}}\left\|u_{n}(\mathbf{y})-u_{n-1}(\mathbf{y})\right\|_{ \mathcal{L}_{1}}\sum_{j=0}^{m-1}\left\|u_{n}(\mathbf{y})\right\|_{\mathcal{L}_{1} }^{m-1-j}\left\|u_{n-1}(\mathbf{y})\right\|_{\mathcal{L}_{1}}^{j}\] \[\leq\frac{\overline{b}}{2}\left\|u_{n+1}(\mathbf{y})-u_{n}(\mathbf{y}) \right\|_{V}\left\|u_{n}(\mathbf{y})-u_{n-1}(\mathbf{y})\right\|_{V}\sum_{j=0}^{m-1} \left\|u_{n}(\mathbf{y})\right\|_{V}^{m-1-j}\left\|u_{n-1}(\mathbf{y})\right\|_{V}^{j}, \tag{34}\] where (25) has been used in the last step. Since \(\{u_{n}\}\subset\mathcal{B}(0,\overline{f})\) for all \(n\), the sum in the right-hand side of (34) is bounded by \(m\overline{f}^{m-1}\). 
This and Assumption 3.2 imply the contraction property \[\left\|u_{n+1}(\mathbf{y})-u_{n}(\mathbf{y})\right\|_{V}\leq\gamma\left\|u_{n}(\mathbf{y} )-u_{n-1}(\mathbf{y})\right\|_{V}.\] Since \(\gamma<1\), the sequence \(\{u_{n}(\mathbf{y})\}\) converges to a fixed point in \(\mathcal{B}(0,\overline{f})\) by the Banach fixed point theorem. Estimate (32) and Lemma 3.3 imply (31) with \[\overline{u}:=\begin{cases}\frac{\overline{f}}{1-\gamma}&\text{ if }m=1\\ \overline{f}&\text{ if }m\geq 2\end{cases}. \tag{35}\] According to the Assumption 3.2, the magnitude of \(b\) will decrease as \(m\) and \(\overline{f}\) grow. As an alternative, we consider Assumption 3.4, which helps to relax this, when \(T_{\mathbf{y}}\) is nonnegative. **Assumption 3.4**.: _The function \(b(\mathbf{y})\) is non-negative for almost \((\mathbf{x},\mathbf{y})\in D\times U\) and \(m\) is an odd positive integer such that \((d,m)\in\mathcal{M}\)._ In case Assumption 3.4 is satisfied, we choose the operator \(\mathcal{S}_{\mathbf{y}}:V\to V^{*}\) such that \(\left\langle\mathcal{S}_{\mathbf{y}}(u),v\right\rangle=A_{\mathbf{y}}(u,v)+T_{\mathbf{y}}( u,v)\). As an immediate consequence of (28) and (29), the operator \(\mathcal{S}_{\mathbf{y}}\) is continuous and bounded. Moreover, \(\mathcal{S}_{\mathbf{y}}\) is strictly monotone operator, since \(b(\mathbf{y})\geq 0\) and \((\cdot)^{m}\) is a monotonously increasing function when \(m\) is an odd number. Indeed, for every \(w,v\in V\) such that \(w\neq v\), we have \[\left\langle\mathcal{S}_{\mathbf{y}}(w)-\mathcal{S}_{\mathbf{y}}(v),w-v\right\rangle= C_{m}^{2}\int_{D}a(\mathbf{y})\left|\nabla w-\nabla v\right|^{2}+\int_{D}b(\mathbf{y})( w^{m}-v^{m})(w-v)\geq\left\|w-v\right\|_{V}^{2}>0.\] Substitute \(v=0\) in the above inequality and notice that \(\mathcal{S}_{\mathbf{y}}(0)=0\) to arrive at \[\left\langle\mathcal{S}_{\mathbf{y}}(w),w\right\rangle=A_{\mathbf{y}}(w,w)+T_{\mathbf{y}}( w,w)\geq\left\|w\right\|_{V}^{2}\qquad\forall w\in V, \tag{36}\] and thus \(\mathcal{S}_{\mathbf{y}}\) is coercive. By the Minty-Browder Theorem, the operator \(\mathcal{S}_{\mathbf{y}}\) is bijective, and hence the problem (21) has uniquely determined solution in \(V\). In this case (36) gives \[\left\|u\right\|_{V}^{2}\leq\left\langle\mathcal{S}_{\mathbf{y}}(u),u\right\rangle= \left\langle f,u\right\rangle\leq\left\|f\right\|_{V}\cdot\left\|u\right\|_{V}\] and hence, by (3), we may choose \(\overline{u}=\frac{\overline{f}}{2}\) for the upper bound (31). For each \(\mathbf{y}\in U\), we denote by \(\widetilde{A}_{\mathbf{y}}(u,w,v)\) the linearization of (30) mapping from \(V\times V\times V\to\mathbb{R}\) as \[\widetilde{A}_{\mathbf{y}}(u,w,v)=C_{m}^{2}\int_{D}a(\mathbf{y})\nabla w\cdot\nabla v+m \int_{D}b(\mathbf{y})\,u^{m-1}\,w\,v. \tag{37}\] The following Lemma shows the coercivity of \(\widetilde{A}_{\mathbf{y}}\), which is required for the regularity proof in Section 4. **Lemma 3.5**.: _Let \(a,b\) and \(f\) satisfy Assumption 3.2 or Assumption 3.4. The operator \(\widetilde{A}_{\mathbf{y}}\) is uniformly coercive in \(\mathbf{y}\), i.e._ \[\widetilde{A}_{\mathbf{y}}(u,v,v)\geq C_{A}\left\|v\right\|_{V}^{2}, \quad\forall v\in V\text{ and }\forall u\in\mathcal{B}(0,\overline{f}) \tag{38}\] _where \(C_{A}:=1\) if Assumption 3.4 holds and \(C_{A}:=1-\gamma\) if Assumption 3.2 holds._ Proof.: Assumption 3.4 sets \(b\) non-negative and \(m\) an odd number. 
This implies that \(b(\mathbf{y})u^{m-1}\) is nonnegative and

\[\widetilde{A}_{\mathbf{y}}(u,v,v)\geq C_{m}^{2}\left\|v\right\|_{H^{1}_{0}(D)}^{2}+m\int_{D}b(\mathbf{y})\,u^{m-1}\,v^{2}\geq\left\|v\right\|_{V}^{2}.\]

This shows that \(C_{A}=1\) in this case. If, instead, Assumption 3.2 is valid, analogous considerations imply

\[\widetilde{A}_{\mathbf{y}}(u,v,v)\geq\left(1-m\,\frac{\overline{b}}{2}\left\|u\right\|_{V}^{m-1}\right)\left\|v\right\|_{V}^{2}.\]

For \(m=1\), we have \(\overline{b}/2=\gamma\), and therefore \(C_{A}=1-\gamma\). In the case \(m\geq 2\), since \(u\in\mathcal{B}(0,\overline{f})\), we obtain

\[\widetilde{A}_{\mathbf{y}}(u,v,v)\geq\left(1-m\,\left(\frac{\gamma}{m\overline{f}^{m-1}}\right)\overline{f}^{m-1}\right)\left\|v\right\|_{V}^{2}\geq\left(1-\gamma\right)\left\|v\right\|_{V}^{2}.\]

This shows that \(C_{A}=1-\gamma\) and finishes the proof.

## 4 Parametric regularity and the formulation of the main result

The following theorem is the main regularity result of this paper.

**Theorem 4.1**.: _Let the coefficients \(a,b\) and the right-hand side \(f\) of (21) satisfy Assumption 2.5 for some \(\delta\geq 1\) and suppose moreover that either Assumption 3.2 or Assumption 3.4 holds. Then the solution \(u\) of (21) is of class Gevrey-\(\delta\). More precisely, the following estimates are valid for all \(\mathbf{v}\in\mathcal{F}\) and \(\mathbf{y}\in U\)_

\[\left\|\partial^{\mathbf{v}}u(\mathbf{y})\right\|_{V}\leq\frac{C_{u}\,\rho^{|\mathbf{v}|}\left[\frac{1}{2}\right]_{|\mathbf{v}|}(|\mathbf{v}|!)^{\delta-1}}{\mathbf{R}^{\mathbf{v}}} \tag{39}\]

_and_

\[\left\|\partial^{\mathbf{v}}u(\mathbf{y})\right\|_{H^{1}_{0}(D)}\leq\frac{C_{u}\,\rho^{|\mathbf{v}|}(|\mathbf{v}|!)^{\delta}}{C_{m}\,\mathbf{R}^{\mathbf{v}}}. \tag{40}\]

_The constants in the above bounds are explicitly determined as_

\[C_{u}:=\overline{u}\qquad\text{and}\qquad\rho:=C_{A}^{-1}(3\,\overline{a}+3(4\overline{u})^{m-1}\,\overline{b}+1). \tag{41}\]

To prove the above Theorem, we require auxiliary upper bounds for derivatives of the solution from Lemma 4.2 and Lemma 4.3 below.

**Lemma 4.2**.: _For sufficiently regular solutions of (30) there holds_

\[\begin{split} C_{A}\left\|\partial^{\mathbf{v}+\mathbf{e}}u\right\|_{V}\leq&\sum_{\mathbf{0}\leq\mathbf{\eta}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\left\|\partial^{\mathbf{v}+\mathbf{e}-\mathbf{\eta}}a\right\|_{L^{\infty}(D)}\left\|\partial^{\mathbf{\eta}}u\right\|_{V}+\sum_{\mathbf{0}\leq\mathbf{\eta}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\left\|\partial^{\mathbf{v}+\mathbf{e}-\mathbf{\eta}}b\right\|_{L^{\infty}(D)}\left\|\partial^{\mathbf{\eta}}(u^{m})\right\|_{\mathcal{L}_{m}}\\ &+\left\|\partial^{\mathbf{v}+\mathbf{e}}f\right\|_{V^{*}}+\sum_{\mathbf{0}<\mathbf{\eta}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\left\|\partial^{\mathbf{\eta}}a\right\|_{L^{\infty}(D)}\left\|\partial^{\mathbf{v}+\mathbf{e}-\mathbf{\eta}}u\right\|_{V}\\ &+m\sum_{\mathbf{0}<\mathbf{\eta}\leq\mathbf{v}}\sum_{\mathbf{0}\leq\mathbf{\ell}\leq\mathbf{\eta}}\binom{\mathbf{v}}{\mathbf{\eta}}\binom{\mathbf{\eta}}{\mathbf{\ell}}\left\|\partial^{\mathbf{\eta}-\mathbf{\ell}}b\right\|_{L^{\infty}(D)}\left\|\partial^{\mathbf{\ell}}(u^{m-1})\right\|_{\mathcal{L}_{m-1}}\left\|\partial^{\mathbf{v}+\mathbf{e}-\mathbf{\eta}}u\right\|_{V},\end{split} \tag{42}\]

_where \(\mathbf{e}\) is a unit multi-index in \(\mathcal{F}\), i.e. \(|\mathbf{e}|=1\)._

Proof.: We recall the variational formulation (21) and take the \(\mathbf{e}\)-th derivative of both sides with respect to \(\mathbf{y}\).
Collecting the terms with \(\partial^{\mathbf{e}}u\) on the left-hand side we obtain \[C_{m}^{2}\int_{D}a\,\partial^{\mathbf{e}}\nabla u\cdot\nabla v+m\int_{D}b\,u^{m-1} \partial^{\mathbf{e}}u\,v=-C_{m}^{2}\int_{D}\partial^{\mathbf{e}}a\nabla u\cdot\nabla v- \int_{D}\partial^{\mathbf{e}}b\,u^{m}v+C_{m}\int_{D}\partial^{\mathbf{e}}fv. \tag{43}\] Observe that the left-hand side can be expressed as \(\widetilde{A}_{\mathbf{y}}(u,\partial^{\mathbf{e}}u,v)\), where the linearized form \(\widetilde{A}_{\mathbf{y}}\) has been introduced in (37). Notice that the first and the second argument of this expression depend on \(\mathbf{y}\). Therefore, if we take higher \(\mathbf{v}\)-th order derivatives of (43), both the first and the second argument of \(\widetilde{A}_{\mathbf{y}}\) will generate further terms by the Leibniz product rule. But the highest order derivative \(\partial^{\mathbf{v}+\mathbf{e}}u\) will only appear in the term \(\widetilde{A}_{\mathbf{y}}(u,\partial^{\mathbf{v}+\mathbf{e}}u,v)\). Isolating this term on the left-hand side, we obtain \[\begin{split}\widetilde{A}_{\mathbf{y}}(u,\partial^{\mathbf{v}+\mathbf{e}} u,v)=&-C_{m}^{2}\sum_{\mathbf{0}\leq\mathbf{e}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{ \eta}}\int_{D}\partial^{\mathbf{v}+\mathbf{e}-\eta}a\,\partial^{\mathbf{e}}\nabla u\cdot \nabla v-\sum_{\mathbf{0}\leq\mathbf{q}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\int_{D} \partial^{\mathbf{v}+\mathbf{e}-\eta}b\,\partial^{\mathbf{q}}(u^{m})v+C_{m}\int_{D} \partial^{\mathbf{v}+\mathbf{e}}fv\\ &-C_{m}^{2}\sum_{\mathbf{0}<\mathbf{q}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta} }\int_{D}\partial^{\mathbf{q}}a\,\partial^{\mathbf{v}+\mathbf{e}-\eta}\nabla u\cdot\nabla v -m\sum_{\mathbf{0}<\mathbf{q}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\sum_{\mathbf{0}\leq\mathbf{ f}\leq\mathbf{q}}\binom{\mathbf{\eta}}{\mathbf{\ell}}\int_{D}\partial^{\mathbf{q}-\mathbf{f}}b\, \partial^{\mathbf{f}}(u^{m-1})\partial^{\mathbf{v}+\mathbf{e}-\eta}u\,v.\end{split} \tag{44}\] Since (44) is valid for all \(v\in V\) we may select specifically \(v=\partial^{\mathbf{v}+\mathbf{e}}u\). According to Lemma 3.5, the left-hand side admits the bound \(\widetilde{A}_{\mathbf{y}}(u,\partial^{\mathbf{v}+\mathbf{e}}u,\partial^{\mathbf{v}+\mathbf{e}}u) \geq C_{A}\left\|\partial^{\mathbf{v}}u\right\|_{V}^{2}\). 
Applying the triangle and the Cauchy-Schwarz inequality, we get the estimate \[\begin{split} C_{A}\left\|\partial^{\mathbf{v}}u\right\|_{V}\leq& \sum_{\mathbf{0}\leq\mathbf{q}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\left\| \partial^{\mathbf{v}+\mathbf{e}-\eta}a\right\|_{L^{\mathbf{v}}(D)}\left\|\partial^{\mathbf{q} }\nabla u\right\|_{V}+\sum_{\mathbf{0}\leq\mathbf{q}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta} }\left\|\partial^{\mathbf{v}+\mathbf{e}-\eta}b\right\|_{L^{\mathbf{v}}(D)}\left\| \partial^{\mathbf{q}}(u^{m})\right\|_{\mathcal{L}_{m}}+\left\|\partial^{\mathbf{v}+ \mathbf{e}}f\right\|_{V^{\mathbf{v}}}\\ &+\sum_{\mathbf{0}<\mathbf{q}\leq\mathbf{v}}\binom{\mathbf{v}}{\mathbf{\eta}}\left\| \partial^{\mathbf{q}}a\right\|_{L^{\mathbf{v}}(D)}\left\|\partial^{\mathbf{v}+\mathbf{e}-\eta} \nabla u\right\|_{V}+m\sum_{\mathbf{0}<\mathbf{q}\leq\mathbf{v}}\sum_{\mathbf{0}\leq\mathbf{f}\leq \mathbf{q}}\binom{\mathbf{v}}{\mathbf{\eta}}\left(\frac{\mathbf{\eta}}{\mathbf{\ell}}\right) \left\|\partial^{\mathbf{q}-\mathbf{f}}b\right\|_{L^{\mathbf{v}}(D)}\left\|\partial^{\mathbf{ q}}(u^{m-1})\right\|_{\mathcal{L}_{m-1}}\left\|\partial^{\mathbf{v}+\mathbf{e}-\eta}u \right\|_{\mathcal{L}_{1}},\end{split}\] where we have applied the Holder inequality (23) and the Sobolev embedding estimate (25) for the terms on the right-hand side. Notice that \(\left\|\partial^{\mathbf{v}}u\right\|_{V}\) is cancelled on the both sides. Finally, observe that the Sobolev embedding also yields \(\left\|\partial^{\mathbf{v}+\mathbf{e}-\eta}u\right\|_{\mathcal{L}_{1}}\leq\left\| \partial^{\mathbf{v}+\mathbf{e}-\eta}u\right\|_{V}\) and, hence, the statement of the Lemma. The right-hand side of (42) contains the terms of the type \(\left\|\partial^{\mathbf{u}}(u^{k})\right\|_{\mathcal{L}_{k}}\), where \(k=m-1\) or \(m\) and \(\mu\in\mathcal{F}\). The following Lemma determines explicit upper bounds for these powers of \(u\), if corresponding bounds for \(u\) itself are available. This result together with Lemma 4.2 is the key ingredient in the inductive proof of Theorem 4.1. **Lemma 4.3**.: _Let \(\mathbf{\mu}\in\mathcal{F}\), suppose that (39) holds for all multi index \(\mathbf{\ell}\leq\mathbf{\mu}\), i.e._ \[\left\|\partial^{\mathbf{\ell}}u(\mathbf{y})\right\|_{V}\leq\frac{C_{u}\partial^{ \mathbf{\ell}}\left[\frac{1}{2}\right]_{\mathbf{\ell}}}{\mathbf{R}^{\mathbf{\ell}}}(|\mathbf{ \ell}|!)^{\delta-1}\quad\forall\mathbf{\ell}\leq\mathbf{\mu}. \tag{45}\] _Then, for all \(k\leq m\) the following estimates are valid_ \[\left\|\partial^{\mathbf{\mu}}(u(\mathbf{y})^{k})\right\|_{\mathcal{L}_{k}}\leq\frac{ 4^{k-1}C_{u}^{k}\partial^{\left\|\mathbf{\ell}\right\|}\left[\frac{1}{2}\right]_{ \mathbf{\mu}}}{\mathbf{R}^{\mathbf{\mu}}}(|\mathbf{\mu}|!)^{\delta-1}. \tag{46}\] Proof.: We will prove the above Lemma by induction with respect to \(m\). The basis of induction, the case \(m=1\), follows from the assumption (45) and the Sobolev embedding (25). For the inductive step, we now assume that (46) holds for all number \(k\leq m\) and prove that this implies the same bound for \(k=m+1\). 
Applying Leibniz general product rule to \(u^{m+1}=u^{m}u\) and the triangle inequality, we arrive at \[\left\|\partial^{\mathbf{\sigma}}(u^{m+1})\right\|_{\mathcal{L}_{m+1}}=\left\|\sum_{ \mathbf{0}\leq\mathbf{\ell}\leq\mathbf{\mu}}\binom{\mathbf{\mu}}{\mathbf{\ell}}\,\partial^{\mathbf{ \ell}}(u^{m})\partial^{\mathbf{\mu}-\mathbf{\ell}}u\right\|_{\mathcal{L}_{m+1}}\leq\sum_ {\mathbf{0}\leq\mathbf{\ell}\leq\mathbf{\mu}}\binom{\mathbf{\mu}}{\mathbf{\ell}}\left\|\partial^{ \mathbf{\ell}}(u^{m})\partial^{\mathbf{\mu}-\mathbf{\ell}}u\right\|_{\mathcal{L}_{m+1}}.\] By the Holder inequality (23) with \(w=\partial^{\ell}(u^{m})\), \(v=\partial^{\mu}-t\), \(p=m\), \(q=1\) and recalling the inductive assumption for the individual terms we obtain \[\left\|\partial^{\mu}(u^{m+1})\right\|_{\mathcal{L}_{\infty}} \leq\sum_{\mathbf{0}\leq\ell\leq\mu}\left(\mathbf{\mu}\atop\mathbf{\ell}\right) \left\|\partial^{\ell}(u^{m})\right\|_{\mathcal{L}_{\infty}}\left\|\partial^{ \mu-t}u\right\|_{\mathcal{L}_{1}}\] \[\leq\sum_{\mathbf{0}\leq\ell\leq\mu}\left(\mathbf{\mu}\atop\mathbf{\ell} \right)\left(\frac{4^{m-1}C_{u}^{m}\,\rho^{|\mathbf{\ell}|}\left[\frac{1}{2}\right] _{|\mathbf{\mu}|}}{\mathbf{R}^{\ell}}(|\mathbf{\ell}|!)^{\delta-1}\right)\left(\frac{C_{u} \rho^{|\mathbf{\mu}-\ell|}\left[\frac{1}{2}\right]_{|\mathbf{\mu}-\ell|}}{\mathbf{R}^{\mathbf{ \mu}-\ell}}(|\mathbf{\mu}-\ell|!)^{\delta-1}\right)\] \[=\frac{4^{m-1}C_{u}^{m+1}\rho^{|\mathbf{\mu}|}}{\mathbf{R}^{\mu}}\sum_{ \mathbf{0}\leq\ell\leq\mu}\left(\mathbf{\mu}\atop\mathbf{\ell}\right)\left[\frac{1}{2} \right]_{|\mathbf{\mu}|}\left[\frac{1}{2}\right]_{|\mathbf{\mu}-\ell|}|\mathbf{\ell}|!^{ \delta-1}\left|\mathbf{\mu}-\ell\right|!^{\delta-1}\] \[\leq\frac{4^{m}C_{u}^{m+1}\rho^{|\mathbf{\mu}|}}{\mathbf{R}^{\mu}}\left[ \frac{1}{2}\right]_{|\mathbf{\mu}|}|\mathbf{\mu}|!^{\delta-1},\] where we have used (13) and (14) in the last step. This finishes the proof. Proof of Theorem 4.1.: Observe that (40) is a simple corollary from (39) by changing from the \(V\)-norm to the \(H_{0}^{1}(D)\)-norm and the trivial bound \(\left[\frac{1}{2}\right]_{n}\leq n!\). Therefore it remains to prove (39). Here we argue by induction with respect to the order of the derivative \(\mathbf{\nu}\). For the first-order derivatives we use (42) with \(\mathbf{\nu}=\mathbf{0}\) and get \[C_{A}\left\|\partial^{\ell}u\right\|_{V}\leq\left\|\partial^{\ell}a\right\|_{ L^{\infty}(D)}\left\|u\right\|_{V}+\left\|\partial^{\ell}b\right\|_{L^{\infty}(D)} \left\|u^{m}\right\|_{\mathcal{L}_{\infty}}+\left\|\partial^{\ell}f\right\|_{V }.\] For the term with \(u^{m}\) we recall the Holder estimate (24), the Sobolev embedding (25), and (31) to obtain the upper bound \(\left\|u^{m}\right\|_{\mathcal{L}_{\infty}}\leq\left\|u\right\|_{\mathcal{L}_ {1}}^{m}\leq\overline{u}^{m}\). Using this and the regularity assumption (20), we derive \[\left\|\partial^{\ell}u\right\|_{V}\leq\frac{\overline{a}\left[\frac{1}{2} \right]_{1}}{C_{A}\mathbf{R}^{\ell}}\,\overline{u}+\frac{\overline{b}\left[\frac{ 1}{2}\right]_{1}}{C_{A}\mathbf{R}^{\ell}}\,\overline{u}^{m}+\frac{\overline{f} \left[\frac{1}{2}\right]_{1}}{C_{A}\mathbf{R}^{\ell}}=\overline{u}\left(\overline{ a}+\overline{b}\,\overline{u}^{m-1}+1\right)\,\frac{\left[\frac{1}{2}\right]_{1}}{C_{A} \mathbf{R}^{\ell}}\leq C_{u}\rho\frac{\left[\frac{1}{2}\right]_{1}}{\mathbf{R}^{\ell}}\] Thus, the base of induction is satisfied for the constants \(C_{u}\) and \(\rho\) defined in (41). Suppose now that (39) is valid for the \(\mathbf{\nu}\)-th derivative. 
Our aim is to show that the same bound holds for the \((\mathbf{\nu}+\mathbf{e})\)-th order derivative, where \(\mathbf{e}\) is a unit multiindex. For this we combine (42) with regularity assumptions (20), the inductive assumption, and (46) for \(\mathbf{\mu}\leq\mathbf{\nu}\) (this is valid by Lemma 4.3 and the inductive assumption) to arrive at \[C_{A}\left\|\partial^{\mathbf{\nu}+\mathbf{e}}u\right\|_{V} \leq\sum_{\mathbf{0}\leq\mathbf{0}\leq\mathbf{0}\leq\mathbf{0}}\left(\mathbf{\nu} \atop\mathbf{\eta}\right)\frac{\overline{a}\left[\frac{1}{2}\right]_{|\mathbf{\nu}+\bm {e}-\mathbf{\eta}|}}{\mathbf{R}^{\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}}}(|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta }|!)^{\delta-1}\,\frac{C_{u}\rho^{|\mathbf{\eta}|}\left[\frac{1}{2}\right]_{|\mathbf{ \eta}|}}{\mathbf{R}^{\mathbf{\eta}}}(|\mathbf{\eta}|!)^{\delta-1}\] \[+\sum_{\mathbf{0}\leq\mathbf{0}\leq\mathbf{0}}\left(\mathbf{\nu}\atop\mathbf{\eta} \right)\frac{\overline{b}\left[\frac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|} }{\mathbf{R}^{\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}}}(|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|!)^{\delta- 1}\,\frac{4^{m-1}C_{u}^{m}\,\rho^{|\mathbf{\eta}|}\left[\frac{1}{2}\right]_{|\mathbf{ \eta}|}}{\mathbf{R}^{\mathbf{\eta}}}(|\mathbf{\eta}|!)^{\delta-1}+\frac{\overline{f} \left[\frac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{e}|}}{\mathbf{R}^{\mathbf{\nu}+\mathbf{e}}}(|\mathbf{\nu }+\mathbf{e}|!)^{\delta-1}\] \[+\sum_{\mathbf{0}\leq\mathbf{0}\leq\mathbf{0}}\left(\mathbf{\nu}\atop\mathbf{\eta} \right)\frac{\overline{a}\left[\frac{1}{2}\right]_{|\mathbf{\eta}|}}{\mathbf{R}^{\mathbf{ \eta}}}(|\mathbf{\eta}|!)^{\delta-1}\,\frac{C_{u}\rho^{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|} \left[\frac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|}}{\mathbf{R}^{\mathbf{\nu}+\mathbf{ e}-\mathbf{\eta}}}(|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|!)^{\delta-1}\] \[+\sum_{\mathbf{0}\leq\mathbf{0}\leq\mathbf{0}}\sum_{\mathbf{0}\leq\mathbf{0}}\frac{ \left(\mathbf{\nu}\atop\mathbf{\eta}\right)\,\left(\mathbf{\eta}\atop\mathbf{\ell}\right)\, \overline{b}\left[\frac{1}{2}\right]_{|\mathbf{\eta}|}}{\mathbf{R}^{\mathbf{\eta}-\ell}}(| \mathbf{\eta}-\ell|!)^{\delta-1}\,\frac{4^{m-2}C_{u}^{m-1}\,\rho^{|\mathbf{\eta}|} \left[\frac{1}{2}\right]_{|\mathbf{\eta}|}}{\mathbf{R}^{\ell}}(|\mathbf{\ell}|!)^{\delta- 1}\,\frac{C_{u}\rho^{|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|}\left[\frac{1}{2}\right]_{| \mathbf{\eta}|}}{\mathbf{R}^{\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}}}(|\mathbf{\nu}+\mathbf{e}-\mathbf{\eta}|! )^{\delta-1}.\] Bound (13) yields estimates for products of the factorial terms. Observe that \(0<C_{A}\leq 1\) and therefore \(\rho\geq 1\). 
This helps to extract common factors on the right-hand side and obtain \[C_{A}\left\|\partial^{\nu+\mathbf{\epsilon}}u\right\|_{V} \leq\frac{\rho^{|\mathbf{\epsilon}|}(|\mathbf{\nu}+\mathbf{\epsilon}|!)^{\delta -1}}{\mathbf{R}^{\nu+\mathbf{\epsilon}}}\left(\overline{a}C_{u}\sum_{\mathbf{0}\in\mathbf{ \epsilon}\ni\mathbf{\epsilon}}\left(\begin{array}{c}\mathbf{\nu}\\ \mathbf{\eta}\end{array}\right)\left[\tfrac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{\epsilon} -\mathbf{\eta}|}\left[\tfrac{1}{2}\right]_{|\mathbf{\eta}|}\] \[+\overline{b}\,4^{m-2}C_{u}^{m}\sum_{\mathbf{0}\in\mathbf{\epsilon}\ni\mathbf{ \epsilon}}\left(\begin{array}{c}\mathbf{\nu}\\ \mathbf{\eta}\end{array}\right)\left[\tfrac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{\epsilon} -\mathbf{\eta}|}\left[\tfrac{1}{2}\right]_{|\mathbf{\eta}|}+\overline{f}\left[\tfrac{1 }{2}\right]_{|\mathbf{\nu}+\mathbf{\epsilon}|}\] \[+\overline{a}C_{u}\sum_{\mathbf{0}\in\mathbf{\epsilon}\ni\mathbf{\epsilon}} \left(\begin{array}{c}\mathbf{\nu}\\ \mathbf{\eta}\end{array}\right)\left[\tfrac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{\epsilon} -\mathbf{\eta}|}\left[\tfrac{1}{2}\right]_{|\mathbf{\eta}|}\] \[+\overline{b}\,4^{m-2}C_{u}^{m}\sum_{\mathbf{0}\in\mathbf{\epsilon}\ni\mathbf{ \epsilon}}\sum_{\mathbf{0}\in\mathbf{\epsilon}\ni\mathbf{\epsilon}}\left(\begin{array}{ c}\mathbf{\nu}\\ \mathbf{\eta}\end{array}\right)\left(\begin{array}{c}\mathbf{\eta}\\ \mathbf{\ell}\end{array}\right)\left[\tfrac{1}{2}\right]_{|\mathbf{\eta}-\mathbf{\ell}|} \left[\tfrac{1}{2}\right]_{|\mathbf{\mu}}\left[\tfrac{1}{2}\right]_{|\mathbf{\nu}+\bm {\epsilon}-\mathbf{\eta}|}\bigg{)}.\] According to (15)-(17), the bound for first and second sums is \(2[\tfrac{1}{2}]_{|\mathbf{\nu}+\mathbf{\epsilon}|}\), the bound for fourth term is \([\tfrac{1}{2}]_{|\mathbf{\nu}+\mathbf{\epsilon}|}\), the last sum bounded by \(4[\tfrac{1}{2}]_{|\mathbf{\nu}+\mathbf{\epsilon}|}\). Recalling that \(\overline{f}\leq\overline{u}=C_{u}\) we arrive at \[\left\|\partial^{\nu+\mathbf{\epsilon}}u\right\|_{V} \leq C_{u}\,C_{A}^{-1}\,\left(2\overline{a}+2\overline{b}(4C_{u} )^{m-1}+1+\overline{a}+\overline{b}(4C_{u})^{m-1}\right)\frac{\rho^{|\mathbf{ \epsilon}|}(|\mathbf{\nu}+\mathbf{\epsilon}|!)^{\delta-1}}{\mathbf{R}^{\nu+\mathbf{\epsilon}} }\left[\tfrac{1}{2}\right]_{|\mathbf{\nu}+\mathbf{\epsilon}|}\] \[\leq C_{u}\frac{\rho^{|\mathbf{\epsilon}|}(|\mathbf{\nu}+\mathbf{\epsilon}|!)^{ \delta-1}}{\mathbf{R}^{\nu+\mathbf{\epsilon}}}\left[\tfrac{1}{2}\right]_{|\mathbf{\nu}+\bm {\epsilon}|},\] where we have used the definition (41) of \(\rho\) in the last step. We completes the inductive argument and thereby the proof of the theorem. ## 5 Applications and numerical experiments In this section we give two numerical examples that demonstrate how the abstract regularity result of Theorem 4.1 can be applied to mathematically analyse convergence of numerical methods for nonlinear reaction-diffusion problems under uncertainty. ### Gauss-Legendre quadrature Let the domain \(D=(0,1)^{2}\) and consider the problem (1) with \(a\equiv 1\), \(f=3(\cos(2\pi x_{1})+1)(\cos(3\pi x_{2})+1)\), \(m=3\) and \(b\) is one of following functions \[b^{(1)}(\mathbf{x},y)=50(\cos^{2}(15\pi x_{1}+y^{10})+1)(\cos^{2}(17\pi x_{2}+y^{ 25})+1), \tag{47}\] \[b^{(2)}(\mathbf{x},y)=\left(\exp\left(-\frac{x_{1}^{2}+x_{2}^{2}}{y+1}\right)+1 \right)(\cos^{2}(15\pi x_{1})+1)(\cos^{2}(17\pi x_{2})+1). \tag{48}\] Here \(y\) is a scalar real parameter with the range \([-1,1]\). 
Since \(m\) is odd and both \(b^{(1)}\) and \(b^{(2)}\) are nonnegative, Assumption 3.4 is valid and therefore corresponding solutions \(u^{(1)}\) and \(u^{(2)}\) of (21) are uniquely determined in \(V\) for every \(y\in[-1,1]\). Consider the functional \(\mathcal{G}(u)(y):=u(\mathbf{x}_{0},y)\), \(\mathbf{x}_{0}=(0.5,0.5)\), i.e. the point evaluation in the center of the computational domain \(D\). Our goal is to compute numerically the (rescaled) expectation of \(\mathcal{G}(u)(y)\) if \(y\) is a scalar real random variable uniformly distributed in \([-1,1]\) \[I(\mathcal{G}(u)):=\int_{-1}^{1}u(x_{0},y)\,dy. \tag{49}\] Since the solution \(u\) is not available in closed form, the integral (49) can be approximated by numerical quadrature, e.g. the Gauss-Legendre quadrature, which is a reasonable choice for the case of a single real-valued parameter \(y\). Let \(\{\xi_{i},w_{i}\}_{i=1}^{n}\) be the nodes and weights of the \(n\)-point Gauss-Legendre quadrature \[Q_{n}[\mathcal{G}(u)]:=\sum_{i=1}^{n}w_{i}u(\mathbf{x}_{0},\xi_{i}). \tag{50}\] We are interested in the behaviour of the quadrature error \[\varepsilon_{n}=|I(\mathcal{G}(u))-Q_{n}[\mathcal{G}(u)]| \tag{51}\] with increasing \(n\). It is known that the convergence of \(\varepsilon_{n}\) is strongly related to the regularity of \(u\) with respect to \(y\). In particular [13, Theorem 5.2] implies that \[\varepsilon_{n}\leq C\exp(-rn^{1/\delta}). \tag{52}\] with positive constants \(C\) and \(r\) independent of \(n\) if \(u\) is of class Gevrey-\(\delta\). 1. In the case of the analytic diffusion coefficient \(b^{(1)}\) as in (47) Theorem 4.1 implies that \(u\) is analytic in \(y\), i.e. \(\delta=1\), and therefore we expect \[\varepsilon_{n}^{(1)}\leq C\exp(-rn).\] (53) 2. The diffusion coefficient \(b^{(2)}\) is not analytic near \(y=-1\), but is Gevrey-\(\delta\) uniformly for all \(y\in[-1,1]\) with \(\delta\geq 2\), see [14] and [13, Section 6]. Theorem 4.1 implies that \(u\) is Gevrey-\(\delta\) with the same \(\delta=2\) and hence we expect \[\varepsilon_{n}^{(2)}\leq C\exp(-rn^{1/2}).\] (54) In order to observe the behaviour predicted in (53) and (54) we solve deterministic equations (21) in every quadrature point \(y=\xi_{i}\) on a very fine finite element grid having \(16.129\) degrees of freedom. Since \(I(\mathcal{G}(u))\) is not available in closed form, we approximate it by a very fine Gauss-Legendre quadrature \(Q_{n^{*}}[\mathcal{G}(u)]\) with \(n^{*}=50\) quadrature nodes for \(b^{(1)}\) and \(n^{*}=150\) quadrature nodes for \(b^{(2)}\). As a solver for the nonlinear problem (21) we use the fixed-point iteration method (33) with an absolute error tolerance of \(10^{-14}\) with respect to the \(H_{0}^{1}(D)\)-seminorm. In Figure 1, we plot the relative error \(\varepsilon_{n}^{(1)}\) against the number of quadrature points \(n\) in the semi-logarithmic scale. The reference line clearly shows the linear trend of the type \(-rn+\log C\) and thereby confirms (53). In Figure 2, we plot the relative error \(\varepsilon_{n}^{(2)}\) with respect to the square root of the number of quadrature points \(N:=n^{1/2}\) in the semi-logarithmic scale. Here we can also observe the linear trend of the type \(-rN+\log C\). This confirms (54) and thereby demonstrates the meaning and validity of Theorem 4.1. Figure 1: Quadrature error \(\varepsilon_{n}^{(1)}\) (left) with respect to the number \(n\) of quadrature points and Quadrature error \(\varepsilon_{n}^{(2)}\) (right) with respect to \(N=n^{1/2}\). 
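To make the quadrature study concrete, the following minimal Python sketch evaluates the \(n\)-point Gauss-Legendre approximation (50) of the quantity of interest (49) and a relative version of the error (51) against a fine reference rule, as done in the experiments above. The function `point_value` is a hypothetical placeholder for the finite element solution of (21) evaluated at \(\mathbf{x}_{0}\); it stands in for the nonlinear solver described in the text and is not part of the paper's code.

```python
import numpy as np

def gauss_legendre_qoi(point_value, n):
    """n-point Gauss-Legendre approximation of I(G(u)) = int_{-1}^{1} u(x0, y) dy,
    cf. (49)-(50); point_value(y) must return u(x0, y) for a given parameter y."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    return sum(w * point_value(xi) for xi, w in zip(nodes, weights))

def relative_quadrature_error(point_value, n, n_ref=50):
    """Relative version of the error (51), using a fine n_ref-point rule
    as the reference value for I(G(u)), as described in the text."""
    reference = gauss_legendre_qoi(point_value, n_ref)
    return abs(reference - gauss_legendre_qoi(point_value, n)) / abs(reference)
```

Plotting `relative_quadrature_error` against \(n\) (or against \(n^{1/2}\) for the Gevrey-2 coefficient \(b^{(2)}\)) in a semi-logarithmic scale reproduces the linear trends predicted by (53) and (54).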
### Quasi-Monte Carlo method for Gevrey functions Let \(Y=\left[-\frac{1}{2},\frac{1}{2}\right]\), for any given \(s\in\mathbb{N}\) we denote by \(\mathbf{y}_{s}=(y_{1},\dots,y_{s},0,0,\dots)\) the \(s\)-dimensional truncation of \(\mathbf{y}\in U=Y^{\mathbb{N}}\). For a function \(F:Y^{\mathbb{N}}\mapsto\mathbb{R}\), our quantity of interest is the integral of the form \[I(F)=\int_{Y^{\mathbb{N}}}F(\mathbf{y})\,d\mathbf{y} \tag{55}\] In this section we apply our main regularity result in Theorem 4.1 to analyse the convergence rate of Quasi-Monte Carlo (QMC) method for \(F:=\mathcal{G}(u)\), where \(\mathcal{G}(u)(\mathbf{y}):=u(\mathbf{x}_{0},\mathbf{y})\) is the point evaluation at the center of the computational domain \(D\), i.e. the same linear functional introduced in Section 5.1. The QMC approximation reads \[Q^{\Delta}_{s,n}(F):=\frac{1}{n}\sum_{i=1}^{n}F\left(\,\left\{\frac{iz_{s}}{n} +\Delta\right\}-\frac{1}{2}\right) \tag{56}\] which is a randomly shifted lattice rule with the generating vector \(\mathbf{z}_{s}\in\mathbb{N}^{*}\), \(\Delta\) is a random shift which is uniformly distributed over the cube \((0,1)^{s}\), and \(n\) is the number of quadrature points. The braces in (56) indicate the fractional part of each component of the argument vector. Notice that \(Q^{\Delta}_{s,n}(F)\) is a random variable itself. A popular measure of accuracy is the root mean square error \[\text{RMSE}=\sqrt{\mathbb{E}|I(F)-Q^{\Delta}_{s,n}(F)|^{2}},\] where \(\mathbb{E}\) is the expectation with respect to the random shifts \(\Delta\). Moreover, \(Q^{\Delta}_{s,n}(F)\) approximates only in the first \(s\) components, therefore it is natural to introduce the truncation \[I_{s}(F)=\int_{Y^{s}}F(\mathbf{y}_{s})\,dy_{1}\dots dy_{s} \tag{57}\] and use the triangle inequality to get the error decomposition \[\text{RMSE}\leq|I(F)-I_{s}(F)|+\sqrt{\mathbb{E}\left(\left|I_{s}(F)-Q^{\Delta }_{s,n}(F)\right|^{2}\right)} \tag{58}\] Assume from now on that the assumptions of Theorem 4.1 are valid. The first summand on the right-hand side of (58), the truncation error, can only converge to zero, if \(F\) becomes "less dependent" on \(y_{s}\) as \(s\to\infty\). A sufficient condition that rigorously implies the desired behaviour is \[\|\mathbf{\beta}\|_{\ell^{p}}:=\bigg{(}\sum_{j=1}^{\infty}\beta^{p}_{j}\bigg{)}^{ \frac{1}{p}}<\infty \tag{59}\] for some \(p\in(0,1]\) and \(\beta_{j}:=R_{j}{}^{-1}\), i.e. \(R_{j}\) in Theorem 4.1 grows sufficiently fast with \(j\). Following closely the arguments in (9, Theorem 4.1) and (1, Lemma 7.3), this implies \[|I(F)-I_{s}(F)|\leq C_{1}\,s^{-2\left(\frac{1}{p}-1\right)}, \tag{60}\] where \(C_{1}\) depends on \(\delta\geq 1\), but is independent of \(s\). The estimate for the second summand in the right-hand side of (58), the quadrature error, can be analysed for \(\delta\geq 1\) following the arguments of (1, Lemma 7.4), (9, Theorem 4.2) and (15, Theorem 6.4). 
As a corollary of these results, for a fixed integer \(s\) and \(n\) being a power of \(2\), a QMC quadrature rule \(Q^{\Delta}_{s,n}\) can be explicitly constructed such that \[\sqrt{\mathbb{E}\left(\left|I_{s}(F)-Q^{\Delta}_{s,n}(F)\right|^{2}\right)}\leq C_{2}\,n^{-\frac{1}{2\vartheta}}, \tag{61}\] where \(C_{2}\) is independent of \(n\) and \[\vartheta=\begin{cases}\omega&\text{for some $\omega\in(\frac{1}{2},1)$}\quad\text{when $p\in(0,\frac{2}{3\delta}]$,}\\ \frac{\delta p}{2-\delta p}&\text{when $p\in(\frac{2}{3\delta},\frac{1}{\delta}]$.}\end{cases}\] This result requires the assumptions of Theorem 4.1 and (59) in the reduced range \(p\in(0,\delta^{-1})\). The result is still valid for \(p=\delta^{-1}\) if (59) is replaced with \(\|\mathbf{\beta}\|_{\ell^{p}}<\sqrt{6}\). In this case the convergence rate deteriorates to the rate of the plain Monte Carlo estimator, that is, \[\sqrt{\mathbb{E}\left(\left|I_{s}(F)-Q_{s,n}^{\Delta}(F)\right|^{2}\right)}\leq C_{3}n^{-\frac{1}{2}}. \tag{62}\] Here, the plain Monte Carlo estimator is the sample average \[Q_{s,n}^{MC}(F):=\frac{1}{n}\sum_{i=1}^{n}F(\mathbf{y}_{s}^{(i)}) \tag{63}\] with independent samples \(\mathbf{y}_{s}^{(i)}\) drawn from the uniform distribution in \(Y^{s}\). The constant \(C_{3}\) is determined by the variance of \(F(\mathbf{y}_{s})\) and is thereby independent of \(n\). Observe that the Gevrey-\(\delta\) non-analytic regularity (i.e., \(\delta>1\)) has a more significant effect on the QMC error (61) than on the truncation error (60). For this reason, in the forthcoming example we concentrate specifically on this contribution. Let \(D=(0,1)^{2}\) and consider the problem (1) with \(a\equiv 1\), \(f\equiv 1\), \(m=5\), and \(b\) one of the following functions \[b^{(1)}(\mathbf{x},\mathbf{y}) =2+2\exp\left(-\zeta(5)+\sum_{j=1}^{100}j^{-5}\sin(j\pi x_{1})\sin(j\pi x_{2})\,y_{j}\right), \tag{64}\] \[b^{(2)}(\mathbf{x},\mathbf{y}) =3+\frac{1}{\zeta(5)}\sum_{j=1}^{100}j^{-5}\sin(j\pi x_{1})\sin(j\pi x_{2})\exp\left(-\frac{1}{y_{j}+\frac{1}{2}}\right). \tag{65}\] Here \(y_{j}\) are scalar real random variables uniformly distributed in \([-\frac{1}{2},\frac{1}{2}]\) for all \(j\in\mathbb{N}\). Clearly, \(\mathbf{\beta}^{(k)}\in\ell^{p}\) for any \(p>\frac{1}{3}\) in both test cases \(k=1\) and \(k=2\). Moreover, \(b^{(1)}\) is analytic in \(\mathbf{y}\) (\(\delta^{(1)}=1\)), whereas \(b^{(2)}\) is Gevrey-\(\delta\) with \(\delta^{(2)}=2\). From Theorem 4.1 we know that this regularity carries over to the solutions \(u^{(1)}\) and \(u^{(2)}\) with the same \(\delta\). The point values \(u(\mathbf{x}_{0},\cdot)\) are computed on a very fine uniform finite element mesh having \(16.129\) degrees of freedom, so that the effect of the finite element discretization is negligible. As in Section 5.1, we use the fixed-point iteration (33) with an absolute error tolerance of \(10^{-14}\) with respect to the \(H_{0}^{1}(D)\)-seminorm. The outer expectation is approximated by the empirical mean of \(R=8\) runs, i.e., for \(\Delta^{(j)}\), \(1\leq j\leq R\), being independent samples from the uniform distribution on the unit cube \((0,1)^{s}\) and \(Q_{s,n}^{(j)}(F^{(k)})\) the corresponding QMC quadrature, we approximate the relative QMC error by \[\varepsilon_{n}^{\text{QMC},(k)}=\sqrt{\frac{1}{R}\sum_{j=1}^{R}\left|\frac{I_{s}^{*}(F^{(k)})-Q_{s,n}^{(j)}(F^{(k)})}{I_{s}^{*}(F^{(k)})}\right|^{2}}, \tag{66}\] and analogously for the plain Monte Carlo approximation \(\varepsilon_{n}^{\text{MC},(k)}\).
In the above notation, \(k=1\) corresponds to the test case with \(b=b^{(1)}\) and \(k=2\) corresponds to the test case with \(b=b^{(2)}\). In both cases the reference value \(I_{s}^{*}(F^{(k)})\) is the highest level of the QMC approximation. Since \(p\in(0,\frac{2}{3\delta^{(2)}})\subset(0,\frac{2}{3\delta^{(1)}})\), we expect from the above theory that \(\varepsilon_{n}^{\text{QMC},(k)}\) is approximately proportional to \(n^{-1}\) and \(\varepsilon_{n}^{\text{MC},(k)}\) to \(n^{-\frac{1}{2}}\). In Figure 3 we clearly observe that this convergence behaviour is reproduced.
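For completeness, the following Python sketch shows one way to realize the randomly shifted lattice rule (56) and the empirical relative error (66) used in this section; it is an illustration under assumptions, not the code used for the experiments. The generating vector `z`, the truncation dimension \(s\), and the integrand `F` (here, the point evaluation \(u(\mathbf{x}_{0},\cdot)\) composed with the parametric solver) are assumed to be given.

```python
import numpy as np

def shifted_lattice_rule(F, z, n, shift):
    """Randomly shifted lattice rule Q^{Delta}_{s,n}(F), cf. (56).
    z: generating vector of length s; shift: a point in (0,1)^s."""
    i = np.arange(1, n + 1).reshape(-1, 1)
    points = np.mod(i * z / n + shift, 1.0) - 0.5  # quadrature points in [-1/2, 1/2]^s
    return np.mean([F(y) for y in points])

def relative_qmc_error(F, z, n, reference, runs=8, rng=None):
    """Empirical relative root mean square error over `runs` random shifts,
    cf. (66); `reference` plays the role of I_s^*(F)."""
    rng = np.random.default_rng() if rng is None else rng
    errors = [(reference - shifted_lattice_rule(F, z, n, rng.random(len(z)))) / reference
              for _ in range(runs)]
    return float(np.sqrt(np.mean(np.square(errors))))
```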
2309.14846
Supersonic: Learning to Generate Source Code Optimizations in C/C++
Software optimization refines programs for resource efficiency while preserving functionality. Traditionally, it is a process done by developers and compilers. This paper introduces a third option, automated optimization at the source code level. We present Supersonic, a neural approach targeting minor source code modifications for optimization. Using a seq2seq model, Supersonic is trained on C/C++ program pairs ($x_{t}$, $x_{t+1}$), where $x_{t+1}$ is an optimized version of $x_{t}$, and outputs a diff. Supersonic's performance is benchmarked against OpenAI's GPT-3.5-Turbo and GPT-4 on competitive programming tasks. The experiments show that Supersonic not only outperforms both models on the code optimization task but also minimizes the extent of the change with a model more than 600x smaller than GPT-3.5-Turbo and 3700x smaller than GPT-4.
Zimin Chen, Sen Fang, Martin Monperrus
2023-09-26T11:21:46Z
http://arxiv.org/abs/2309.14846v3
# Supersonic: Learning to Generate ###### Abstract Software optimization refines programs for resource efficiency while preserving functionality. Traditionally, it is a process done by developers and compilers. This paper introduces a third option, automated optimization at the source code level. We present Supersonic, a neural approach targeting minor source code modifications for optimization. Using a seq2seq model, Supersonic is trained on C/C++ program pairs (\(x_{t},x_{t+1}\)), where \(x_{t+1}\) is an optimized version of \(x_{t}\), and outputs a diff. Supersonic's performance is benchmarked against OpenAI's GPT-3.5-Turbo and GPT-4 on competitive programming tasks. The experiments show that Supersonic not only outperforms both models on the code optimization task but also minimizes the extent of the change with a model more than 600x smaller than GPT-3.5-Turbo and 3700x smaller than GPT-4. Code Optimization, Seq2Seq Learning, Large Language Model. ## 1 Introduction Software optimization refers to the process of refining a program so that it utilizes fewer resources, such as time, memory, CPU, and energy while preserving its original functionality. Traditionally, this task has been carried out by the developer and/or the compiler. The developer possesses deep human expertise to enhance the program by using a more efficient data structure or an algorithm with better complexity. On the other hand, the compiler can apply a range of automated optimizations on the intermediate representation that can significantly enhance the program's performance. Human developers optimize at the source code level, and compilers at the machine code level. In this work, we explore a third way: automated optimization at the source code level. For developers, automatic source code optimization is invaluable. Firstly, it encompasses optimizations that are beyond the scope of compiler optimizations, high-level optimizations that a compiler cannot achieve with guarantees. For instance, refactoring an inefficient algorithm or modifying data structures to boost performance is beyond the compiler's scope. Secondly, it facilitates the optimization of legacy systems for which language and domain expertise have been lost over time, such as Fortran libraries or Cobol systems. Process-wise, automatic source code optimization can be used in modern code bases, with automated pull requests [1]. In this paper, we introduce Supersonic, an innovative end-to-end system that employs a supervised machine-learning approach to the problem of automatic source code optimization. Supersonic, implemented as a seq2seq model, learns the relationship between input programs and their optimized versions. Our process involves collecting a dataset of past source code optimizations, where each pair (\(x_{t}\), \(x_{t+1}\)) consists of a base program \(x_{t}\) and its optimized counterpart \(x_{t+1}\). Special attention has been given to creating such a high-quality training dataset. With this dataset, we then tailor a model and devise a data and training loop specifically for the optimization task. At inference time, Supersonic has three phases: canonicalization, diff-synthesis, and post-processing. During the canonicalization phase, Supersonic removes source code comments and ensures a canonical code format (w.r.t. whitespaces, tabs, etc.) that has also been enforced at training time. The diff-synthesis phase takes the input program and predicts a diff-based output representation, which is similar to the typical Unix patch format. 
The post-processing phase is responsible for validating the predicted diff-based output representation and applying it to the original program. Supersonic is implemented for C/C++ and trained on a large corpus of C/C++ program optimizations. For the training and evaluation of Supersonic, we use programs from three code competition websites. This is a perfect evaluation benchmark since problems on code competition websites are usually designed to have a large solution space, and they also have a strong emphasis on optimization, running time, and memory consumption. They are excellent for collecting training data as well as for assessing optimized programs. For evaluation, we compare against a strong baseline. Supersonic competes with OpenAI's GPT-3.5-Turbo and GPT-4, two industrial large language models that have proven to be state-of-the-art in many software engineering tasks [2, 3, 4, 5]. Our evaluation of 559 programs shows that Supersonic outperforms both GPT-3.5-Turbo and GPT-4. Not only is the performance better on the task of optimizing C/C++ programs, but Supersonic's model is approximately 600x smaller1 than GPT-3.5-Turbo and 3700x smaller2 than GPT-4, making it more cost and energy-efficient. Footnote 1: We assume it is the same size as GPT-3, which is a 175B parameter model. Supersonic is unique compared to the most related work, such as PIE4Perf [6] and DeepDev-PERF [7]. First, we take special care to remove pairs that are essentially full re-implementations rather than genuine optimizations from the dataset. Second, Supersonic uses a novel diff-based output representation as opposed to generating the entire program, a feature that, as demonstrated in our evaluation, significantly boosts its effectiveness. Last, the evaluation of Supersonic is done on third-party competition websites, giving our results strong external validity. In summary, our contributions are:

* Supersonic, a novel source code optimization technique based on state-of-the-art sequence-to-sequence learning. Supersonic is able to generate source code level optimizations while retaining significant similarity with the original program.
* We show that Supersonic outperforms GPT-3.5-Turbo and GPT-4 when submitting tentative optimizations to the official Codeforces website. It improves the running time of 26.0% of the programs, compared to only 12.0% for GPT-3.5-Turbo and 4.0% for GPT-4.
* We investigate the optimization performance of Supersonic, GPT-3.5-Turbo, and GPT-4 at various string similarity thresholds between the optimized and original program. We find that Supersonic is better than GPT-3.5-Turbo and GPT-4 when the threshold is above 0.4. However, a threshold of 0.4 is already close to a complete rewrite.
* We demonstrate that the diff-based output representation is better than outputting the full program. Our ablation study shows that changing the full program output representation of Supersonic to a diff-based output representation boosts the optimization success rate by at least 2x.
* For the sake of open science and future research on this topic, we share all our code, data, and trained models at [https://github.com/ASSET-KTH/Supersonic](https://github.com/ASSET-KTH/Supersonic).

## 2 Background

### _Large language models_

A Large Language Model (LLM) is a type of deep learning (DL) model commonly used in the field of natural language processing (NLP).
Unlike traditional DL models designed for specific tasks, LLMs are initially trained on large textual datasets to acquire a universal language representation. This learned representation can be further refined and adapted for various downstream tasks through supervised fine-tuning [8]. The majority of current LLMs are built upon the core modules of the Transformer model [9], and they can be categorized into three main types based on the chosen modules: encoder-only [8], decoder-only [10], and encoder-decoder [11]. The design of an encoder-only LLM involves stacking multiple Transformer encoder layers, with BERT [8] emerging as a renowned representative of this LLM class. BERT specifically undergoes training with two training objectives: masked language modeling and next sentence prediction. The former lets BERT predict masked words according to their surrounding context, and the latter lets BERT measure whether two sentences are consecutive, both of which allow it to learn a universal contextual representation of words. In contrast, decoder-only LLMs are structured around the Transformer decoder as their fundamental building block. The GPT family [10, 12], as one of the most well-known decoder-only LLMs, is pre-trained with an objective called next token prediction. Here, the model is tasked with predicting the next word based on the preceding context. The encoder-decoder LLM, as the name suggests, incorporates both Transformer encoder and decoder layers. T5 [13], a prototypical example of this LLM category, is pre-trained with an objective analogous to a fill-in-the-blank task, which compels the model to predict missing words in relation to their specific context. Due to the inherent differences in LLMs' core architectures and training objectives, encoder-only and decoder-only LLMs respectively excel in language understanding and generation tasks. On the other hand, encoder-decoder LLMs, while capable of delivering high performance across both types of tasks, necessitate more parameters for their construction. LLMs are increasingly utilized in the field of software engineering (SE) to expedite automation processes. An example of this is CodeBERT [14], an LLM designed specifically for programming languages. This model was initially trained on a broad, cross-programming-language (PL) corpus. Subsequently, supervised fine-tuning allows its application to various SE tasks, such as code searching [15] and summarization [16]. To augment the efficacy of LLMs in handling code, Wang _et al._ introduced CodeT5 [17], which was pre-trained using an innovative method known as identifier-aware denoising training. Additional training objectives include identifier tagging, masked identifier prediction, and bimodal dual generation. All of these objectives facilitated the model's ability to effectively comprehend both programming and natural language. In recent developments, next-generation LLMs like LLaMa [18], BLOOM [19], and PaLM [20] are gaining momentum. These models are typically trained on both extensive PL and natural language corpora. As a result, they offer robust support for tasks within both NLP and SE domains.

### _The State-of-the-art Models at OpenAI_

Recently, OpenAI released GPT-3.5-Turbo [21] and GPT-4 [22], two groundbreaking NLP systems specializing in generating text that closely mirrors human language. These state-of-the-art systems owe their performance to the Transformer decoder-based architecture, known as GPT [10].
GPT-3.5-Turbo's training involves a two-stage process: unsupervised pre-training and subsequent instruction fine-tuning. In the pre-training phase, the model learns from an extensive corpus of Internet data dated until September 2021. It is important to underscore that the system does not possess specific knowledge about the documents that comprise its training data and cannot directly access any database or document. The second stage, instruction fine-tuning, involves training the model on a smaller dataset with human feedback, utilizing a technology called reinforcement learning from human feedback [23]. With suitable instructions, also called prompts, GPT-3.5-Turbo can serve a wide array of applications, including but not limited to generating code [3], debugging [24], acting as a tutoring assistant[2], translating languages [25], and even simulating characters for video games [26]. Following the success of GPT-3.5-Turbo, OpenAI introduced GPT-4, an even more advanced model with an increased number of parameters and refined training techniques. The GPT-4 is better than GPT-3.5-Turbo on understanding and generating text, improving its predecessor in terms of context accuracy, diversity of output, and adaptability across diverse applications [22]. In this work, we explore the capability of GPT-3.5-Turbo and GPT-4 in code optimization. ### _Sequence-to-Sequence learning_ Sequence-to-Sequence (Seq2Seq) learning [27], a concept introduced by the NLP community, is a machine learning paradigm that models conditional probability distributions for sequence outputs based on sequence inputs. This is usually implemented through an end-to-end neural network structure: an encoder and a decoder. Specifically, the encoder maps the input sequence into a fixed-length vector. To illustrate, taking code optimization as an example, the encoder could process a section of code requiring optimization, such as a C++ program, and convert it into a compact vector representation. Leveraging the powerful learning capabilities of neural networks, this vector can effectively capture the semantic information of the input sequence. Subsequently, the decoder utilizes this vector to produce the output sequence. Continuing with our example, the decoder could yield a C++ program optimized for some objectives, e.g., execution speed or memory usage. Training Seq2Seq models are typically conducted end-to-end, tuning the parameters to maximize the likelihood of the correct output sequence given the input sequence. Thus, Seq2Seq learning is a robust framework for tackling generation tasks involving intricate, variable-length sequence transformations. ## 3 Technical Solution: Supersonic ### _Overview_ Supersonic leverages seq2seq learning initialized with pre-trained models to generate optimized C/C++ programs. Importantly, the optimized programs are syntactically similar to the original program, meaning that the optimization only changes a few lines of the program. It is done by training Supersonic on program pairs, (\(x_{t}\), \(x_{t+1}\)), where \(x_{t}\) and \(x_{t+1}\) are functionally equivalent, with a small edit distance. Supersonic consists of three phases: 1) Canonicalization, 2) Diff synthesis, 3) Post-processing. Figure 1 shows the training and inference pipeline of Supersonic for code optimization from canonicalization to post-processing. The canonicalization phase canonicalizes the program coding style (whitespaces, tabs, and newlines) and removes comments before it is used as the input to the diff synthesis phase. 
The diff synthesis phase generates multiple optimization candidates in a diff-based output representation. The post-processing phase checks the well-formedness of the diff-based output representation and then applies the output as a patch on the original program. For the training stage of Supersonic, the file is first canonicalized, then it is used as input to the seq2seq model. The seq2seq model generates a diff-based representation in the diff synthesis phase, and it is compared with the ground truth to update the parameters of the seq2seq model in an iterative loop. For the inference stage of Supersonic, the input still goes through the canonicalization and diff synthesis phases, but since there is no ground truth, the output is applied to the original program in the post-processing phase. In the following sections, we illustrate each phase of Supersonic in detail and describe our training dataset.

Fig. 1: The pipeline of Supersonic for code optimization.

### _Canonicalize training source code_

The goal of the canonicalization phase is to canonicalize program pairs (\(x_{t}\), \(x_{t+1}\)) before they are used as input to the machine learning model. In this phase, we first remove source code comments and apply a unique, consistent code style. We remove source code comments because they are subjective in nature, varying greatly in style, quality, and content. Meanwhile, although they do not affect the logical flow of the code, including comments makes the input program longer, which negatively impacts the inference performance of the language model [28]. Input programs also vary in coding style, for example, in the number of whitespaces and newlines and in how brackets are placed. These are all tokenized and used as input to the machine-learning model. Therefore, we use a single coding style to let the machine learning model focus on the functionality of the program. We use _GCC_ (using the command _"gcc -fpreprocessed -dD -E -P"_) to remove source code comments and _clang-format_ (using the command _"clang-format -style=llvm"_) to format all C/C++ programs according to the LLVM coding style.

### _Output representation & Diff synthesis_

A key aspect of Supersonic is that it generates a diff-based output representation instead of generating the whole optimized program. The diff-based output representation is the Unix unified diff format between \(x_{t}\) and \(x_{t+1}\) with one line of context, i.e. one line before and after the change. Compared to the original diff format, the diff header that specifies the filenames and changed line numbers is removed. Multiple change hunks are concatenated with newlines to represent a change that modifies multiple locations. In contrast, most related works [6, 7] employ a full program output representation. We use the diff-based output representation because generating a longer output sequence increases the probability of errors due to accumulated mistakes. It is widely known that the performance of seq2seq models decreases with the output length [29]. Figure 2 shows an example of the diff-based output representation generated by Supersonic. This example is a user submission to the AtCoder - Green Bin problem 3. The task is to find the number of anagrams in a list of strings. The original solution, Listing 1, uses a sorted mapping _map_ to store pairs of strings and integers. The improved solution, Listing 2, by the same user, uses _unordered_map_, which, unlike _map_, does not sort the mapping.
This change improved the running time from 144 ms to 97 ms, and memory consumption from 11136 B to 10660 B. Supersonic's diff-based output representation of Listing 2 is shown in Listing 3. The diff-based output representation contains 71 tokens instead of 124 tokens when tokenizing the full improved program. As we will see in the evaluation, the shorter token length helps the model to capture knowledge with fewer tokens. Footnote 3: [https://atcoder.jp/contests/abcl37/tasks/abcl37_c](https://atcoder.jp/contests/abcl37/tasks/abcl37_c) ### _Post-processing synthesized code_ During the inference stage of Supersonic, the diff-based output representation from the diff synthesis phase needs to be patched on the original program to get the full source code of the predicted optimized program. We greedily match each predicted changed hunk to the original program using the one-line context, and then apply the changed hunk. If we fail to match the context lines in the original program, it means that the diff synthesis phase has generated a malformed diff-based output representation and we simply discard it. ### _Training loop_ The core machine learning model of Supersonic that generates and predicts the diff-based output representation is implemented as a transformer-style seq2seq model. The encoder and decoder are initialized with CodeBERT which is pre-trained on C++ code, CodeBERT-CPP [30]. CodeBERT is a pre-trained LLM on source code and natural language [14]. The training objective of CodeBERT is masked language modeling, where the model predicts the token masked by a special mask token, and replaced token detection, where the model predicts if a token is the original one or not. CodeBERT trained on these two objectives achieves a robust understanding of code semantics and syntax. By initializing our model with weights from CodeBERT, we leverage this rich foundation of code semantics and structure, which accelerates convergence during training and can lead to better overall performance [31]. Our seq2seq model is trained to generate \(x_{t+1}\) by using \(x_{t}\) as the input of the program pair \((x_{t},x_{t+1})\) of the training set. We reuse the tokenizer CodeBERT-CPP, which is based on the subword tokenization from WordPiece with \(50\,265\) as the vocabulary size [32]. The total amount of parameters for the seq2seq model is 278M. ### _Training Dataset_ Training Supersonic to optimize C/C++ programs requires a dataset of source code optimization program pairs (\(x_{t}\), \(x_{t+1}\)). In our work, we collect source code optimization program pairs from code competition websites. Code competition websites usually record the execution time and memory usage of all submitted solutions to each problem. It makes them the ideal source to crawl high-quality and accurate source code optimization program pairs. In our work, we use data from Codeforces 4, as well as AIZU 5 and AtCoder 6 originally from the CodeNet dataset [33] to build our goal dataset. For the Codeforces dataset, we first use the Codeforces API to get all the past contests and submissions to each contest. However, the Codeforces API does not return the source code, therefore we use another source 7 to find the source code for each submission. For submissions to AIZU and AtCoder, the CodeNet dataset already provides all the necessary information that we need to process the data. All the submissions that we collect are accepted by each competition website, therefore their correctness is guaranteed. 
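To illustrate the post-processing step of Section 3.4 above, the sketch below shows one possible way of greedily matching a predicted change hunk against the original program using its one-line context and then applying it. This is a simplified illustration under assumptions, not Supersonic's actual implementation; the hunk is assumed to use unified-diff line prefixes (' ', '-', '+') without headers, as described in Section 3.3.

```python
def apply_hunk(original_lines, hunk_lines):
    """Greedily locate a predicted hunk in the original program and apply it.
    Returns the patched list of lines, or None if the hunk cannot be matched
    (in which case the prediction is discarded, as described in Section 3.4)."""
    # Lines the hunk expects to find in the original: context (' ') and removed ('-').
    expected = [line[1:] for line in hunk_lines if line[:1] in (" ", "-")]
    # Lines that replace them after patching: context (' ') and added ('+').
    replacement = [line[1:] for line in hunk_lines if line[:1] in (" ", "+")]
    if not expected:
        return None
    for i in range(len(original_lines) - len(expected) + 1):
        if original_lines[i:i + len(expected)] == expected:
            return original_lines[:i] + replacement + original_lines[i + len(expected):]
    return None  # context not found: malformed or unmatched hunk
```

A complete prediction consists of several such hunks concatenated with newlines; applying them one after another and discarding the output as soon as a context match fails mirrors the behaviour described above.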
The attributes that we collect from each competition website are shown in Table II. Footnote 4: [https://codeforces.com](https://codeforces.com) Footnote 5: [https://judge.u-airzu.ac.jp](https://judge.u-airzu.ac.jp) Footnote 6: [https://atcoder.jp](https://atcoder.jp) Footnote 7: [https://codeforces.com/blog/entry/94755](https://codeforces.com/blog/entry/94755) To ensure the relevance of our dataset, we specifically focused on program pairs (\(x_{t}\), \(x_{t+1}\)) from the same author, where \(x_{t+1}\) strictly improves either on the running time or memory usage over \(x_{t}\). By requiring the submissions to be in chronological order, we aimed to capture the author's progress and improvements over time. This approach allowed us to observe the evolution of optimizations employed by an author and analyze the specific code changes that led to enhanced performance. Collecting these pairs of programs directly from the same author, in chronological order, minimizes confounding factors such as coding style, enabling us to attribute improved performance to code changes. The primary objective of Supersonic is to generate optimized programs while minimizing the extent of changes from the original code. Intuitively, we want to train the system to generate optimizations that modify a few lines of code only. To do this, we filter all pairs of submissions based on the following criteria: 1. At most 20 lines are changed between \(x_{t}\) and \(x_{t+1}\). 2. At most 20% of lines are changed between \(x_{t}\) and \(x_{t+1}\). 3. At least a string similarity score of 0.8 between \(x_{t}\) and \(x_{t+1}\). The string similarity metric provides a value between 0.0 and 1.0, with 0.0 indicating two strings are entirely dissimilar and 1.0 indicating a perfect match between the strings. The string similarity metric we use is the _SequenceMatcher_ from the Python _difflib_ library. It uses a variant of the Gestalt pattern matching string-matching algorithm [34]. In total, we collected 20M Codeforces submissions and 6M submissions from the CodeNet dataset. Then, we extracted 746K and 138K submission pairs from Codeforces and CodenNet which either improves on the running time or memory consumption. After filtering the submission pairs, we split the remaining submission pairs into train, validation, and test sets, consisting of \(312\,876\), \(1000\), and \(559\) samples, respectively. The pre-training dataset used to pre-train CodeBERT is the C++ source code of the GitHub \begin{table} \begin{tabular}{l l l l l l l} \hline Split & Size & \(x_{t}\) LOC & \(x_{t+1}\) LOC & Supersonic output lines & Codeforces solutions & AtCoder solutions & AIZU solutions \\ \hline Train & 312876 & \((35,91)\) & \((35,90)\) & \((5,16)\) & 276714 & 28665 & 7497 \\ Validation & 1000 & \((36,88)\) & \((36,88)\) & \((5,15)\) & 877 & 96 & 27 \\ Test & 559 & \((33,86)\) & \((33,86)\) & \((5,17)\) & 300 & - & 259 \\ \hline Total & 314435 & \((35,91)\) & \((35,90)\) & \((5,16)\) & 277891 & 28761 & 7783 \\ \hline \end{tabular} \end{table} TABLE I: Our dataset statistics. The values in parentheses of the \(x_{t}\) LOC, \(x_{t+1}\) LOC and Supersonic output lines columns are the lower and upper quartile. 
\begin{table} \begin{tabular}{l l} \hline Attribute & Description \\ \hline origin & The origin of the submission: AIZU, AtCoder, or Codeforces \\ author & The author of the submission \\ contest\_id & ID of the contest/problem \\ submission\_id & ID of the submission \\ creation\_time & When the submission is created \\ problem & The problem name \\ programming\_language & The programming language: C or C++ \\ cpu\_time & The execution time \\ memory & The memory consumption \\ source\_code & The submission source code \\ \hline \end{tabular} \end{table} TABLE II: Collected attributes for each submission to AIZU, AtCoder, and Codeforces.

Fig. 2: The comparison between the full program and the diff-based output representation generated by Supersonic. When tokenized, the fully optimized program contains 124 tokens whereas the diff-based output representation contains 71 tokens. The difference will be even bigger if the original solution is longer. A longer output increases the probability of generating an erroneous program because of accumulated mistakes.

Code-Clean dataset under the CodeParrot project on Huggingface 8. The full dataset statistics are shown in Table I. Footnote 8: [https://huggingface.co/datasets/codeparrot/github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean)

## 4 Experimental Setup

In this section, we describe our experimental setup to evaluate Supersonic against the state-of-the-art related work.

### _Research Questions_

**RQ1 - Effectiveness**: How does Supersonic compare against GPT-3.5-Turbo and GPT-4 on the task of code optimization while minimizing the extent of changes?

**RQ2 - Optimization Focus**: How does Supersonic compare against GPT-3.5-Turbo and GPT-4 depending on the extent of acceptable changes?

**RQ3 - Ablation Study**: To what extent is the diff-based output representation better than the full program output representation?

These research questions are framed to address different aspects of the problem domain. RQ1 compares Supersonic against GPT-3.5-Turbo and GPT-4 on running time and memory consumption optimizations, while minimizing the extent of changes, by submitting the predicted programs to code competition websites. RQ2, in contrast to RQ1, relaxes the constraint of minimizing the extent of changes. Lastly, RQ3 explores the impact of using our diff-based output representation compared to using the full program, to validate our core design.

### _Test dataset_

Our test dataset, described in subsection 3.6, consists of user submissions to code competition problems. The problems are of different difficulties and are designed with possible optimizations in mind. These problems are known to be hard for language models. For example, GPT-4 is ranked in the bottom 5% when attempting Codeforces problems [22]. The code competition websites are also used to compute and report the running time and memory usage. Our full test dataset consists of 559 user submissions (see Table I), of which 300 are Codeforces user submissions and 259 are AIZU user submissions. The 559 user submissions, along with their predictions and the collected running time and memory consumption, are used to answer all research questions. Our test set does not overlap with any examples in the pre-training or our own training dataset. This is verified by checking all pairs of examples in the test set and the pre-training dataset for exact matches.
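As a concrete illustration of the dataset construction, the following sketch combines the pair selection criteria of Section 3.6 with the exact-match overlap check mentioned above. It is a minimal sketch under assumptions: programs are handled as plain strings after canonicalization, and the way changed lines are counted (via difflib's line diff) is our own choice for illustration; only the use of `difflib.SequenceMatcher` for string similarity is stated in the paper.

```python
import difflib

def is_small_optimization(x_t, x_t1, max_changed=20, max_fraction=0.20, min_sim=0.8):
    """Keep a pair (x_t, x_t+1) only if the edit is small: at most max_changed
    changed lines, at most max_fraction of the lines changed, and a string
    similarity of at least min_sim (Gestalt pattern matching, as in difflib)."""
    a, b = x_t.splitlines(), x_t1.splitlines()
    changed = sum(1 for line in difflib.ndiff(a, b) if line.startswith(("+ ", "- ")))
    if changed > max_changed or changed > max_fraction * max(len(a), 1):
        return False
    return difflib.SequenceMatcher(None, x_t, x_t1).ratio() >= min_sim

def overlaps(test_programs, training_programs):
    """Exact-match overlap check between the test set and a training corpus."""
    training = set(training_programs)
    return [p for p in test_programs if p in training]
```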
### _Methodology for RQ1_

The goal of RQ1 is to study whether Supersonic, OpenAI's GPT-3.5-Turbo, and GPT-4 can optimize programs. Recall that code optimization typically involves making edits to only a portion of the original program, without completely changing it: Alali, Kagdi, and Maletic found that the majority of changes are classified as small, changing 6-46 lines [35]. In that respect, the experiment takes care of discarding re-implementations of the original program. Smaller edits are also less likely to contain bugs [36] and more likely to be accepted as a pull request [37]. Concretely, Supersonic, GPT-3.5-Turbo, and GPT-4 are asked to generate 10 optimization predictions per original program in our test set described in subsection 3.6. Then, we filter out those whose string similarity value compared to the original program is lower than 0.8, the value we used in subsection 3.6, in order to discard re-implementations. Next, the metrics that we use to compare Supersonic, GPT-3.5-Turbo, and GPT-4 on running time and memory usage are the following (from [6]):

* %OPT: The percentage of optimized programs in the test set.
* PI: The average improvement in running time or memory usage of the best-optimized program among the predictions. If _old_ and _new_ are the running time or memory usage of the original and optimized program, then \(\text{PI}=\frac{\text{old}}{\text{new}}\). We report the average PI over the test set.

We compute these metrics for strictly better memory improvements and for strictly better running time improvements. We realize that code competition measurements are not perfectly precise and tend to have some variation. To account for this noise, we only consider a program to be truly more optimized if the performance improvement is at least 20%, i.e. PI is at least \(1.2\). We utilize OpenAI's official APIs to call GPT-3.5-Turbo and GPT-4 for generating predictions. As per OpenAI's official API documentation9, the APIs are updated to incorporate the latest model iteration approximately two weeks after its release10. To be clear about the API versions employed in our experiments, the GPT-3.5-Turbo predictions spanned from April 17 to July 5, 2023, whereas the GPT-4 predictions were conducted between August 21 and August 25, 2023. The GPT-3.5-Turbo and GPT-4 prompt we use to generate the predictions is shown in Listing 4. We use a carefully designed prompt per the best practices11. The prompt instructs the model to act as an experienced C/C++ developer and optimize the given program with respect to running time and memory usage. Footnote 9: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5) Footnote 10: [https://help.openai.com/en/articles/6825453-chatgpt-release-notes](https://help.openai.com/en/articles/6825453-chatgpt-release-notes) Footnote 11: [https://github.com/f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)

### _Methodology for RQ2_

In RQ2, the goal is to study to what extent Supersonic, GPT-3.5-Turbo, and GPT-4 optimize by re-implementing. To do that, we compare Supersonic, GPT-3.5-Turbo, and GPT-4 at various string similarity thresholds using the same protocol as RQ1. Also, we study the string similarity distribution of all predictions to see how much the programs are changed by Supersonic, GPT-3.5-Turbo, and GPT-4. Then, we plot the %OPT value for different string similarity thresholds between 0 and 1.
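Both RQ1 and RQ2 rely on the %OPT and PI metrics defined above. The sketch below shows one way to compute them for a single resource (running time or memory). It assumes, for each test program, the original measurement and the measurements of the predictions that survive the similarity filter; averaging PI over the optimized programs only is our own assumption, since the paper does not spell out this detail.

```python
def opt_and_pi(results, noise_factor=1.2):
    """results: list of (old, [new_1, ..., new_k]) tuples, one per test program,
    where old/new are the running times or memory usages of the original program
    and of its surviving predictions. A program counts as optimized only if the
    best prediction improves on the original by at least noise_factor (PI >= 1.2)."""
    optimized, improvements = 0, []
    for old, news in results:
        if not news:
            continue
        pi = old / min(news)  # PI = old / new for the best (lowest) measurement
        if pi >= noise_factor:
            optimized += 1
            improvements.append(pi)
    pct_opt = 100.0 * optimized / len(results)
    avg_pi = sum(improvements) / len(improvements) if improvements else 0.0
    return pct_opt, avg_pi
```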
### _Methodology for RQ3_

The goal of RQ3 is to evaluate whether the diff-based output representation is a better code representation than the full program representation. We use the same model architecture and the same dataset split described in subsection 3.6 and used in RQ1 and RQ2. The only difference is that one model learns to generate the full program, while the other learns to generate the diff-based output representation described in subsection 3.3. Similar to RQ1, described in subsection 4.3, 10 predictions are generated per sample in the test set, and predictions with a string similarity value below 0.8 are filtered out. The rest are submitted to the corresponding code competition website. We compute %OPT and PI on running time and memory consumption.

## 5 Results

### _Answer to RQ1 (Effectiveness)_

Table III shows the comparison between Supersonic, GPT-3.5-Turbo, and GPT-4 on generating source code optimizations for Codeforces and AIZU submissions. For the Codeforces submissions, Supersonic generates optimizations that improve 26.0% (78) and 8.0% (24) of the programs on running time and memory consumption, respectively. This shows the advantage and efficacy of Supersonic over both GPT-3.5-Turbo and GPT-4, illustrating its capability for optimizing a range of problems, even when compared against the most advanced generative models currently available. This is despite the fact that Supersonic is 600x and 3700x smaller than GPT-3.5-Turbo and GPT-4, respectively. For AIZU submissions, both Supersonic and GPT-3.5-Turbo perform at a similar level for running time and memory optimization, while GPT-4 still performs worse. The reason why Supersonic has a lower performance for AIZU submissions than for Codeforces submissions might be that our training data contains more than 80% program pairs from Codeforces, hence the model has learned that type of problem better. The problem sets on Codeforces and AIZU are indeed different: Codeforces is competitive in nature, with regular contests that rank participants by rating, while AIZU is primarily an educational platform.
Among the 1686 accepted programs, 356 programs either improved the running time or memory consumption. **RQ1:**Supersonic, a model 600x and 3700x smaller than GPT-3.5-Turbo and GPT-4 is able to generate significantly more optimized programs than OpenAI models. It is capable of producing, short, target optimizations, including appropriate compiler directive addition. ### _Answer to RQ2 (Optimization Focus)_ The string similarity distribution for all generated programs is shown in Figure 3. GPT-3.5-Turbo and GPT-4 generate prediction at all string similarity levels, which means they sometimes generate target optimizations and sometimes rewrite the whole program. On the contrary, most of Supersonic's predictions have a high string similarity (red distribution shifted to the right), which shows Supersonic generates much more precise edits to the programs under optimization. In addition, we see that most generated programs have a similarity higher than 0.8, which is very consistent with the training data curation, as described in subsection 3.2, showing the soundness of the end-to-end training and inference loop. Figure 4 shows how the %OPT metric changes with different string similarity thresholds for generated programs that optimize the running time. The y-axis is the %OPT value. The x-axis is the string similarity threshold, meaning that we filter all generated programs with a lower string similarity score than the corresponding x value. For \(x=0\), it means that we include all generated programs, and for \(x=1\), all generated programs are filtered out. Reading the graph from right to left, we can see that Supersonic is the best (red curve at the top) until the string similarity threshold is close to 0.4. This means the string similarity threshold of 0.8 in RQ1 was not a magic, unstable one. Rather, it can be changed to a large degree and Supersonic would still be the best performing tool. From the same graph, we can also see that GPT-4 performs better than GPT-3.5-Turbo when the threshold is 0, but the relationship reverses when the threshold is higher. Overall, it is clear that GPT-4 is generating more full rewrites than GPT-3.5-Turbo. To give an idea of how dissimilar two programs are when the similarity score is low, Listing 7 shows a GPT-3.5-Turbo generated program with similarity 0.36. It is clear that most lines are changed, including variable names. Other significant rewrites by GPT-3.5-Turbo in that example include _std::transform_ substituted with for-loops and _char_ type used instead of integer for comparison. Fig. 4: %OPT for running time optimization of Supersonic, GPT-3.5-Turbo and GPT-4 if we limit the string similarity at various thresholds between the predicted submission and the original solution. Fig. 3: The string similarity distribution for the similarity between the predicted submission and the original solution. Best viewed in color. * [leftmargin=*,noitemsep,topsep=0pt] * **RQ2:**Supersonic is the best performing model compared to GPT-3.5-Turbo and GPT-4 even when we lower the similarity threshold. GPT-3.5-Turbo and GPT-4 tend to optimize by fully rewriting the program which is argued to be a different task than optimization, closer to re-implementation. To the best of our knowledge, Supersonic is the best model for generating precise, targeted, and effective source code optimizations. 
### _Answer to RQ3 (Ablation Study)_

For this research question, we focus on investigating the efficacy of two distinct code output representations when training our Supersonic model. One version of Supersonic is trained to generate an entire program as output, exemplified in Listing 2. The second version is trained to produce a diff-based output representation, which can be seen in Listing 3. Table IV shows the comparison between the two versions on the predicted submissions to the Codeforces and AIZU code competition websites. Supersonic with a diff-based output representation is clearly better than Supersonic with a full program output. The diff-based output representation optimizes more programs on both Codeforces (26.0% versus 11.0% for running time, 8.0% versus 2.7% for memory consumption) and AIZU (3.5% versus 1.5% for running time, 1.2% versus 0.0% for memory consumption). Considering the presented data, it is evident that the diff-based output representation of Supersonic consistently outperforms its full program counterpart in terms of generating a higher percentage of optimized programs. This fully validates our initial hypothesis that longer output sequences are harder to learn and lower the overall performance of the model. However, the diff-based representation comes at the cost of stricter well-formedness rules for applying the diff-based output. In the breakdown of all generated programs of Supersonic in RQ1, we indeed showed that a portion of the diff-based output representations are malformed (2045 out of 5590). While potentially being valid optimizations, these result in fewer programs being submitted for evaluation. Overall, the advantages of the short diff representation outweigh the cons, and Supersonic with the diff-based output representation clearly outperforms the full program output representation.

**RQ3:** Supersonic with the original diff-based output representation doubles the performance compared to the traditional full program output representation. While the majority of works in the field of code generation with deep learning use full program or full function output, our results clearly call for more exploration of shorter, contextual output representations such as Supersonic's.

## 6 Discussion

### _Threats to Validity_

#### 6.1.1 Internal Threats

* Data leakage: The evaluation of GPT-3.5-Turbo and GPT-4 poses an internal threat in the form of potential data leakage that is beyond our control. The data used to train GPT-3.5-Turbo and GPT-4 is not publicly accessible, which means we lack visibility into the specific information they were exposed to during their training process. Consequently, there exists a potential risk of data leakage, namely that GPT-3.5-Turbo and GPT-4 have already seen examples from the test set during their training.

\begin{table} \begin{tabular}{l l|c c|c c} \multirow{2}{*}{Metrics} & \multicolumn{2}{c|}{Supersonic Full} & \multicolumn{2}{c}{Supersonic Diff} \\ & & Running time & Memory & Running time & Memory \\ \hline Codeforces & \%OPT & 11.0\% (33) & 2.7\% (8) & 26.0\% (78) & 8.0\% (24) \\ & PI & 3.2\(\times\) & 1.5\(\times\) & 2.6\(\times\) & 1.8\(\times\) \\ \hline AIZU & \%OPT & 1.5\% (4) & 0.0\% (0) & 3.5\% (9) & 1.2\% (3) \\ & PI & 3.24\(\times\) & - & 2.82\(\times\) & 1.23\(\times\) \\ \end{tabular} \end{table} TABLE IV: Supersonic with diff-based output representation (Supersonic Diff) and full program (Supersonic Full) as output.
Within the parentheses for %OPT is the absolute number of optimized programs #### 6.1.1.2 Functionally incorrect programs. We have noticed that some original programs are functionally correct according to the original metadata, but they failed when we submitted them ourselves. For Codeforces, the main reason was that they were either because of time/memory limit, or compilation error. For AIZU, it is a mix of compile error, time/memory limit, and wrong answer. We hypothesize that it is because of the compiler version being updated throughout the year. For these programs, we still count them as failures when calculating the %OPT value to ensure a fair comparison. #### 6.1.1.3 Measurement uncertainty. We also found that if we submit the same program twice to the Codeforces platform, we may get different execution times, e.g. 0ms and 15ms. This is because the Codeforces platform splits the execution time as different blocks, which means 0ms and 15ms are equal. Therefore when comparing execution time for programs on Codeforces, we divided all time into blocks of 16, i.e. 0 and 15 would be block 1, 16 and 31 would be block 2. Then we use the block number to calculate the speed up. Additionally, to account for noise, we also define a program to be more optimized if the time or memory improvement is at least 1.2, as mentioned in subsection 4.3. #### 6.1.2 External Threats A significant external threat is that CodeNet and Codeforces datasets are both from code competition websites. They do not fully represent the diversity of real-world optimization problems. The problems on competition websites are designed with code optimizations in mind, and they facilitate the comparison between submissions by having functional correctness, running time, and memory consumption as part of their result. However, real-world optimizations are likely more varied. Future work may address this threat by creating new benchmarks for code optimization. ### _Code Optimization Versus Code Synthesis_ Our extensive experience looking at predicted optimization has convinced us that the boundary between code optimization and code synthesis is hard to define. Consider a hypothetical example: an original program implements a naive algorithm to sort a list, such as bubble sort. In some cases, the distinction between optimization and synthesis is obvious, for example if the model proposes to replace the original program with a quicksort or mergesort implementation. In other cases, if the model made many changes while retaining the core concept of bubble sort, the distinction is less clear. The distinction between code optimization and full re-implementation has received limited attention in related research. We believe that clarifying this distinction is crucial in understanding the true capabilities of neural models. In this paper, we contribute to this clarification in a fully operational way. We draw the line between optimization and re-implementation by looking at the string similarity between the original and the predicted optimized program. This is a well-formed definition: a string similarity value of 0 is pure code synthesis, and a value close to 1 is a small change to the program. Nonetheless, this purely syntactic solution could be improved with semantic and runtime analysis, we leave this as a topic for future exploration. ### _Importance of Input/Output Code Representation_ Designing the input and output representations of code for machine learning models is a critical aspect that demands careful consideration. 
The effectiveness of these representations can significantly impact the performance and efficiency of the resulting models. Creating an appropriate input representation for code is akin to providing the model with the right set of tools for understanding and processing the task at hand. When designing input representations, it is essential to strike a balance between comprehensiveness and conciseness. The representation must capture the relevant information that enables the model to perform the desired task effectively. For the code optimization task, it is uncommon that we know in advance the source code location where we can optimize the program. In comparison, for other code-related tasks such as program repair, we often have fault localization techniques that guide us toward the buggy locations. Therefore, for Supersonic, we decided to use the full program as input. Crafting a concise and precise output representation for code is akin to presenting the model's insights and solutions in a simple manner, ensuring that the generated code conveys the intended functionality without unnecessary redundancy. Redundancy in the output means repeating information that was already present in the input, which increases the probability of unnecessary mistakes. Such mistakes are more unforgiving than mistakes in natural language, as programs are interpreted by compilers with strict parser rules. In Supersonic, we use the diff-based output representation to minimize the redundancy and the risk of mistakes, and we only keep two context lines to localize the change. RQ3 proves that this design choice alone improves the performance of the model by 2 times. This shows the importance of designing proper input and output representations for machine learning on code, an aspect that is rarely discussed.

### _Large versus Small Models_

It is widely accepted that larger models perform better: they are more sample-efficient [12], are few-shot learners [38], and their performance increases with model size [20]. Furthermore, the accuracy and the robustness of a model also increase with dataset size [39]. One model that we compared against, GPT-3.5-Turbo, has at least 175B parameters and is trained on 500 billion tokens [12]. It has demonstrated capabilities for many software engineering-related tasks, such as code generation [3], program repair [4], and code summarization [2]. In RQ2 of our study, we have also shown the good performance of GPT-3.5-Turbo on the code optimization task. However, in RQ1, Supersonic was able to outperform GPT-3.5-Turbo on the task of generating similar but optimized programs, despite being approximately 600x smaller (175B versus 278M) and trained with much less data. This shows the relevance of training a smaller model focused on a single task with well-curated input and output representations.

## 7 Related Work

### _ML for Source Code Optimization_

PIE4Perf is the most related work [6]. In this work, Madaan, Shypula, Alon, _et al._ investigate the ability of large language models to suggest performance-improving code edits. First, they extract samples of performance-improving code edits from the CodeNet dataset. Then, they fine-tune the CodeGen model and they prompt Codex using few-shot prompting. They find that the system can generate performance-improving edits with speedups of more than 2.5x for over 25% of the programs, and that a fine-tuned model 10x smaller than Codex can match its performance.
The main differences between Supersonic and PIE4Perf are: 1) we study both running time and memory optimization, while PIE4Perf only looks at running time optimization; 2) Supersonic is based on an original diff-based output representation, while PIE4Perf outputs simple vanilla programs; 3) we take care of removing re-implementations in the evaluation procedure, yielding more meaningful measurements; 4) we report competitive results w.r.t. GPT-3.5-Turbo and GPT-4 with a model that is 600x and 3700x smaller, respectively.

DeepDev-PERF is a deep learning-based approach to improve software performance for C# applications [7]. The model is pre-trained on both English and source code corpora and fine-tuned on the task of performance improvement for C# applications. The evaluation shows that the model can generate the same performance improvement suggestions as the developer patches in 53% of the cases. The authors also submitted 19 pull requests with performance optimizations, of which 11 were accepted. Supersonic differs from DeepDev-PERF in that it targets C/C++ optimizations. The evaluation is also done by submitting to code competition websites that report the running time and memory consumption, instead of using a benchmark.

Chen, Tarlow, Swersky, _et al._ propose a discrete variational auto-encoder to extract discrete latent variables, each representing a code edit that increases program performance [40]. The learned discrete latent variables can be used to guide programmers toward writing more efficient source code. They show that the discrete variational auto-encoder extracts better code efficiency edits than the Transformer baseline. Supersonic, on the other hand, does not extract code efficiency edits but directly predicts the more optimized program in an end-to-end way.

RAPGen is an approach based on the OpenAI Codex model to do zero-shot code inefficiency fixing in C# methods [41]. This is done by first collecting a dataset of performance bug fixes through keyword matching on commit messages. The dataset is used to extract identifiers that are changed in the commits to form a knowledge base. The knowledge base is used to form an instruction describing which identifiers were added/removed/edited given the input method. To predict the code inefficiency fix, they build a prompt consisting of the buggy method as a comment, followed by the instruction of which variables to change, and ending with the buggy method's signature. The prompt is used as input to the Codex model, which generates the fixed method. They found that it performs better than DeepDev-PERF on exact match and CodeBLEU score, without any training.

Artemis++ is a tool for syntax-based optimization of C++ using a genetic algorithm [42]. The tool automatically chooses implementations of common data structures to provide performance improvements. It uses a genetic algorithm that automatically performs source code transformations and produces an optimized version of the program. The tool is evaluated on three C++ libraries, observing improvements of up to 16.0%, 27.9%, and 2.74% for CPU usage, runtime, and memory, respectively. Supersonic is different in that it is trained to do more than data structure optimization, and relies on a language model and seq2seq learning instead of a genetic algorithm.

LoopLearner is a tool that predicts the speedup of loop transformations [43]. They encode the source code as a vector and concatenate a compact encoding of transformations to the vector.
The concatenated vector is then fed into a CNN or RNN to predict the speedup of transformations. They found that applying the top predicted transformation yields an average speedup of 1.29x. While LoopLearner only focuses on loops, Supersonic takes the whole program as input and can target different source code locations.

### _ML for Compiler Optimization_

Ashouri _et al._ present a survey about compiler autotuning [44]. They summarize papers on two problems in the field of machine learning for compiler optimization: 1) selecting the best optimizations and 2) the phase-ordering of optimizations. Wang _et al._ focus more on the different machine-learning techniques used in the compiler optimization field.

Cummins _et al._ propose ProGraML, a graph-based program representation that can be used as input to machine learning models [46]. The representation is a union of a control flow graph, data flow graph, and call flow graph. They show that on the task of choosing a CPU or GPU to run an OpenCL kernel, it surpassed prior approaches on all metrics.

DeepTune is a tool that predicts optimization heuristics for OpenCL kernels [47]. It directly uses the source code, instead of the binary code, as the input to an LSTM model to predict heterogeneous mapping (CPU or GPU) and the OpenCL thread coarsening factor. The results show that by learning directly on source code, instead of manual features extracted from the binary code, DeepTune could match or surpass 89% of the predictions.

Narayanamurthy _et al._ use genetic algorithms to find the best set of compiler optimizations that optimize performance while providing better error resilience [48]. The fitness function used in the genetic algorithm measures the error resilience of a candidate optimization. The resulting program is able to achieve performance similar to -O1, -O2, and -O3 while having better error resilience.

Schulte _et al._ use a genetic algorithm to improve non-functional properties of executables, such as energy efficiency [49]. The fitness function used in the genetic algorithm combines hardware performance counters into a single scalar value. They find that they can reduce energy usage by 20% on average on AMD and Intel systems.

Wang _et al._ use machine learning to partition stream processing programs [50]. Partitioning refers to dividing the program graph into clusters that are allocated to threads. They define different program features and use PCA to reduce the dimensionality to 10. Then, they use a k-means clustering algorithm to find similar good partitions seen previously. They find that it achieves a 1.90x speedup over the partition already tuned by the compiler.

Churchill _et al._ propose an approach to optimize loops in Google Native Client [51]. The approach has a bounded verifier that verifies the correctness of a loop transformation up to a bound k. If it fails, it can create a counter-example that guides the search away from this transformation. They then apply a sound verifier that uses strong loop invariants. In the evaluation, they achieve an average 25% speedup compared to libraries shipped by Google, and the optimized program can be formally verified.

Stoke is a tool that formulates the binary superoptimization problem as a stochastic search problem [52]. The search is guided by a cost function with a correctness term and a performance term. They use Markov chain Monte Carlo to sample binary modifications.
By starting with a binary compiled with -O0, Stoke was able to either match or outperform code produced by -O3. Bunel _et al._ improve upon Stoke by replacing the search distribution [53]. Stoke uses a uniform distribution to sample modifications of the program, and Bunel _et al._ instead use reinforcement learning to learn the distribution from past behaviors and program semantics. By doing so, they can increase the probability of reaching the best-optimized programs and generate better programs in fewer iterations.

Rotem _et al._ tackle the problem of improving LLVM's BranchProbabilityInfo (BPI) heuristics [54]. They first use the profile-guided optimization (PGO) workflow to compile many different programs and record their branch probabilities. These data are used to train a gradient-boosted tree model. Once the model is trained, it can be used to predict branch probabilities for programs compiled without the PGO workflow. They find that the geometric mean speedup of the new BPI heuristic across 10 workloads is 1.016.

Baghdadi _et al._ develop a deep learning-based cost model for predicting speedups of code transformations in the Tiramisu compiler [55]. The training data is created by generating random programs, applying a series of code transformations, and recording the actual speedups. The deep learning model is a combination of recurrent and recursive neural networks. They find that the proposed model has a mean absolute percentage error of only 16% when predicting speedups on full programs, and that it is able to find transformations that match or outperform state-of-the-art compilers.

Agakov _et al._ use machine learning to speed up iterative optimization of loops [56], i.e., iteratively trying different compiler optimizations to determine the best set of optimizations. They extract different features from the loop and use PCA to represent the loop with a 5-dimensional vector. This vector is used to determine the nearest neighbor in the training data and retrieve the best compiler optimizations. They found that the average speedup is 1.22x on the TI C6713 and 1.27x on the AMD Au1500.

### _Other Work on Optimization_

Petke _et al._ present a comprehensive survey of work using genetic algorithms to improve programs [57]. They find that genetic algorithms have been able to improve program performance for a diverse set of properties, such as execution time, energy, and memory consumption.

Marco _et al._ propose an approach to estimate the memory behavior of Spark applications to allow higher server utilization and system throughput [58]. They extract 22 raw features from the application and use PCA to reduce them to 5 principal components. To predict the function that best describes the memory usage, they use k-nearest neighbors to find the most similar cluster. The evaluation shows that they achieve a 1.28x improvement in system throughput and 1.68x in turnaround time.

Chen _et al._ introduce AutoTVM, a framework to optimize tensor operators that are used in deep learning models [59]. They use simulated annealing with a statistical cost function trained on historical data to find more optimized low-level code. Experiments show that it yields 1.2x to 3.8x performance improvements over existing frameworks.

Ahn _et al._ propose Chameleon, which improves upon AutoTVM by using reinforcement learning to optimize tensor operators [60]. Instead of using simulated annealing, which relies on random walks, they use adaptive exploration by leveraging reinforcement learning.
The result is that they achieve a 4.45x speedup in optimization time over AutoTVM.

## 8 Conclusion

In this paper, we have proposed Supersonic, a tool that can generate source code optimizations for C/C++ programs with targeted changes. Supersonic features an original diff-based output representation that is similar to a software patch. Our experiments clearly show that Supersonic outperforms GPT-3.5-Turbo and GPT-4 on the task of source code optimization. We believe that there is a large research avenue in the area of using small but specific models targeting a given software engineering task instead of using larger general models.
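As a concrete illustration of the diff-based output representation mentioned above, the following is a minimal Python sketch. The use of difflib, the file labels, and the example programs are illustrative assumptions; Supersonic's actual patch format may differ in its details.

```python
import difflib

def to_diff_output(original: str, optimized: str, context_lines: int = 2) -> str:
    """Render the predicted optimization as a unified diff with two context lines,
    instead of emitting the whole optimized program (illustrative sketch only)."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        optimized.splitlines(keepends=True),
        fromfile="original.cpp",
        tofile="optimized.cpp",
        n=context_lines,
    )
    return "".join(diff)

original = "int sum(int n) {\n  int s = 0;\n  for (int i = 1; i <= n; i++) s += i;\n  return s;\n}\n"
optimized = "int sum(int n) {\n  return n * (n + 1) / 2;\n}\n"
print(to_diff_output(original, optimized))
```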
2310.20458
Machine learning detects terminal singularities
Algebraic varieties are the geometric shapes defined by systems of polynomial equations; they are ubiquitous across mathematics and science. Amongst these algebraic varieties are Q-Fano varieties: positively curved shapes which have Q-factorial terminal singularities. Q-Fano varieties are of fundamental importance in geometry as they are "atomic pieces" of more complex shapes - the process of breaking a shape into simpler pieces in this sense is called the Minimal Model Programme. Despite their importance, the classification of Q-Fano varieties remains unknown. In this paper we demonstrate that machine learning can be used to understand this classification. We focus on 8-dimensional positively-curved algebraic varieties that have toric symmetry and Picard rank 2, and develop a neural network classifier that predicts with 95% accuracy whether or not such an algebraic variety is Q-Fano. We use this to give a first sketch of the landscape of Q-Fanos in dimension 8. How the neural network is able to detect Q-Fano varieties with such accuracy remains mysterious, and hints at some deep mathematical theory waiting to be uncovered. Furthermore, when visualised using the quantum period, an invariant that has played an important role in recent theoretical developments, we observe that the classification as revealed by ML appears to fall within a bounded region, and is stratified by the Fano index. This suggests that it may be possible to state and prove conjectures on completeness in the future. Inspired by the ML analysis, we formulate and prove a new global combinatorial criterion for a positively curved toric variety of Picard rank 2 to have terminal singularities. Together with the first sketch of the landscape of Q-Fanos in higher dimensions, this gives new evidence that machine learning can be an essential tool in developing mathematical conjectures and accelerating theoretical discovery.
Tom Coates, Alexander M. Kasprzyk, Sara Veneziale
2023-10-31T13:51:24Z
http://arxiv.org/abs/2310.20458v1
# Machine learning detects terminal singularities

###### Abstract.

Algebraic varieties are the geometric shapes defined by systems of polynomial equations; they are ubiquitous across mathematics and science. Amongst these algebraic varieties are Q-Fano varieties: positively curved shapes which have Q-factorial terminal singularities. Q-Fano varieties are of fundamental importance in geometry as they are 'atomic pieces' of more complex shapes - the process of breaking a shape into simpler pieces in this sense is called the Minimal Model Programme. Despite their importance, the classification of Q-Fano varieties remains unknown. In this paper we demonstrate that machine learning can be used to understand this classification. We focus on eight-dimensional positively-curved algebraic varieties that have toric symmetry and Picard rank two, and develop a neural network classifier that predicts with 95% accuracy whether or not such an algebraic variety is Q-Fano. We use this to give a first sketch of the landscape of Q-Fano varieties in dimension eight. How the neural network is able to detect Q-Fano varieties with such accuracy remains mysterious, and hints at some deep mathematical theory waiting to be uncovered. Furthermore, when visualised using the quantum period, an invariant that has played an important role in recent theoretical developments, we observe that the classification as revealed by ML appears to fall within a bounded region, and is stratified by the Fano index. This suggests that it may be possible to state and prove conjectures on completeness in the future. Inspired by the ML analysis, we formulate and prove a new global combinatorial criterion for a positively curved toric variety of Picard rank two to have terminal singularities. Together with the first sketch of the landscape of Q-Fanos in higher dimensions, this gives strong new evidence that machine learning can be an essential tool in developing mathematical conjectures and accelerating theoretical discovery.

Key words and phrases: Fano varieties, terminal singularities, machine learning. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). 2020 Mathematics Subject Classification: 14J45 (Primary); 68T07 (Secondary).

## 1. Introduction

Systems of polynomial equations occur throughout mathematics and science; see e.g. [4, 23, 25, 43]. Solutions of these systems define shapes called _algebraic varieties_. Depending on the equations involved, algebraic varieties can be smooth (as in Figure 1(a)) or have singularities (as in Figures 1(b) and 1(c)). In this paper we show that machine learning methods can detect a class of singularities called _terminal singularities_. A key class of algebraic varieties are _Fano varieties_: positively curved shapes that are basic building blocks in algebraic geometry. Fano varieties are 'atomic pieces' of more complex shapes, in the sense of the Minimal Model Programme [11, 33, 35]. Running the Minimal Model Programme - that is, breaking an algebraic variety \(X\) into atomic pieces - involves making birational transformations of \(X\). These are

Figure 1. Algebraic varieties in \(\mathbb{R}^{3}\) with different defining equations.

The classification of \(\mathbb{Q}\)-Fano varieties is a long-standing problem of great importance [6, 20, 34, 41, 42] - one can think of this as building a Periodic Table for geometry. But, despite more than a century of study, very little is known.
In what follows we exploit the fact that machine learning can detect terminal singularities to give the first sketch of part of the classification of higher-dimensional \(\mathbb{Q}\)-Fano varieties. We probe the classification of \(\mathbb{Q}\)-Fano varieties using a class of highly-symmetrical shapes called _toric varieties_. (For example, the algebraic varieties pictured in Figure 1 are toric varieties.) Toric varieties are particularly suitable for computation and machine learning, because their geometric properties are encoded by simple combinatorial objects. We consider Fano toric varieties of Picard rank two. These can be encoded using a \(2\times N\) matrix of non-negative integers called the _weight matrix_; here the dimension of the toric variety is \(N-2\). To determine whether such a toric variety \(X\) is a \(\mathbb{Q}\)-Fano variety we need to check whether \(X\) is \(\mathbb{Q}\)-factorial, and whether the singularities of \(X\) are terminal. Checking \(\mathbb{Q}\)-factoriality from the weight matrix of \(X\) turns out to be straightforward (see SS3) but checking terminality is extremely challenging. This is because there is no satisfactory theoretical understanding of the problem. We lack a global criterion for detecting terminality in terms of weight data (such as [32] in a simpler setting) and so have to fall back on first enumerating all the singularities to analyse, and then checking terminality for each singularity. Each step is a challenging problem in discrete geometry: the first step involves building a different combinatorial object associated to the \(n\)-dimensional toric variety \(X\), which is a collection of cones in \(\mathbb{R}^{n}\) called the _fan \(\Sigma(X)\)_; the second step involves checking for various cones in the fan whether or not they contain lattice points on or below a certain hyperplane. To give a sense of the difficulty of the computations involved, generating and post-processing our dataset of 10 million toric varieties in dimension eight took around 30 CPU years. To overcome this difficulty, and hence to begin to investigate the classification of \(\mathbb{Q}\)-Fano varieties in dimension eight, we used supervised machine learning. We trained a feed-forward neural network classifier on a balanced dataset of 5 million examples; these are eight-dimensional \(\mathbb{Q}\)-factorial Fano toric varieties of Picard rank two, of which 2.5 million are terminal and 2.5 million non-terminal. Testing on a further balanced dataset of 5 million examples showed that the neural network classifies such toric varieties as terminal or non-terminal with an accuracy of 95%. This high accuracy allowed us to rapidly generate many additional examples that are with high probability \(\mathbb{Q}\)-Fano varieties - that is, examples that the classifier predicts have terminal singularities. This ML-assisted generation step is much more efficient: generating 100 million examples in dimension eight took less than 120 CPU hours. The fact that the ML classifier can detect terminal singularities with such high accuracy suggests that there is new mathematics waiting to be discovered here - there should be a simple criterion in terms of the weight matrix to determine whether or not a toric variety \(X\) has terminal singularities. In SS5 we take the first steps in this direction, giving in Algorithm 1 a new method to check terminality directly from the weight matrix, for toric varieties of Picard rank two. A proof of correctness is given in SSE. 
This new algorithm is fifteen times faster than the naive approach that we used to generate our labelled dataset, but still several orders of magnitude slower than the neural network classifier. We believe that this is not the end of the story, and that the ML results suggest that a simpler criterion exists. Note that the neural network classifier cannot be doing anything analogous to Algorithm 1: the algorithm relies on divisibility relations between entries of the weight matrix (GCDs etc.) that are not visible to the neural network, as they are destroyed by the rescaling and standardisation that is applied to the weights before they are fed to the classifier. In SS6 we use the ML-assisted dataset of 100 million examples to begin to explore the classification of \(\mathbb{Q}\)-Fano varieties in dimension eight. We visualise the dataset using the _regularized quantum period_, an invariant that has played an important role in recent theoretical work on \(\mathbb{Q}\)-Fano classification, discovering that an appropriate projection of the data appears to fill out a wedge-shaped region bounded by two straight lines. This visualisation suggests some simple patterns in the classification: for example, the distance from one edge of the wedge appears to be determined by the Fano index of the variety. Our work is further evidence that machine learning can be an indispensable tool for generating and guiding mathematical understanding. The neural network classifier led directly to Algorithm 1, a new theoretical result, by revealing that the classification problem was tractable and thus there was probably new mathematics waiting to be found. This is part of a new wave of application of artificial intelligence to pure mathematics [15, 19, 22, 27, 49, 50, 51], where machine learning methods drive theorem discovery. A genuinely novel contribution here, though, is the use of machine learning for data generation and data exploration in pure mathematics. Sketching the landscape of higher-dimensional \(\mathbb{Q}\)-Fano varieties using traditional methods would be impossible with the current theoretical understanding, and prohibitively expensive using the current exact algorithms. Training a neural network classifier however, allows us to explore this landscape easily - a landscape that is unreachable with current mathematical tools. ### Why dimension eight? We chose to work with eight-dimensional varieties for several reasons. It is important to distance ourselves from the surface case (dimension two), where terminality is a trivial condition. A two-dimensional algebraic variety has terminal singularities if and only if it is smooth. On the other hand, we should consider a dimension where we can generate a sufficient amount of data for machine learning (the analogue of our dataset in dimension three, for example, contains only 34 examples [31]) and where we can generate enough data to meaningfully probe the classification. Moreover, we work in Picard rank two because there already exists a fast combinatorial formula to check terminality in rank one [32]; Picard rank two is the next natural case to consider. ## 2. Mathematical background The prototypical example of a Fano variety is projective space \(\mathbb{P}^{N-1}\), which can be thought of as the quotient of \(\mathbb{C}^{N}\setminus\{\mathbf{0}\}\) by \(\mathbb{C}^{\times}\) acting as follows: \[\lambda\cdot(z_{1},\dots,z_{N})=(\lambda z_{1},\dots,\lambda z_{N})\] Fano toric varieties of Picard rank two arise similarly. 
They can be constructed as the quotient of \(\mathbb{C}^{N}\setminus S\), where \(S\) is a union of subspaces, by an action of \((\mathbb{C}^{\times})^{2}\). This action, and the union of subspaces \(S\), is encoded by a weight matrix: \[\begin{bmatrix}a_{1}&\cdots&a_{N}\\ b_{1}&\cdots&b_{N}\end{bmatrix} \tag{2.1}\] Here we assume that all \((a_{i},b_{i})\in\mathbb{Z}^{2}\setminus\{\mathbf{0}\}\) lie in a strictly convex cone \(C\subset\mathbb{R}^{2}\). The action is \[(\lambda,\mu)\cdot(z_{1},\dots,z_{N})=(\lambda^{a_{1}}\mu^{b_{1}}z_{1},\dots, \lambda^{a_{N}}\mu^{b_{N}}z_{N})\] and \(S=S_{+}\cup S_{-}\) is the union of subspaces \(S_{+}\) and \(S_{-}\), where \[\begin{split} S_{+}&=\{(z_{1},\dots,z_{N})\mid z_{i}=0 \text{ if }b_{i}/a_{i}>b/a\}\\ S_{-}&=\{(z_{1},\dots,z_{N})\mid z_{i}=0\text{ if }b_{i}/a_{i}<b/a \}\end{split} \tag{2.2}\] and \(a=\sum_{i=1}^{N}a_{i}\), \(b=\sum_{i=1}^{N}b_{i}\): see [8]. The quotient \(X=(\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times})^{2}\) is an algebraic variety of dimension \(N-2\). We assume in addition that both \(S_{+}\) and \(S_{-}\) have dimension at least two; this implies that the second Betti number of \(X\) is two, that is, \(X\) has Picard rank two. Since we have insisted that all columns \((a_{i},b_{i})\) lie in a strictly convex cone \(C\), we can always permute columns and apply an \(\operatorname{SL}_{2}(\mathbb{Z})\) transformation to the weight matrix to obtain a matrix in standard form: \[\begin{bmatrix}a_{1}&a_{2}&\cdots&a_{N}\\ 0&b_{2}&\cdots&b_{N}\end{bmatrix} \tag{2.3}\] where all entries are non-negative, the columns are cyclically ordered anticlockwise, and \(a_{N}<b_{N}\). This transformation corresponds to renumbering the co-ordinates of \(\mathbb{C}^{N}\) and reparametrising the torus \((\mathbb{C}^{\times})^{2}\) that acts, and consequently leaves the quotient variety \(X\) that we construct unchanged. We will consider weight matrices (2.1) that satisfy an additional condition called being _well-formed_. An \(r\times N\) weight matrix is called standard if the greatest common divisor of its \(r\times r\) minors is one, and is well-formed if every submatrix formed by deleting a column is standard [2]. Considering only well-formed weight matrices guarantees that a toric variety determines and is determined by its weight matrix, uniquely up to \(\operatorname{SL}_{r}(\mathbb{Z})\)-transformation. **Testing terminality.** As mentioned in the introduction, an \(n\)-dimensional toric variety \(X\) determines a collection \(\Sigma(X)\) of cones in \(\mathbb{R}^{n}\) called the fan of \(X\). A toric variety is completely determined by its fan. The process of determining the fan \(\Sigma(X)\) from the weight matrix (2.1) is explained in SSA; this is a challenging combinatorial calculation. In the fan \(\Sigma(X)\), the one-dimensional cones are called rays. For a Fano toric variety \(X\), taking the convex hull of the first lattice point on each ray defines a convex polytope \(P\), and \(X\) has terminal singularities if and only if the only lattice points in \(P\) are the origin and the vertices. Verifying this is a conceptually straightforward but computationally challenging calculation in integer linear programming. ## 3. Data generation We generated a balanced, labelled dataset of ten million \(\mathbb{Q}\)-factorial Fano toric varieties of Picard rank two and dimension eight. These varieties are encoded, as described above, by weight matrices. 
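Because every example in the dataset must be a well-formed weight matrix, the standard and well-formed conditions recalled in §2 have to be checked for each candidate. A minimal Python sketch of this check for the rank-two case is given below; the function names and the example matrix are ours and purely illustrative.

```python
from itertools import combinations
from math import gcd

def standard(columns):
    """A 2 x N weight matrix, given as a list of N columns (a_i, b_i), is standard
    if the gcd of all its 2 x 2 minors is one."""
    minors = [a_i * b_j - a_j * b_i for (a_i, b_i), (a_j, b_j) in combinations(columns, 2)]
    g = 0
    for m in minors:
        g = gcd(g, abs(m))
    return g == 1

def well_formed(columns):
    """Well-formed: every submatrix obtained by deleting one column is standard."""
    return all(standard(columns[:i] + columns[i + 1:]) for i in range(len(columns)))

# Example: the weight matrix of P^1 x P^2, with columns (1,0),(1,0),(0,1),(0,1),(0,1).
cols = [(1, 0), (1, 0), (0, 1), (0, 1), (0, 1)]
print(standard(cols), well_formed(cols))
```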
We generated \(2\times 10\) integer-valued matrices in standard form, as in (2.3), with entries chosen uniformly at random from the set \(\{0,\dots,7\}\). Minor exceptions to this were the values for \(a_{1}\) and \(b_{N}\), which were both chosen uniformly at random from the set \(\{1,\dots,7\}\), and the value for \(a_{N}\), which was chosen uniformly at random from the set \(\{0,\dots,b_{N}-1\}\). Once a random weight matrix was generated, we retained it only if it satisfied: 1. None of the columns are the zero vector. 2. The sum of the columns is not a multiple of any of them. 3. The subspaces \(S_{+}\) and \(S_{-}\) in (2.2) are both of dimension at least two. 4. The matrix is well-formed. The first condition here was part of our definition of weight matrix; the second condition is equivalent to \(X\) being \(\mathbb{Q}\)-factorial; the third condition guarantees that \(X\) has Picard rank two; and the fourth condition was discussed above. We used rejection sampling to ensure that the dataset contains an equal number of terminal and non-terminal examples. Before generating any weight matrix, a boolean value was set to True (terminal) or False (non-terminal). Once a random weight matrix that satisfied conditions (1)-(4) above was generated, we checked if the corresponding toric variety was terminal using the method discussed in SS2. If the terminality check agreed with the chosen boolean, the weight matrix was added to our dataset; otherwise the generation step was repeated until a match was found. As discussed, different weight matrices can give rise to the same toric variety. Up to isomorphism, however, a toric variety \(X\) is determined by the isomorphism class of its fan. We deduplicated our dataset by placing the corresponding fan \(\Sigma(X)\), which we had already computed in order to test for terminality, in normal form [26, 37]. In practice, very few duplicates occurred. ## 4. Building the machine learning model We built a neural network classifier to determine whether a \(\mathbb{Q}\)-factorial Fano variety of Picard rank two and dimension eight is terminal. The network was trained on the features given by concatenating the two rows of a weight matrix, \([a_{1},\dots,a_{10},b_{1},\dots,b_{10}]\). The features were standardised by translating their mean to zero and scaling to variance one. The network, a multilayer perceptron, is a fully connected feedforward neural network with three hidden layers and leaky ReLu activation function. It was trained on the dataset described in SS3 using binary cross-entropy as loss function, stochastic mini-batch gradient descent optimiser and using early-stopping, for a maximum of 150 epochs and with learning rate reduction on plateaux. We tested the model on a balanced subset of 50% of the data (5M); the remainder was used for training (40%; 4M balanced) and validation (10%; 1M). Hyperparameter tuning was partly carried out using RayTune [39] on a small portion of the training data, via random grid search with Async Successive Halving Algorithm (ASHA) scheduler [38], for 100 experiments. Given the best configuration resulting from the random grid search, we then manually explored nearby configurations and took the best performing one. The final best network configuration is summarised in Table 1. By trying different train-test splits, and using 20% of the training data for validation throughout, we obtained the learning curve in Figure 2(a). 
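A minimal PyTorch sketch of the classifier described above (and summarised in Table 1) is given below. The class name and the simplified training step are illustrative rather than a reproduction of our exact code, and early stopping and learning-rate reduction on plateaux are omitted.

```python
import torch
import torch.nn as nn

class TerminalityClassifier(nn.Module):
    """Feed-forward network: 20 standardised weight-matrix entries in, three hidden
    layers of sizes 512, 768, 512 with LeakyReLU (slope 0.01), one logit out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(20, 512), nn.LeakyReLU(0.01),
            nn.Linear(512, 768), nn.LeakyReLU(0.01),
            nn.Linear(768, 512), nn.LeakyReLU(0.01),
            nn.Linear(512, 1),
        )

    def forward(self, x):
        return self.net(x)

model = TerminalityClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on the terminal / non-terminal label
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.99)

# One mini-batch step; features are the standardised entries [a_1,...,a_10,b_1,...,b_10].
features = torch.randn(128, 20)                    # placeholder for a standardised batch
labels = torch.randint(0, 2, (128, 1)).float()     # placeholder terminality labels
optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()
optimizer.step()
```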
The learning curve in Figure 2(a) shows that a train-validate-test split of 4M-1M-5M produced an accurate model that did not overfit. Training this model gave the loss learning curve in Figure 2(b), and a final accuracy (on the test split of size 5M) of 95%.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Hyperparameter** & **Value** & **Hyperparameter** & **Value** \\ \hline Layers & \((512,768,512)\) & Momentum & 0.99 \\ Batch size & 128 & LeakyReLU slope & 0.01 \\ Initial learning rate & 0.01 & & \\ \hline \hline \end{tabular} \end{table} Table 1. Final network architecture and configuration.

Figure 2. (a) Accuracy for different train-test splits; (b) epochs against loss for the network trained on 5M samples.

## 5. Theoretical result

The high accuracy of the model in §4 was very surprising. As explained in the introduction, \(\mathbb{Q}\)-Fano varieties are of fundamental importance in algebraic geometry. However, asking whether a Fano variety has terminal singularities is, in general, an extremely challenging geometric question. In the case of a Fano toric variety one would typically proceed by constructing the fan, and then performing a cone-by-cone analysis of the combinatorics. This is computationally expensive and unsatisfying from a theoretical viewpoint. The success of the model suggested that a more direct characterisation is possible from the weight matrix alone. An analogous characterisation exists in the simpler case of weighted projective spaces [32], which have Picard rank one; however, no such result in higher Picard rank was known prior to training this model. Inspired by this we prove a theoretical result, Proposition 3, which leads to a new algorithm for checking terminality directly from the weight matrix, for \(\mathbb{Q}\)-factorial Fano toric varieties of Picard rank two.

Consider a weight matrix as in (2.1) that satisfies conditions (1)-(4) from §3, and the toric variety \(X\) that it determines. As discussed in §2, and explained in detail in §A, \(X\) determines a convex polytope \(P\) in \(\mathbb{R}^{N-2}\), with \(N\) vertices given by the first lattice points on the \(N\) rays of the fan. Each of the vertices of \(P\) is a lattice point (i.e., lies in \(\mathbb{Z}^{N-2}\subset\mathbb{R}^{N-2}\)), and \(X\) has terminal singularities if and only if the only lattice points in \(P\) are the vertices \(e_{1},\ldots,e_{N}\) and the origin.

**Definition 1**.: Let \(\Delta_{i}\) denote the simplex in \(\mathbb{R}^{N-2}\) with vertices \(e_{1},\ldots,\hat{e}_{i},\ldots,e_{N}\) where \(e_{i}\) is omitted. We say that \(\Delta_{i}\) is _mostly empty_ if each lattice point in \(\Delta_{i}\) is either a vertex or the origin.

**Notation 2**.: Let \(\{x\}\) denote the fractional part \(x-\lfloor x\rfloor\) of a rational number \(x\).

**Proposition 3**.: _Consider a weight matrix_ \[\begin{bmatrix}a_{1}&\cdots&a_{N}\\ b_{1}&\cdots&b_{N}\end{bmatrix}\] _that satisfies conditions (1)-(4) from §3. Let \(g_{i}=\gcd\{a_{i},b_{i}\}\), and let \(A_{i}\), \(B_{i}\) be integers such that \(A_{i}a_{i}+B_{i}b_{i}=g_{i}\). Set_ \[\alpha_{i}^{j}=\frac{a_{i}b_{j}-b_{i}a_{j}}{g_{i}}\qquad\alpha_{i}=\sum_{j=1}^{N}\alpha_{i}^{j}\] \[\beta_{i}^{j}=-A_{i}a_{j}-B_{i}b_{j}\qquad\beta_{i}=\sum_{j=1}^{N}\beta_{i}^{j}\qquad f_{i}=\frac{\alpha_{i}g_{i}}{\gcd\{g_{i},\beta_{i}\}}\] _noting that all these quantities are integers.
Then \(\Delta_{i}\) is mostly empty if and only if for all \(k\in\{0,\ldots,f_{i}-1\}\) and \(l\in\{0,\ldots,g_{i}-1\}\) such that_ \[\sum_{j=1}^{N}\left\{k\frac{\alpha_{i}^{j}}{f_{i}}+l\frac{\beta_{i}^{j}}{g_{i} }\right\}=1\] _we have that_ \[\left\{k\frac{\alpha_{i}^{j}}{f_{i}}+l\frac{\beta_{i}^{j}}{g_{i}}\right\}= \left\{\frac{\alpha_{i}^{j}}{\alpha_{i}}\right\}\] _for all \(j\)._ Let \(s_{+}=\{i\mid a_{i}b-b_{i}a>0\}\), \(s_{-}=\{i\mid a_{i}b-b_{i}a<0\}\), and let \(I\) be either \(s_{+}\) or \(s_{-}\). Then \(\Delta_{i}\), \(i\in I\), forms a triangulation of \(P\). Thus \(X\) has terminal singularities if and only if \(\Delta_{i}\) is mostly empty for each \(i\in I\). This leads to Algorithm 1. ``` 1:Set \(a=\sum_{i=1}^{N}a_{i}\), \(b=\sum_{i=1}^{N}b_{i}\). 2:Set \(s_{+}=\{i\mid a_{i}b-b_{i}a>0\}\) and \(s_{-}=\{i\mid a_{i}b-b_{i}a<0\}\). 3:Set \(I\) to be the smaller of \(s_{+}\) and \(s_{-}\). 4:for\(i\in I\)do 5: Test if \(\Delta_{i}\) is mostly empty, using Proposition 3. 6:if\(\Delta_{i}\) is not mostly empty then 7: return False. 8:endif 9:endfor 10:return True. ``` **Algorithm 1** Test terminality for weight matrix \(W=[[a_{1},\ldots,a_{N}],[b_{1},\ldots,b_{N}]]\). **Comparisons.** Testing on \(100\,000\) randomly-chosen examples indicates that Algorithm 1 is approximately \(15\) times faster than the fan-based approach to checking terminality that we used when labelling our dataset (\(0.020\)s per weight matrix for Algorithm 1 versus \(0.305\)s for the standard approach implemented in Magma). On single examples, the neural network classifier is approximately \(30\) times faster than Algorithm 1. The neural network also benefits greatly from batching, whereas the other two algorithms do not: for batches of size \(10\,000\), the neural network is roughly \(2000\) times faster than Algorithm 1. ## 6. The terminal toric Fano landscape Having trained the terminality classifier, we used it to explore the landscape of \(\mathbb{Q}\)-Fano toric varieties with Picard rank two. To do so, we built a large dataset of examples and analysed their _regularized quantum period_, a numerical invariant of \(\mathbb{Q}\)-Fano varieties [12]. For smooth low-dimensional Fano varieties, it is known that the regularized quantum period is a complete invariant [13]. This is believed to be true in higher dimension, but is still conjectural. Given a \(\mathbb{Q}\)-Fano variety \(X\), its regularized quantum period is a power series \[\hat{G}_{X}(t)=\sum_{d=0}^{\infty}c_{d}t^{d}\] where \(c_{0}=1\), \(c_{1}=0\), \(c_{d}=d!\,r_{d}\), and \(r_{d}\) is the number of degree-\(d\) rational curves in \(X\) that satisfy certain geometric conditions. Formally speaking, \(r_{d}\) is a degree-\(d\), genus-zero Gromov-Witten invariant [36]. The _period sequence_ of \(X\) is the sequence (\(c_{d}\)) of coefficients of the regularized quantum period. This sequence grows rapidly. In the case where \(X\) is a \(\mathbb{Q}\)-Fano toric variety of Picard rank two, rigorous asymptotics for this growth are known. **Theorem 4** (Theorem 5.2, [15]).: _Consider a weight matrix_ \[\begin{bmatrix}a_{1}&\dots&a_{N}\\ b_{1}&\dots&b_{N}\end{bmatrix}\] _for a \(\mathbb{Q}\)-factorial Fano toric variety \(X\) of Picard rank two. 
Let \(a=\sum_{i=1}^{N}a_{i}\) and \(b=\sum_{i=1}^{N}b_{i}\), and let \([\mu\colon\nu]\in\mathbb{P}^{1}\) be the unique real root of the homogeneous polynomial_ \[\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{a_{i}b}-\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu )^{b_{i}a} \tag{6.1}\] _such that \(a_{i}\mu+b_{i}\nu\geq 0\) for all \(i\in\{1,2,\dots,N\}\). Let (\(c_{d}\)) be the corresponding period sequence. Then non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[\begin{split} A&=-\sum_{i=1}^{N}p_{i}\log p_{i}\\ B&=-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^ {N}\log p_{i}-\frac{1}{2}\log\left(\sum_{i=1}^{N}\frac{(a_{i}b-b_{i}a)^{2}}{ \ell^{2}p_{i}}\right)\end{split} \tag{6.2}\] _Here \(p_{i}=\frac{\mu a_{i}+\nu b_{i}}{\mu a+\nu b}\), so that \(\sum_{i}p_{i}=1\), and \(\ell=\gcd\{a,b\}\) is the Fano index._ In Figure 3 we picture our dataset of \(\mathbb{Q}\)-Fano varieties by using the coefficients \(A\) and \(B\) to project it to \(\mathbb{R}^{2}\); for the corresponding images for terminal Fano weighted projective spaces, see [15, Figure 7a]. Note the stratification by Fano index. Although many weight matrices can give rise to the same toric variety, in our context we are using well-formed weight matrices in standard form (2.3) and so at most two weight matrices can give rise to the same toric variety. We removed any such duplicates from our dataset, so the heatmap in Figure 3(b) reflects genuine variation in the distribution of \(\mathbb{Q}\)-Fano varieties, rather than simply the many-to-one correspondence between weight matrices and toric varieties. Data generation.The dataset pictured in Figure 3 was generated using an AI-assisted data generation workflow that combines algorithmic checks and our machine learning model, as follows. * Generate a random \(2\times 10\) matrix with entries chosen uniformly from \(\{0,1,2,3,4,5,6,7\}\). * Cyclically order the columns and only keep the matrix if it is in standard form, as in (2.3). * Check conditions (1)-(4) from SS3. * Predict terminality using the neural network classifier from SS4, only keeping examples that are classified as terminal and storing their probabilities. * Set \(\mu=1\) in (6.1) and solve the univariate real polynomial in the correct domain to obtain the solution \((1,\nu)\). * Calculate the coefficients \(A\) and \(B\) using the formulae in (6.2). The final dataset is composed of 100M samples. Each of these represents a \(\mathbb{Q}\)-factorial toric Fano variety of dimension eight and Picard rank two that the classifier predicts is a \(\mathbb{Q}\)-Fano variety. Data analysis.We note that the vertical boundary in Figure 3 is not a surprise. In fact, we can apply the log-sum inequality to the formula for \(A\) to obtain \[A=-\sum_{i=1}^{N}p_{i}\log(p_{i})\leq-\left(\sum_{i=1}^{N}p_{i}\right)\log \left(\frac{\sum_{i=1}^{N}p_{i}}{N}\right)=\log(N)\] In our case \(N=10\), and the vertical boundary that we see in Figure 3(a) is the line \(x=\log(10)\sim 2.3\). We also see what looks like a linear lower bound for the cluster; a similar bound was observed, and established rigorously, for weighted projective spaces in [15]. Closer analysis (see SSB) reveals large overlapping clusters that correspond to Fano varieties of different Fano index. Furthermore the simplest toric varieties of Picard rank two - products of projective spaces, and products of weighted projective spaces - appear to lie in specific regions of the diagram. ## 7. 
Limitations and future directions The main message of this work is a new proposed AI-assisted workflow for data generation in pure mathematics. This allowed us to construct, for the first time, an approximate landscape of objects of mathematical interest (\(\mathbb{Q}\)-Fano varieties) which is inaccessible by traditional methods. We hope that this methodology will have broad application, especially to other large-scale classification questions in mathematics, of which there are many [1, 18, 28]. Figure 3. A dataset of 100M probably-\(\mathbb{Q}\)-Fano toric varieties of Picard rank two and dimension eight, projected to \(\mathbb{R}^{2}\) using the growth coefficients \(A\) and \(B\) from (6.2). In (a) we colour by Fano index, while in (b) we colour a heatmap according to the frequency. Our approach has some limitations, however, which we enumerate here. Some of these limitations suggest directions for future research. A key drawback, common to most ML models, is that our classifier performs poorly on out-of-sample data. Recall from SS3 that the dataset we generated bounded the entries of the matrices by seven. For weight matrices within this range the model is extremely accurate (95%), however this accuracy drops off rapidly for weight matrices that fall outside of this range: 62% for entries bounded by eight; 52% for entries bounded by nine; and 50% for entries bounded by ten. See Figure 4 for details. Note that the network quickly degenerates to always predicting non-terminal singularities. Furthermore the training process seems to require more data than we would like, given how computationally expensive the training data is to generate. It is possible that a more sophisticated network architecture, that is better adapted to this specific problem, might require less data to train. Mathematically, our work here was limited to toric varieties, and furthermore only to toric varieties of Picard rank two. Finding a meaningful vectorisation of an arbitrary algebraic variety looks like an impossible task. But if one is interested in the classification of algebraic varieties up to deformation, this might be less of a problem than it first appears. Any smooth Fano variety in low dimensions is, up to deformation, either a toric variety, a toric complete intersection, or a quiver flag zero locus [13, 30]; one might hope that this also covers a substantial fraction of the \(\mathbb{Q}\)-Fano landscape. Each of these classes of geometry is controlled by combinatorial structures, and it is possible to imagine a generalisation of our vectorisation by weight matrices to this broader context. Generalising to \(\mathbb{Q}\)-factorial Fano toric varieties in higher Picard rank will require a more sophisticated approach to equivariant machine learning. In this paper, we could rely on the fact that there is a normal form (2.3) for rank-two weight matrices that gives an almost unique representative of each \(\operatorname{SL}_{2}(\mathbb{Z})\times S_{N}\)-orbit of weight matrices. For higher Picard rank \(r\) we need to consider weight matrices up to the action of \(G=\operatorname{SL}_{r}(\mathbb{Z})\times S_{N}\). Here no normal form is known, so to work \(G\)-equivariantly we will need to augment our dataset, to fill out the different \(G\)-orbits, or to use invariant functions of the weights as features. The latter option, geometrically speaking, is working directly with the quotient space. 
The best possible path forward would be to train an explainable model that predicted terminality from the weight data. This would allow us to extract from the machine learning not only that the problem is tractable, but also a precise mathematical conjecture for the solution. At the moment, however, we are very far from this. The multilayer perceptron that we trained is a black-box model, and post-hoc explanatory methods such as SHAP analysis [40] yielded little insight: all features were used uniformly, as might be expected. We hope to return to this point elsewhere. **Data and code availability.** The datasets underlying this work and the code used to generate them are available from Zenodo under a CC0 license [14]. Data generation and post-processing was carried out using the computational algebra system Magma V2.27-3 [7]. The machine learning model was built using PyTorch v1.13.1 [45] and scikit-learn v1.1.3 [46]. All code used and trained models are available from BitBucket under an MIT licence [16]. **Acknowledgements.** TC was partially supported by ERC Consolidator Grant 682603 and EPSRC Programme Grant EP/N03189X/1. AK was supported by EPSRC Fellowship EP/N022513/1. SV was supported by the Engineering and Physical Sciences Research Council [EP/S021590/1], the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Figure 4. Confusion matrices for the neural network classifier on in-sample and out-of-sample data. In each case a balanced set of 10 000 random examples was tested. Number Theory), University College London. The authors would like to thank Hamid Abban, Alessio Corti, and Challenger Mishra for many useful conversations, and the anonymous referees for their insightful feedback and suggestions. ## Supplementary Material A Mathematical background ### Toric varieties The prototypical example of a toric Fano variety is two-dimensional projective space, \(\mathbb{P}^{2}\). As mentioned in SS2, this is defined by taking the quotient of \(\mathbb{C}^{3}\setminus\{\mathbf{0}\}\) by the following action of \(\mathbb{C}^{\times}\): \[\lambda\cdot(z_{1},z_{2},z_{3})=(\lambda z_{1},\lambda z_{2},\lambda z_{3})\] The elements of \(\mathbb{P}^{2}\) are equivalence classes that can be written as \([z_{1}\!:\!z_{2}\!:\!z_{3}]\) where at least one of the \(z_{i}\) is non-zero. The algebraic variety \(\mathbb{P}^{2}\) is _smooth_, since we can cover it by three open subsets that are each isomorphic to the complex plane \(\mathbb{C}^{2}\). Namely, \[U_{1} =\{[z_{1}\!:\!z_{2}\!:\!z_{3}]\in\mathbb{P}^{2}\mid z_{1}\neq 0\}\] \[U_{2} =\{[z_{1}\!:\!z_{2}\!:\!z_{3}]\in\mathbb{P}^{2}\mid z_{2}\neq 0\}\] \[U_{3} =\{[z_{1}\!:\!z_{2}\!:\!z_{3}]\in\mathbb{P}^{2}\mid z_{3}\neq 0\}\] To see that \(U_{1}\) is isomorphic to \(\mathbb{C}^{2}\), we note that since \(z_{1}\neq 0\) it can be rescaled to one. Therefore, each point in \(U_{1}\) can be identified with a (unique) point of the form \([1\!:\!\bar{z}_{2}\!:\!\bar{z}_{3}]\); this gives the isomorphism to \(\mathbb{C}^{2}\). Similar arguments show that \(U_{2}\) and \(U_{3}\) are each isomorphic to \(\mathbb{C}^{2}\). More generally, \((N-1)\)-dimensional projective space \(\mathbb{P}^{N-1}\) is smooth, since it can be covered by \(N\) open subsets each isomorphic to \(\mathbb{C}^{N-1}\). By modifying the action of \(\mathbb{C}^{\times}\) on \(\mathbb{C}^{N}\setminus\{\mathbf{0}\}\) we can define more general examples of toric varieties, _weighted projective spaces_, which in general contain singular points. 
For example, we can consider the action of \(\mathbb{C}^{\times}\) on \(\mathbb{C}^{3}\setminus\{\mathbf{0}\}\) defined by \[\lambda\cdot(z_{1},z_{2},z_{3})=(\lambda z_{1},\lambda z_{2},\lambda^{2}z_{3})\] which gives rise to the weighted projective space \(\mathbb{P}(1,1,2)\). Here the entries of the vector \((1,1,2)\) are called the _weights_ of the variety. In order to see that this variety is not smooth, we can consider the same open sets as above, \[U_{1} =\{[z_{1}\!:\!z_{2}\!:\!z_{3}]\in\mathbb{P}^{2}\mid z_{1}\neq 0\}\] \[U_{2} =\{[z_{1}\!:\!z_{2}\!:\!z_{3}]\in\mathbb{P}^{2}\mid z_{2}\neq 0\}\] \[U_{3} =\{[z_{1}\!:\!z_{2}\!:\!z_{3}]\in\mathbb{P}^{2}\mid z_{3}\neq 0\}\] As before, \(U_{1}\) and \(U_{2}\) are each isomorphic to \(\mathbb{C}^{2}\). However, \(U_{3}\) is not. In fact, since \(z_{3}\neq 0\) we can rescale the last entry to one, but the square in the definition of the action implies that there are two ways of doing so: \[\pm z_{3}^{-1/2}\cdot(z_{1},z_{2},z_{3})=(\pm z_{3}^{-1/2}z_{1},\pm z_{3}^{-1/ 2}z_{2},1)\] Therefore, \(U_{3}\cong\mathbb{C}^{2}/\mu_{2}\) where \(\mu_{2}=\{1,-1\}\) is the group of square roots of unity. Note that \(\mathbb{C}^{2}/\mu_{2}\) has a singular point at the origin, which corresponds to the singular point \([0\!:\!0\!:\!1]\) in \(U_{3}\). We say that \(\mathbb{P}(1,1,2)\) has two smooth charts, \(U_{1}\) and \(U_{2}\), and one singular chart \(U_{3}\). This generalises to higher dimensions by considering \(\mathbb{C}^{\times}\) acting on \(\mathbb{C}^{N}\setminus\{\mathbf{0}\}\) by \[\lambda\cdot(z_{1},\ldots,z_{N})=(\lambda^{a_{1}}z_{1},\ldots,\lambda^{a_{N}}z_{ N})\] for some choice of weights \((a_{1},\ldots,a_{N})\in\mathbb{Z}_{>0}^{N}\). The algebraic variety \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is an \((N-1)\)-dimensional \(\mathbb{Q}\)-factorial Fano toric variety of Picard rank one, called a _weighted projective space_[21, 29]. Setting the \(a_{i}\) equal to \(1\) recovers \(\mathbb{P}^{N-1}\). For any two weighted projective spaces \(X=\mathbb{P}(a_{1},\ldots,a_{N})\) and \(Y=\mathbb{P}(b_{1},\ldots,b_{M})\), we can consider their product \(X\times Y\). This arises as a quotient of \(\mathbb{C}^{N+M}\) by an action of \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\), where the first \(\mathbb{C}^{\times}\) acts on the first \(N\) co-ordinates of \(\mathbb{C}^{N+M}\) and the second \(\mathbb{C}^{\times}\) acts on the last \(M\) co-ordinates. The two actions are specified by the weights of each weighted projective space. We can summarise this information in a _weight matrix_ \[\begin{bmatrix}a_{1}&\cdots&a_{N}&0&\cdots&0\\ 0&\cdots&0&b_{1}&\cdots&b_{M}\end{bmatrix}\] This type of construction can be generalised to any action of \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) on \(\mathbb{C}^{N}\) given defined as \[(\lambda,\mu)\cdot(z_{1},\ldots,z_{N})=(\lambda^{a_{1}}\mu^{b_{1}}z_{1},\ldots,\lambda^{a_{N}}\mu^{b_{N}}z_{N})\] and which can be encoded in a weight matrix of the form \[\begin{bmatrix}a_{1}&\cdots&a_{N}\\ b_{1}&\cdots&b_{N}\end{bmatrix}\] Note that in the case of projective spaces and weighted projective spaces we were considering \(\mathbb{C}^{N}\setminus\{\mathbf{0}\}\), excluding the origin because it lies in the closure of every orbit. 
When generalising this concept, we need to exclude more points than just the origin for the quotient to be reasonable; explicitly we consider \(\mathbb{C}^{N}\setminus S\), where \(S=S_{+}\cup S_{-}\) for linear subspaces \[S_{+}=\{(z_{1},\ldots,z_{N})\mid z_{i}=0\text{ if }b_{i}/a_{i}>b/a\}\] \[S_{-}=\{(z_{1},\ldots,z_{N})\mid z_{i}=0\text{ if }b_{i}/a_{i}<b/a\}\] and \(a=\sum_{i=1}^{N}a_{i}\), \(b=\sum_{i=1}^{N}b_{i}\): see [8]. The resulting quotient \(X=(\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times})^{2}\) is an \((N-2)\)-dimensional toric variety. If the linear subspaces \(S_{+}\) and \(S_{-}\) each have dimension at least two then \(X\) has Picard rank two.

### From weight matrices to fans

In §2, a toric variety \(X\) was determined by a matrix \[W=\begin{bmatrix}a_{1}&\cdots&a_{N}\\ b_{1}&\cdots&b_{N}\end{bmatrix} \tag{A.1}\] that, as recalled above, records the weights of an action of \((\mathbb{C}^{\times})^{2}\) on \(\mathbb{C}^{N}\). We will now explain how to recover the fan \(\Sigma(X)\) for the toric variety from this data [17, 24]. Consider the right kernel of the matrix \(W\), regarded as a \(\mathbb{Z}\)-linear map. The kernel is a free submodule of \(\mathbb{Z}^{N}\), of rank \(N-2\), and choosing a basis for this submodule defines an \(N\times(N-2)\) matrix \(M\) such that \(WM=0\). The rows of \(M\) define distinct primitive vectors \(e_{1},\ldots,e_{N}\) in \(\mathbb{Z}^{N-2}\) such that \[a_{1}e_{1}+\cdots+a_{N}e_{N}=0\] \[b_{1}e_{1}+\cdots+b_{N}e_{N}=0\] By construction, the vectors \(e_{1},\ldots,e_{N}\) span the kernel of \(W\) over \(\mathbb{Z}\). In general the construction of a toric variety (or equivalently a fan) from a weight matrix depends also on the choice of a _stability condition_, which is an element \(\omega\) of the column space of \(W\). In our case, however, because \(X\) is Fano there is a canonical choice for \(\omega\) given by \((a,b)\), the sum of the columns of \(W\). Let us denote the \(i\)th column of \(W\) by \(D_{i}\). We set \[\mathcal{A}_{\omega}=\{I\subset\{1,2,\ldots,N\}\mid\omega\in\angle_{I}\}\] where \[\angle_{I}=\left\{\sum_{i\in I}\lambda_{i}D_{i}\mid\lambda_{i}\in\mathbb{R}_{>0}\right\}\] The fan \(\Sigma(X)\) is the collection of cones in \(\mathbb{R}^{N-2}\) given by \[\{\sigma_{I}\mid\bar{I}\in\mathcal{A}_{\omega}\}\] where \[\sigma_{I}=\text{cone}\{e_{i}\mid i\in I\}\] Here \(\bar{I}\) is the complement of \(I\) in \(\{1,2,\ldots,N\}\).

Recall our assumptions on the weight matrix \(W\):

(0) The columns of \(W\) span a strictly convex cone in \(\mathbb{R}^{2}\).
(1) None of the columns are the zero vector.
(2) The sum of the columns is not a multiple of any of them.
(3) The subspaces \(S_{+}\) and \(S_{-}\), defined in (2.2), are both of dimension at least two.

(We number from zero here to match the numbering of the conditions in §3.) Conditions (0) and (1) together guarantee that the fan \(\Sigma(X)\) is complete; that is, its support covers \(\mathbb{R}^{N-2}\). The toric variety \(X\) is therefore compact. Condition (2) ensures that each top-dimensional cone in the fan has \(N-2\) rays; that is, the fan is simplicial. This implies that the toric variety \(X\) is \(\mathbb{Q}\)-factorial. Condition (3) ensures that each of the vectors \(e_{1},\ldots,e_{N}\) generates a one-dimensional cone \(\mathbb{R}_{\geq 0}e_{i}\) in the fan \(\Sigma(X)\). Together with \(\mathbb{Q}\)-factoriality, this implies that the Picard rank of \(X\) is two.
### Checking terminality Each top-dimensional cone \(\sigma\) in \(\Sigma(X)\) is generated over \(\mathbb{R}_{\geq 0}\) by \(N-2\) of the vectors \(e_{1},\ldots,e_{N}\). These generators are contained in a unique \((N-3)\)-dimensional hyperplane \(H\). The cone \(\sigma\) corresponds to a terminal singularity in \(X\) if and only if the only lattice points in \(\sigma\) that lie on or below \(H\) are the generators of \(\sigma\) and the origin [48]. \(X\) has terminal singularities if and only if each top-dimensional cone of \(\Sigma(X)\) corresponds to a terminal singularity. This justifies the assertion, given in SS2, that \(X\) has terminal singularities if and only if the convex polytope \(P=\operatorname{conv}\{e_{1},\ldots,e_{N}\}\) is mostly empty. ### A subtlety with quotient gradings In SS1, in the paragraph 'Why dimension eight?', we noted that the analogue of our dataset in dimension three contains 34 examples. There are 35 \(\mathbb{Q}\)-Fano toric varieties of Picard rank two in dimension three [31], but precisely one of these has a quotient grading and so does not fit into the framework we consider here. The exception is \(X=\mathbb{P}^{1}\times\mathbb{P}^{2}/\mu_{3}\), where \(\mu_{3}\) acts via \((u,v;x,y,z)\mapsto(u,\varepsilon v;x,\varepsilon y,\varepsilon^{2}z)\) and \(\varepsilon\) is a primitive cube root of unity. The quotient grading arises here because the primitive generators for rays of the fan \(\Sigma(X)\) fail to span the ambient lattice over \(\mathbb{Z}\). If we instead regard the primitive generators as living inside the sublattice that they generate, then we recover one of the other 34 terminal examples: \(\mathbb{P}^{1}\times\mathbb{P}^{2}\). The analogue of this phenomenon happens in higher dimensions too, and so we ignore quotient gradings in our methodology. ### Significance of \(\mathbb{Q}\)-Fano varieties As mentioned in SS1, \(\mathbb{Q}\)-Fano varieties are 'atomic pieces' from which more complicated algebraic varieties are made, and so one can think of the classification of \(\mathbb{Q}\)-Fano varieties as building a Periodic Table for geometry. Understanding this classification is a fundamental problem in algebraic geometry, and is the motivation behind a huge amount of research; see e.g. [9, 11, 33, 35] and the references therein. \(\mathbb{Q}\)-Fano varieties also play an important role elsewhere in mathematics, for example in the study of K-stability and the existence of Kahler-Einstein metrics [5]. In theoretical physics, \(\mathbb{Q}\)-Fano varieties provide, through their 'anticanonical sections', the main construction of the Calabi-Yau manifolds which give geometric models of spacetime [10, 25, 47] in Type II string theory. Moreover, terminal singularities - the focus of this paper - are the singularities that appear in the Minimal Model Program [33], and they also occur across mathematics. For example, in F-theory, terminal singularities reflect the presence of localized matter states from wrapped M2-branes which are not charged under any massless gauge potential [3]. Moreover, in the toric context, having only terminal singularities means that the corresponding polytope contains no lattice points other than the origin and the vertices. These are referred to in the combinatorics literature as one-point lattice polytopes, and are important in optimisation problems. 
## Supplementary Material B Further data analysis The neural network classifier described in SS4 is remarkably accurate at determining whether a \(\mathbb{Q}\)-factorial Fano toric variety of Picard rank two and dimension eight is terminal or not. Confusion matrices for the classifier are presented in Figure 5. Because of this high accuracy, we were able to use this classifier to generate a dataset of 100M probably-\(\mathbb{Q}\)-Fano toric varieties of Picard rank two and dimension eight; see SS6. Creating this first glimpse of the \(\mathbb{Q}\)-Fano landscape would have been impractical using conventional methods. Based on the timing data outlined in SSC below, we estimate that generating this dataset using conventional methods would have taken 160 days on our HPC cluster, equivalent to 600 CPU _years_. In contrast, by using the neural network classifier and batch processing we were able to generate this dataset in under 120 CPU _hours_. One striking feature of the landscape of 100M probably-Q-Fano toric varieties, plotted in Figure 3, is the stratification by Fano index. Recall that the Fano index of \(X\) is equal to the greatest common divisor of \(a\) and \(b\), where \((a,b)\) is the sum of the columns of the matrix (A.1). For our dataset, the entries in the matrix (A.1) are bounded between zero and seven, and hence the range of possible Fano indices that can appear in the dataset is bounded. Figure 3 appears to show overlapping clusters of cases, with the Fano index increasing as we move from the bottom of the plot (Fano index one) to the top. **Products of weighted projective space.** To better understand this clustering by Fano index, we consider the simplest \(\mathbb{Q}\)-factorial Fano toric varieties of Picard rank two: products of weighted projective spaces. Recall from SSA that a product of weighted projective spaces \(X=\mathbb{P}(a_{1},\dots,a_{N})\) and \(Y=\mathbb{P}(b_{1},\dots,b_{M})\) is specified by a weight matrix \[\begin{bmatrix}a_{1}&\cdots&a_{N}&0&\cdots&0\\ 0&\cdots&0&b_{1}&\cdots&b_{M}\end{bmatrix}\] This matrix determines a \(\mathbb{Q}\)-factorial Fano toric variety of Picard rank two and dimension \(N+M-2\), denoted \(X\times Y\). The singular points of \(X\times Y\) are determined by the singular points of \(X\) and \(Y\). In particular, \(X\times Y\) is terminal if and only if both \(X\) and \(Y\) are terminal. In general a weighted projective space \(X=\mathbb{P}(a_{1},a_{2},\dots,a_{N})\) may have singular points; these are determined by the weights \((a_{1},a_{2},\dots,a_{N})\). Proposition 2.3 of [32] characterises when the singular points of \(X\) are terminal. Namely, \(X\) is terminal if and only if \[\sum_{i=1}^{N}\{ka_{i}/a\}\in\{2,\dots,N-2\}\] for each \(k\in\{2,\dots,a-2\}\). Here \(a=a_{1}+a_{2}+\dots+a_{N}\), and \(\{x\}\) denotes the fractional part \(x-\lfloor x\rfloor\) of a rational number \(x\). This is the Picard rank one analogue to Proposition 3. We can enumerate all terminal weighted projective spaces in dimensions one to seven, with weights \(1\leq a_{i}\leq 7\), using the characterisation of terminal weighted projective space described above. The number in each dimension is given in Table 2. By taking products, we obtain 8792 distinct \(\mathbb{Q}\)-Fano toric varieties of Picard rank two in dimension eight; these examples are plotted in Figure 6. This supports our observation that the \(\mathbb{Q}\)-Fano varieties fall into large overlapping clusters that are determined by the Fano index. 
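The enumeration behind Table 2 reduces to this fractional-part criterion, which can be transcribed directly. The sketch below uses exact arithmetic via Python fractions; the function name is ours.

```python
from fractions import Fraction

def wps_is_terminal(weights):
    """Criterion recalled above for X = P(a_1, ..., a_N): X is terminal iff
    sum_i { k*a_i / a } lies in {2, ..., N-2} for every k in {2, ..., a-2},
    where a = a_1 + ... + a_N and {x} denotes the fractional part of x."""
    a = sum(weights)
    N = len(weights)
    for k in range(2, a - 1):                                  # k = 2, ..., a-2
        s = sum(Fraction(k * ai % a, a) for ai in weights)     # exact fractional parts
        if not (s.denominator == 1 and 2 <= s <= N - 2):
            return False
    return True

# Sanity checks: P^3 = P(1,1,1,1) is smooth, hence terminal, while P(1,1,2) is not terminal.
assert wps_is_terminal([1, 1, 1, 1])
assert not wps_is_terminal([1, 1, 2])
```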
Note that the products of weighted projective space appear to fall within the upper region of each cluster. **Smooth Fano toric varieties.** Projective space \(\mathbb{P}^{N-1}\) is smooth, and so products of projective spaces are also smooth. More generally, the smooth Fano toric varieties up to dimension eight have been classified [44]. There are 62 smooth Fano toric varieties in dimension eight and of Picard rank two, all Figure 5. Confusion matrices for the classifier trained on 5M samples: (a) is normalised with respect to the true axis; (b) is normalised with respect to the predicted axis. of which have weights bounded by seven when expressed in standard form (2.3). These are plotted in Figure 7, and appear to fall in the upper extreme region within each cluster. **A cluster of high-Fano index examples.** Figure 3 appears to show a cluster of high-Fano-index cases (at the top of the plot) standing apart from the remainder of the data. We now give an explanation for this high-Fano-index cluster. Figure 8 shows the frequency distribution of Fano indices in the dataset. The uptick in frequencies in the histogram in Figure 8 can be explained as follows. Consider how many ways we can write \(N\) as a sum of ten numbers between zero and seven (inclusive, and with possible repeats). This resembles a normal distribution with \(N=35\) the most frequent case. This higher probability is due to our sampling constraints on the entries of the weight matrix: amongst \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline \(d\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \# & 1 & 1 & 7 & 80 & 356 & 972 & 2088 \\ \hline \hline \end{tabular} \end{table} Table 2. The number of terminal weighted projective spaces in dimension \(d\), \(1\leq d\leq 7\), with weights \(a_{i}\) bounded by seven. Figure 6. Q-Fano products of weighted projective space in dimension eight, with weights bounded by seven. (a) Projection to \(\mathbb{R}^{2}\) using the growth coefficients from (6.2). (b) The same as (a), but plotted on top of the dataset of 100M probably-Q-Fano toric varieties, plotted in grey. Figure 7. The smooth Fano toric varieties in dimension eight and of Picard rank two. (a) Projection to \(\mathbb{R}^{2}\) using the growth coefficients from (6.2). (b) The same as (a), but plotted on top of the dataset of 100M probably-Q-Fano toric varieties, plotted in grey. those matrices that have \(a=b\) we have the highest probability of selecting one that has \(a=b=35\). Therefore, we see a misleading accumulation around those Fano indices. In Figure 9 we restrict the dataset to low Fano indices. For each Fano index in the range one through to nine, we plot the convex hull of the resulting point cloud. The overlap between these clusters is clear. ## Supplementary Material C Computational resources In this section we describe the computational resources required by different steps of our analysis. We will refer to a _desktop PC_ and an _HPC cluster_. The desktop PC has an Intel Xeon 5222 quad-core processor, 64GB RAM, and an NVIDIA RTX A2000 12 GB GPU; note however that all CPU jobs on the desktop PC ran single-core. The HPC cluster has Intel Xeon E5-2650 processors with a total of 1400 cores. ### Data generation The datasets bound_7_terminal and bound_7_non_terminal were generated using scripts for the computational algebra system Magma [7], running on the HPC cluster in parallel over 1400 cores for eight days, with 2GB of memory per core. 
Deduplication of the dataset was performed on the desktop PC and took approximately eight hours. Figure 8. Distribution of the Fano index \(\gcd\{a,b\}\) in the dataset of 100M probably-\(\mathbb{Q}\)-Fano toric varieties (note that the vertical axis scale is logged). Figure 9. Convex hulls obtained from the point clouds for probably-\(\mathbb{Q}\)-Fano toric varieties with Fano indices between one and nine, obtained by projecting to \(\mathbb{R}^{2}\) using the growth coefficients from (6.2). #### Hyperparameter tuning This was carried out on the desktop PC, using the GPU. Each experiment ran on average for two minutes, for a total run time of 200 minutes for 100 experiments. #### Model training This was carried out using the desktop PC, using the GPU. Training on 5M balanced samples for 150 epochs took four hours. #### Model evaluation The model evaluation was carried out using the desktop PC, using the GPU. Evaluation took approximately ten minutes. #### Further data generation The dataset terminal_dim8_probable was generated by running Python scripts on the HPC cluster in parallel over 120 cores for one hour, with 16GB of memory per core. Deduplication of the dataset was performed on the desktop PC and took approximately one hour. ## Supplementary Material D Training for weights with a larger bound In SS7 we highlighted that the trained neural network does not perform well out of sample. Therefore, it is natural to ask whether the neural network is approximating an actual general mathematical statement, or if its performance is the result of some 'finite size effect' due to the choice of a particular weight bound (in our case seven). Our intuition here is as follows. Given that the testing and training data are free of noise (they are created through exact mathematical calculation) and the neural network classifier is so accurate, we believe that the classifier is indeed approximating a precise, general mathematical statement. However, the poor out-of-sample performance makes it unclear _what kind of mathematical statement_ the network is picking up. The statement could be about weight matrices with entries of arbitrary size, or could be about weight matrices with small entries (mathematically, this would be a statement about Fano varieties with terminal singularities of bounded index). In the first case the out-of-sample performance drop-off would happen because the network is approximating the true statement in a way that does not generalise to higher weight bounds; this is a common phenomenon when developing and using neural network models. In the second case the out-of-sample performance drop-off would happen because of the underlying mathematical statement that the classifier approximates. To probe this further, we repeated the same experiments as in the main text on a dataset of weight matrices with weights bounded by a larger constant, ten. We generated a new dataset of size 20 million, balanced between terminal and non-terminal examples, where the entries of each weight matrix are bounded by ten. The data generation steps were the same as described in SS3, except that the terminality check was now carried out using the new algorithm discussed in SS5 (and proved correct in SE). We remark that the increased speed of the new algorithm allowed us to generate double the amount of data of the original dataset. We used a fully-connected feed-forward neural network with the same architecture as the original neural network from the paper. This architecture is recalled in Table 3. 
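In PyTorch-style pseudocode the network of Table 3 looks as follows. The input size (the \(2\times 10\) weight matrix flattened to 20 standardised features), the placement of the activations and the single-logit output head are our reading of the setup rather than a verbatim reproduction of the training script; the hidden widths, LeakyReLU slope, momentum, batch size and initial learning rate come from Table 3.

```python
import torch
from torch import nn

# Plausible reconstruction of the classifier recalled in Table 3 (assumptions noted above).
model = nn.Sequential(
    nn.Linear(20, 512), nn.LeakyReLU(0.01),
    nn.Linear(512, 768), nn.LeakyReLU(0.01),
    nn.Linear(768, 512), nn.LeakyReLU(0.01),
    nn.Linear(512, 1),                       # one logit: terminal vs. non-terminal
)

loss_fn = nn.BCEWithLogitsLoss()             # binary cross-entropy on the logit
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.99)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)  # learning-rate reduction on plateaux
```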
Again, the network was trained on the features given by flattening the weight matrices, which where standardised by translating the mean to zero and rescaling the variance to one. It was trained using binary cross-entropy as loss function, stochastic mini-batch gradient descent optimiser and using early-stopping, for a maximum of 150 epochs and with learning rate reduction on plateaux. Training on 5M samples (using 80% for training and 10% for validation) and testing on the remaining data (15M samples) produced an accuracy of 90% - see Figure 10(b) for the loss learning curve. This performance is worse than that achieved for the same training sample size for weight bound seven, potentially indicating that the condition approximated by the network is harder to capture. Training on a larger sample of size 10M (again using 80% for training and 10% for validation) and testing on the remaining data (10M samples) improves the accuracy to 94% - see Figure 10(c) for the loss learning curve. The training and validation accuracies for intermediate training sizes are shown in Figure 10(a). We were able to recover a high accuracy for this new dataset. However, this was only possible by using a larger training sample size, which hints at the increased difficulty of the task. Moreover, Figure 10(a) suggests that increasing the size of the training set further is unlikely to improve the accuracy. Being able to train a high-accuracy neural network for a larger weights bound supports the hypothesis that the neural network is approximating a general mathematical statement but in a way that does not generalise well to higher bounds. However, it is too early to exclude the hypothesis that the network might be capturing a mathematical statement that needs weight matrices with small entries. Similar studies with even higher bounds would add confidence here and, if the network is indeed approximating a statement about weight matrices with small weights, experiments of this type should also be able to deduce what the cut-off bound for the weights is. ## Supplementary Material E Proof of Proposition 3 In this section we prove Proposition 3. This is the main ingredient in the new algorithm to check terminality. Recall from the discussion above that \(X\) determines a convex polytope \(P\) with vertices \(e_{1},\ldots,e_{N}\in\mathbb{Z}^{N-2}\), and that \[a_{1}e_{1}+\cdots+a_{N}e_{N} =0\] \[b_{1}e_{1}+\cdots+b_{N}e_{N} =0\] where the \(a_{i}\) and \(b_{j}\) are entries in the weight matrix (A.1). The same argument applied to the equivalent weight matrix \[\begin{bmatrix}b_{i}/g_{i}&-a_{i}/g_{i}\\ A_{i}&B_{i}\end{bmatrix}\begin{bmatrix}a_{1}&\cdots&a_{N}\\ b_{1}&\cdots&b_{N}\end{bmatrix}\] gives barycentric co-ordinates for the origin and \(e_{i}\) in terms of the remaining vertices of \(\Delta_{i}\): \[\alpha_{i}^{1}e_{1}+\cdots+\alpha_{i}^{i-1}e_{i-1}+\alpha_{i}^{i +1}e_{i+1}+\cdots+\alpha_{i}^{N}e_{N} =0\] \[\beta_{i}^{1}e_{1}+\cdots+\beta_{i}^{i-1}e_{i-1}+\beta_{i}^{i+1}e _{i+1}+\cdots+\beta_{i}^{N}e_{N} =g_{i}e_{i}\] Figure 11. Confusion matrices for the neural network classifier on in-sample and out-of-sample data. In each case a balanced set of \(10\,000\) random examples was tested. Figure 10. (a) Accuracy for different train-test splits; (b) epochs against loss for the network trained on 5M samples; (c) epochs against loss for the network trained on 10M samples. 
\begin{table} \begin{tabular}{c c c c} \hline \hline **Hyperparameter** & **Value** & **Hyperparameter** & **Value** \\ \hline Layers & \((512,768,512)\) & Momentum & 0.99 \\ Batch size & \(128\) & LeakyRelu slope & 0.01 \\ Initial learning rate & 0.01 & & \\ \hline \hline \end{tabular} \end{table} Table 3. Final network architecture and configuration. Fix \(i\in\{1,2,\ldots,N\}\). Define \(u\colon\mathbb{Q}^{N-1}\to\mathbb{Q}\) by \(u(x_{1},\ldots,x_{N-1})=x_{1}+\cdots+x_{N-1}\), and let \(\Psi\) denote the lattice \[\{v\in\mathcal{Z}\mid u(v)=1\}\] where \(\mathcal{Z}\) is the span over \(\mathbb{Z}\) of the standard basis \(E_{1},\ldots,E_{N-1}\) for \(\mathbb{Q}^{N-1}\) together with \[\frac{1}{f_{i}}(\alpha_{1}^{2},\ldots,\hat{a}_{i}^{i},\ldots,\alpha_{1}^{N}) \text{and} \frac{1}{g_{i}}(\beta_{1}^{2},\ldots,\hat{\beta}_{i}^{i},\ldots, \beta_{1}^{N})\] Here the \(\hat{\ }\) indicates that the \(i\)th entry in each vector is omitted. We define \(\phi\colon\Psi\to\mathbb{Z}^{N-2}\) to be the \(\mathbb{Z}\)-linear map that sends \(E_{1},\ldots,E_{N-1}\) to \(e_{1},\ldots,\hat{e}_{i},\ldots,e_{N}\) and \[\phi\left(\frac{1}{f_{i}}(\alpha_{1}^{2},\ldots,\hat{a}_{i}^{i},\ldots,\alpha_ {1}^{N})\right)=0 \phi\left(\frac{1}{g_{i}}(\beta_{1}^{2},\ldots,\hat{\beta}_{i}^{i}, \ldots,\beta_{1}^{N})\right)=e_{i}\] It is easy to see that \(\phi\) is well-defined and bijective. Consider the higher-dimensional parallelepiped \(\Gamma\) in \(\mathcal{Z}\) generated by the standard basis of \(\mathbb{Z}^{N-1}\). We note that each lattice point of \(\mathcal{Z}\) in \(\Gamma\) can represented as a linear combination \[\frac{k}{f_{i}}(\alpha_{1}^{2},\ldots,\hat{a}_{i}^{i},\ldots,\alpha_{1}^{N})+ \frac{1}{g_{i}}(\beta_{1}^{2},\ldots,\hat{\beta}_{i}^{i},\ldots,\beta_{1}^{N})\] (E.1) for some \(k\in\{0,1,\ldots,f_{i}-1\}\) and \(l\in\{0,1,\ldots,g_{i}-1\}\); this representation is unique if and only if the vertices of \(\Delta_{i}\) span \(\mathbb{Z}^{N-2}\). Hence, \(\Delta_{i}\) is almost empty if and only if whenever \[\sum_{j\neq i}\left\{k\frac{\alpha_{i}^{j}}{f_{i}}+l\frac{\beta_{i}^{j}}{g_{i} }\right\}=1\] (E.2) we have that the linear combination in (E.1) represents the origin. But this is the case if and only if \[\left\{k\frac{\alpha_{i}^{j}}{f_{i}}+l\frac{\beta_{i}^{j}}{g_{i}}\right\}= \left\{\frac{\alpha_{i}^{j}}{\alpha_{i}}\right\}\] for all \(j\), since \((k,l)=(\frac{f_{i}}{\alpha_{i}},0)\) represents the origin by construction. Note that the sum (E.2) could include \(j=i\), since that term is an integer and its fractional part will not contribute to the sum.
2309.11080
Visual Question Answering in the Medical Domain
Medical visual question answering (Med-VQA) is a machine learning task that aims to create a system that can answer natural language questions based on given medical images. Although there has been rapid progress on the general VQA task, less progress has been made on Med-VQA due to the lack of large-scale annotated datasets. In this paper, we present domain-specific pre-training strategies, including a novel contrastive learning pretraining method, to mitigate the problem of small datasets for the Med-VQA task. We find that the model benefits from components that use fewer parameters. We also evaluate and discuss the model's visual reasoning using evidence verification techniques. Our proposed model obtained an accuracy of 60% on the VQA-Med 2019 test set, giving comparable results to other state-of-the-art Med-VQA models.
Louisa Canepa, Sonit Singh, Arcot Sowmya
2023-09-20T06:06:10Z
http://arxiv.org/abs/2309.11080v1
# Visual Question Answering in the Medical Domain ###### Abstract Medical visual question answering (Med-VQA) is a machine learning task that aims to create a system that can answer natural language questions based on given medical images. Although there has been rapid progress on the general VQA task, less progress has been made on Med-VQA due to the lack of large-scale annotated datasets. In this paper, we present domain-specific pre-training strategies, including a novel contrastive learning pretraining method, to mitigate the problem of small datasets for the Med-VQA task. We find that the model benefits from components that use fewer parameters. We also evaluate and discuss the model's visual reasoning using evidence verification techniques. Our proposed model obtained an accuracy of 60% on the VQA-Med 2019 test set, giving comparable results to other state-of-the-art Med-VQA models. Computer Vision, Natural Language Processing, Medical Visual Question Answering, Convolutional Neural Network, Recurrent Neural Network, Transformers, Computed Tomography, Magnetic Resonance Imaging ## I Introduction With recent advancements in the field of Computer Vision (CV) and Natural Language Processing (NLP), researchers have started looking at _cross-modal_ problems that require deeper understanding of both images and text. Of the various tasks at the intersection of CV and NLP, Visual Question Answering (VQA) [1] involves taking as input a natural language question and an image, and producing the correct natural language answer. The goal is to design Artificial Intelligence (AI) systems that can form a holistic understanding of images, and are able to effectively express that understanding in natural language. Inspired by the general domain VQA, Medical Visual Question Answering (Med-VQA) takes as input a natural language question and a medical image, and produces a plausible correct natural language answer as the output. The Med-VQA task has gained popularity more recently after the introduction of the ImageCLEF Med-VQA 2018 challenge [2]. However, the field is still at a nascent stage and there is still much progress to be made before Med-VQA systems are ready to be deployed in real clinical settings. The field of medicine has seen great technological advancements with increases in the amount and accessibility of data. With the federal regulations around electronic health records (EHR), patients have now more access to their medical data than ever before. Given that patients can independently check their health records outside of their official consultations, there is an increased need for an accessible way to have their questions answered correctly. Patients can book a consultation with a doctor to obtain answers to their questions, but may be hesitant due to time and money constraints. On the other hand, patients have the option to rely on search engines and conversational agents such as Chat-GPT [3]. However, there is an increased risk of getting misleading or incorrect information. To overcome these challenges, there is a need for a system that helps patients to better manage and understand their medical data without oversight from a healthcare professional. A Med-VQA system could fulfill this need, particularly as its inclusion of natural language question input makes it suited for answering unguided natural language questions, like those that patients may have about their medical images. 
Most existing studies on Med-VQA rely on Convolutional Neural Networks (CNNs) for images and Recurrent Neural Networks (RNNs) for questions and answers as their building blocks. However, less attention has been paid to why a particular component was chosen over another, and why the base model was built in a particular way. There exists a trend in component choice towards more advanced modules without much clarity on why. Hence, it is important to investigate and understand the benefits (or lack) of advanced modules that are being adopted. Datasets for Med-VQA are small, presenting a challenge for a model attempting to learn patterns from them. Therefore, pretraining plays a crucial role in improving model performance. However, medical images and text can be very different in nature from general domain images and text. We hypothesise that pretraining on the medical domain instead could provide performance benefits, since the model is fine-tuned with more domain-specific knowledge that is more directly applicable to the task. Apart from Med-VQA model performance comparison, it is important to have evidence verification, involving visualisation techniques to understand why a particular model gives a particular output for a particular input, and it is crucial to unbox deep learning models in the medical field. Evidence verification is particularly important for a verified Med-VQA system, as it could be making diagnostic judgements about patient's medical images. In this paper, we make the following contributions: 1. We systematically compare various components forming the image encoder, question encoder and answer encoder. This helps us to obtain a well-optimised Med-VQA model. 2. We evaluate the importance of domain-specific knowledge for the Med-VQA task. We not only used pretraining for images but also used pretraining for questions and answers, thereby forcing the Med-VQA model to utilise medical domain knowledge compared to pre trained components in the general domain. 3. We use evidence verification techniques to evaluate results. Specifically, we use Gradient Weighted Class Activation Mapping (GradCAM), highlighting regions of the image that are important for predicting a particular answer. The rest of the paper is organised as follows: in section 2 related work is briefly reviewed; in section 3 details about the methodology are provided; in section 4 experimental results are presented. In section 5 the results and ablation studies are discussed. Finally, section 6 concludes this paper and recommends future directions. ## II Related Work _Deep Learning_ (DL), a sub-field of Machine Learning (ML), makes use of neural networks, which are complex models consisting of interconnected units ("neurons") that aim to mimic the human brain in order to learn complex tasks [4]. DL gives systems the ability to extract relevant features automatically from the raw data and to be trained in an end-to-end manner. Over the past few decades, researchers have sought to push boundaries within individual fields such as CV and NLP. However, with the rise of DL, researchers found that DL has the advantage of being generalisable to a variety of tasks within a variety of fields. Therefore, researchers started focussing on problems that lie at the intersection of various fields such as CV and NLP, such as _Med-VQA_. The most common approach for a Med-VQA system is the _joint-embedding framework_. 
This framework consists of four components: an _image encoder_ to extract visual features from an image, a _question encoder_ to extract textual features from the question, a _feature fusion algorithm_ to combine visual and textual features in a meaningful way and an _answer generation module_ to predict an answer in the form of natural language. CNNs are specialised neural networks suitable for processing images or videos, and are typically used as an image encoder. Of the various CNNs, VGG-Net [5] and ResNet [6] have been widely used in Med-VQA systems. Other options include Inception-Net or even an ensemble of different CNNs. For e.g., Gong _et al._, noticed that the VQA-RAD dataset consists of CT, MRI and X-rays. They trained three separate ResNet models, one on each modality, and then selected the best network for a given input. However, deeper networks showed an overfitting problem due to the increase in model complexity and lack of training data. To compensate for the relatively smaller dataset sizes in Med-VQA, the use of _transfer learning_ or _pretrained CNNs_ has been widely adopted for Med-VQA systems. However, general domain images are very different to medical images in terms of their features. On question answering, Recurrent Neural Networks (RNNs), which are specialised neural networks suitable for processing sequential data such as text and speech, have been widely used. Although vanilla RNNs can successfully remember information with short-term dependencies, they struggle to remember information from further in the past (long-range dependency). To overcome this issue, Long Short-Term Memory (LSTM) [7] and Gated Recurrent Unit (GRU) [8] networks have been proposed. To further improve the long-range dependency problem, the attention mechanism was introduced and is widely used in conjunction with LSTM or GRU [9]. The _Transformer_[10], an encoder-decoder architecture that is entirely based on the attention mechanism, has recently been the preferred choice for text modelling. Some studies (e.g. [11]) chose to discard complex language encoding and make use of light language encoding, such as template matching. This strategy became a popular choice for the VQA-Med 2020 and VQA-Med 2021 challenges [12]. Researchers found that questions in these datasets have a repetitive format and belong to only a single category (abnormality), and therefore require only a simple language encoder. For e.g., Liao _et al._ used Skeleton-based Sentence Mapping [11], creating a limited number of templates based on similar questions. However, this method has a clear limitation in that it requires a limited number of question types in order to work well. Bidirectional Encoder Representations for Transformers (BERT) [13] based on the Transformer model has also been widely applied for question encoding in Med-VQA. The choice of the fusion algorithm can extend from a simple pooling mechanism to complex attention mechanisms. Common fusion algorithms include simple concatenation, element-wise multiplication or element-wise sum of image and question features. However, studies showed that simple fusion algorithms are not expressive enough to capture complex associations between image and text, and the outer product of vectors should be used instead. Calculating the outer product is computationally very expensive, therefore methods such as Multimodal Compact Bilinear Pooling have been proposed. 
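In code, these simple fusion strategies differ only in how the two feature vectors are combined; the snippet below uses illustrative dimensions.

```python
import torch

v_img = torch.randn(1, 1024)   # image feature vector (illustrative size)
v_q = torch.randn(1, 1024)     # question feature vector (illustrative size)

fused_concat = torch.cat([v_img, v_q], dim=-1)        # simple concatenation
fused_mul = v_img * v_q                                # element-wise multiplication
fused_sum = v_img + v_q                                # element-wise sum
fused_outer = torch.einsum('bi,bj->bij', v_img, v_q)   # full outer product: expressive but quadratic in the
                                                       # feature size, motivating compact approximations such as MCB
```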
Attention mechanisms are much more complex than simple fusion, but can provide better performance as they aim to more meaningfully relate the image and question vectors. One commonly used attention mechanism is the Stacked Attention Network (SAN) [14]. SAN has multiple attention layers that interact with the image features multiple times. Each time, the network generates an attention distribution over the image, and adds this to the query vector to generate a "refined" query vector. This allows the network to infer the answer progressively by gradually filtering out irrelevant regions. The answer generation component aims to output the correct answer, given the fused image and question features. This can be implemented via classification or generation. In the classification method, models use a softmax layer that outputs one of a finite number of possible answers, whereas the generation method involves using a language decoder to output the correct answer, such as an RNN or Transformer decoder. The classification method is much simpler than the generation method, and works particularly well when the questions and answers are closed-ended, repetitive and therefore limited in number. However, it is clearly more rigid than the generation method, which can become an issue when questions and answers are more complex. ## III Methodology ### _Dataset_ We use the VQA-Med 2019 challenge dataset [15] as it is the largest dataset currently available for the Med-VQA task and has diversity in terms of question categories and image modalities. The dataset consists of 4,200 images from the MedPix database, with 15,992 corresponding question and answer pairs. In Figure 1 a random set of examples from the dataset is shown. There are four possible question categories--modality, plane, abnormality and organ system. The questions in the VQA-Med dataset are generated artificially and then verified by humans. This allows data to be created faster and more cost-effectively, however it also introduces limitations as questions are likely to have less variation in structure. In Figure 2 a graphical representation of the frequency of each possible word for the first four words of all questions in the dataset is provided. The innermost ring shows the distribution of the first word, the second ring splits this further by the next word and so on. Sections of the chart in white indicate the next words that make up less than 2% of questions. This chart shows that questions are rigid in structure, with many questions beginning the same way and appearing predominantly close-ended. Most questions begin with "is this..." or "which plane...", which implies only one correct answer. This is further evident by examining the number of words in answers. More than 50% of answers consist of only one word, and more than 82% of answers have between one and three words. The shortness of answers indicates that there is not much opportunity for them to be worded differently. Based on this analysis, we conclude that the best answer generation strategy is likely to be classification rather than generation, since generation is more suited for open-ended questions. ### _Model development_ We use a joint-embedding framework as the structure of our model, and test the performance of various components. An overview of the model structure as well as components tested can be seen in Figure 3. 
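Schematically, the joint-embedding pipeline we instantiate can be written as follows; the encoder, fusion and classifier components are placeholders to be filled in with the concrete choices described next, and the hidden sizes are illustrative rather than our exact configuration.

```python
import torch
from torch import nn

class JointEmbeddingVQA(nn.Module):
    """Schematic joint-embedding pipeline; the encoders and the fusion function are the
    swappable components of Figure 3, and the hidden sizes here are illustrative."""

    def __init__(self, image_encoder, question_encoder, fuse, fused_dim, num_answers):
        super().__init__()
        self.image_encoder = image_encoder        # e.g. a CNN producing a 1-D image feature
        self.question_encoder = question_encoder  # e.g. an RNN or Transformer producing a 1-D question feature
        self.fuse = fuse                          # e.g. concatenation or an attention-based fusion
        self.classifier = nn.Sequential(          # answer generation treated as classification
            nn.Linear(fused_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_answers),
        )

    def forward(self, image, question):
        v = self.image_encoder(image)
        q = self.question_encoder(question)
        return self.classifier(self.fuse(v, q))
```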
The baseline model was constructed with simpler modules that are commonly used for Med-VQA, even though they may not be the most modern or advanced techniques available. This was done for the preliminary model in order to use methods that are well-tested, to provide a benchmark against which we can measure future experiments. The image is passed through a VGG-16 network that is pretrained on ImageNet to generate a 1-dimensional encoding of the image. As discussed in Section II, VGG-16 is a CNN that is quite straightforward in structure, with thirteen convolutional layers, five max-pooling layers and three dense layers. The question is first tokenised using pretrained word embeddings (specifically, we use BioWordVec embeddings [16]), which encodes the question as a numerical vector. This is then passed through an LSTM network to generate a 1-dimensional encoding of the questions. These vectors are then concatenated, and this final feature vector is passed through two fully connected layers to generate the final output class. To improve performance over the baseline, we considered other components that could be used for the image encoder, question encoder and fusion algorithms in the model. These substitutions are summarised in Table I. ResNet is a natural substitution to make for the image encoder component, as it is a newer, more advanced network than VGG-Net, that seeks to solve the vanishing gradient problem and allow for much deeper networks. ResNet models achieve a higher accuracy than VGG-Net on the ImageNet \begin{table} \begin{tabular}{|c||c|c|} \hline **Module** & **Baseline** & **Alternative Component** \\ \hline \hline **Image Encoder** & VGG-16 & ResNet-50, ResNet-152 \\ \hline **Question Encoder** & LSTM & BERT Transformer \\ \hline **Feature Fusion** & Concatenation & Stacked Attention Network \\ \hline \end{tabular} \end{table} TABLE I: Component substitutions to be tested Fig. 1: Example data from the VQA-Med 2019 dataset in each of the four categories. Fig. 2: Distribution of the first four words in the VQA-Med 2019 dataset. classification task [6]. For the question encoder, we tested the performance of a BERT transformer compared to the LSTM network that was used for the baseline model. The transformer discards the complex recurrent structure that was used by the LSTM and other similar NLP models, instead using only attention mechanisms to process text. BERT is pretrained on English Wikipedia and BookCorpus for the language modelling task. In the baseline model, the image and question feature vectors were fused by concatenating the two vectors. This is one of the simplest methods, relying on the fully connected layers in the final answer generation component to form meaningful connections between the question and image feature vectors. However, as discussed before, using an attention mechanism could be a better way to fuse image and question feature vectors. We applied Stacked Attention Network (SAN), which uses multiple attention layers to progressively refine attention distributions over the image, in order to focus on parts of the image that are more relevant to the question. This could provide benefits to the model's understanding of the image as it relates to the given question. To implement SAN, we discarded the fully connected layers from the image encoder to maintain positional information in the encoding, and pass computed image and question features to the SAN, rather than just concatenating them. 
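A single attention layer of the SAN can be sketched as follows; this is a simplified implementation in the spirit of [14], with layer sizes and initialisation details that are illustrative rather than the exact configuration we trained.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SANLayer(nn.Module):
    """One stacked-attention layer: attend over image regions with the current query,
    then refine the query with the attended visual summary."""

    def __init__(self, d, k=512):
        super().__init__()
        self.w_img = nn.Linear(d, k, bias=False)
        self.w_qry = nn.Linear(d, k)
        self.w_att = nn.Linear(k, 1)

    def forward(self, v, u):
        # v: (batch, regions, d) spatial image features; u: (batch, d) query vector
        h = torch.tanh(self.w_img(v) + self.w_qry(u).unsqueeze(1))   # (batch, regions, k)
        p = F.softmax(self.w_att(h).squeeze(-1), dim=-1)             # attention distribution over regions
        v_tilde = torch.bmm(p.unsqueeze(1), v).squeeze(1)            # (batch, d) attended visual summary
        return v_tilde + u                                           # refined query vector

class SAN(nn.Module):
    def __init__(self, d, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(SANLayer(d) for _ in range(num_layers))

    def forward(self, v, u):
        for layer in self.layers:
            u = layer(v, u)                                          # progressively refine the query
        return u
```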
We tested SAN with 2, 3, or 4 attention layers, achieving best results using 3 attention layers. ### _Incorporating medical domain knowledge_ One issue that has been integral to applying deep learning models on the Med-VQA task is the lack of large-scale annotated datasets. Therefore, it is important to consider techniques that could help mitigate this issue. Both the image encoder (VGG-Net or ResNet) and language encoder (BERT) can be pretrained on general domain images or text, respectively. However, although this pretraining is invaluable, medical images and medical text in Med-VQA are undoubtedly different from the general domain. Incorporating medical domain knowledge as part of pretraining could help the model to learn representations that are more directly applicable to downstream tasks, leading to improved performance. We implemented pretraining for both the question encoder and the image encoder to evaluate the benefits of using medical-specific pretraining for the Med-VQA task. For the image encoder, given that there is no large-scale annotated dataset available, we used _self-supervised pretraining_ which involves training the network on unlabelled data through tasks that allow it to learn a generalised representation of images. Of the various methods available for self-supervised pretraining, we applied the _contrastive learning_ method, similar to the method implemented by SimCLR [17], although modified for application to the VGG-Net encoder and using data augmentations that are more applicable to the medical image dataset. We used the Radiology Objects in COttext (ROCO) dataset [18], which consists of over 81,000 radiology images in a wide variety of imaging modalities. ROCO was chosen as it is large, diverse and has images similar to the Med-VQA task. The image encoder was pretrained for 80 epochs, with a batch size of 128. The original BERT transformer uses general domain BERT pretraining, which pretrains the model on two NLP tasks (language modelling and next-sentence prediction). The pretraining corpus consists of BooksCorpus (an approximately 800 million word collection of freely available novels) and English Wikipedia (approximately 2.5 billion words). However, since medical language can be very different from general domain language, using a significant amount of specialised terminology, it is thought that giving the transformer a better understanding of medical language could improve its performance. To investigate this, we used a BERT model called _BioBERT_[19], pretrained on the same tasks as BERT, but using the PubMed corpus, which comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books (approximately 4.5 billion words). This makes BioBERT particularly suited to biomedical NLP applications. ### _Evidence Verification_ In our experiments, we used classification accuracy to evaluate the performance of the model. However, quantitative evaluation cannot evaluate the quality of model reasoning. Evidence verification involves generating output that gives insight into why the model generated a particular answer, and it is crucial for deep learning models in the medical field, particularly when they may be making diagnostic judgements. We used Gradient Weighted Class Activation Mapping (GradCAM) [20] for evidence verification. GradCAM uses Fig. 3: An overview of the structure of our model and the various components tested. 
the gradients of a target output class flowing into the final convolutional layer of a network to produce a localisation map that highlights regions of the image that were important to predicting that class. In this way, GradCAM can produce a heat map over the input image showing the areas the model paid the most attention to in order to produce its answer. To implement GradCAM, we used a Python package1, with some modification in order to handle the two multi-modal inputs that are required for our network. We then qualitatively examined the heat map outputs. Footnote 1: [https://github.com/jacobgil/pytorch-grad-cam](https://github.com/jacobgil/pytorch-grad-cam) ## IV Results All experiments were implemented in Python using the PyTorch library [21]. To ensure robustness of results, we performed five-fold cross-validation on the dataset. This was done by randomly splitting the dataset into training set (80%) and test set (20%), and repeating the process five times. The split was seeded with the same number for all versions of the model to enable fair comparison. The model was trained for 50 epochs, optimising loss with the Adam optimiser [22] with a learning rate of \(1e-4\) and a batch size of 64. The input images were normalised and data augmentation was performed during training to increase dataset size and minimise model overfitting. We changed image brightness and contrast by 5% with a probability of 0.4, translation and rotation by 5 units with probability of 0.5, adding Gaussian blur with probability of 0.5 and adding Gaussian noise with a probability of 0.4. ### _Quantitative results_ The baseline model achieved an accuracy of \(0.56\pm 0.01\). We can see that there is still some overfitting happening, despite the data augmentation. Qualitatively examining the model's outputs, there is a marked difference in the model's ability for different question categories. The model has reasonable accuracy on plane and organ system questions (78% and 74% respectively), and a lower accuracy on modality questions (64%). However the model's performance on abnormality questions is extremely poor, at only 6%. This is as expected, since in this dataset most abnormality classes have very few examples, making it very difficult for the model to learn to recognise them. A limitation of the accuracy metric is that it cannot account for cases where the model was technically correct but gave the answer in different wording. For example, in a question asking "What is abnormal in the CT scan?" the model answers "Pulmonary embolism", where the ground truth answer is "PE", an acronym that stands for pulmonary embolism. These types of issues can only be fixed by someone with professional medical knowledge updating the dataset to make these answers consistent. However, even accounting for these cases, the model's performance on abnormality questions is still extremely low compared to other question categories. In Table II the overall test accuracy achieved by each of the model variations as detailed in Section III are shown. Overall best performance is achieved using a VGG-16 image encoder, BERT Transformer question encoder and concatenation as the feature fusion strategy, achieving a test accuracy of \(0.60\pm 0.01\). For the image encoder, we find that deeper and more complex networks such as ResNet-50 or ResNet-152 do not provide better results and in fact demonstrate a higher degree of overfitting. This clearly highlights the issue of the small dataset size of Med-VQA. 
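Before turning to the question encoder, we recap the shared optimisation and augmentation setup used for all of the variants above. The sketch expresses it with torchvision transforms; the specific transform classes, the interpretation of "5 units" of translation and rotation, and the noise amplitude are our reading of the setup rather than the exact implementation.

```python
import torch
from torchvision import transforms

# Shared augmentation pipeline (normalisation of the input images is applied separately).
augment = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(brightness=0.05, contrast=0.05)], p=0.4),
    transforms.RandomApply([transforms.RandomAffine(degrees=5, translate=(0.05, 0.05))], p=0.5),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5)], p=0.5),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x) if torch.rand(()) < 0.4 else x),
])

def make_optimizer(model):
    # Adam with learning rate 1e-4; each variant is trained for 50 epochs with batch size 64.
    return torch.optim.Adam(model.parameters(), lr=1e-4)
```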
For question encoding, the BERT transformer gave higher test performance compared to the LSTM network. In Table III the test accuracy is shown by question category for the baseline model compared to the model with highest test accuracy, showing in more detail exactly what aspects the latter model improves on. There is a significant increase in accuracy in the modality category (+12%) and a modest increase in the abnormality category (+2%), which allows the +BERT model to achieve an increased overall accuracy. By examining the model's answers, we find that this improvement is due to the model's improved understanding of the question. For example, the model no longer misclassifies question categories. In the baseline model, approximately 8% of abnormality questions were misclassified as other categories by the model, whereas the BERT model now correctly identifies all abnormality questions (e.g. Figure 3(a)), providing a performance benefit. Secondly, the BERT model is better able to understand the required answer type, and always answers questions in a way that makes sense. For example, in Figure 3(b), the baseline model incorrectly identifies this question as requiring a yes/no answer, whereas the BERT model is able to give a reasonable answer to the question. Similarly, as shown in Figure 3(c), the baseline model would sometimes incorrectly handle questions requiring the model to select from the given options, whereas the BERT model always chooses from the provided options for these questions. These improvements show that a better understanding of the question can lead to higher accuracy overall. Our results in Table II show that S \begin{table} \begin{tabular}{l c} \hline \hline **Model Variation** & **Test Accuracy** \\ \hline VGG-16 + LSTM + Concatenation & 0.56 \\ ResNet-50 + LSTM + Concatenation & 0.54 \\ ResNet-152 + LSTM + Concatenation & 0.53 \\ **VGG-16 + BERT + Concatenation** & **0.60** \\ VGG-16 + BERT + SAN & 0.58 \\ **VGG-16 + BioBERT + Concatenation** & **0.60** \\ **Pretrained VGG-16 + BERT + Concatenation** & **0.60** \\ \hline \hline \end{tabular} \end{table} TABLE II: Test accuracy achieved by each model variation. \begin{table} \begin{tabular}{l c c} \hline \hline **Model Variation** & Baseline & +BERT \\ \hline Modality & 0.64 & 0.76 \\ Plane & 0.78 & 0.77 \\ Organ & 0.74 & 0.74 \\ Abnormality & 0.06 & 0.08 \\ \hline Overall & 0.56 & **0.60** \\ \hline \hline \end{tabular} \end{table} TABLE III: Accuracy of the baseline versus BERT model per category type. method did not improve results compared to the concatenation method. This is because there is simply not enough data for the model to learn a useful refined attention distribution, and the added complexity of generating an attention distribution can actually cause the model to obscure some useful information. By qualitatively examining the model's generated attention distributions (examples of which are shown in Figure 5, with original images on the left, and attention distributions on the right), we find that the issue is that the questions in the dataset need either a holistic view of the image (for example most questions in the modality, plane and organ system categories), or a very strong focus on a small part of the image (for example most abnormality questions). For the former type, we find that the model either attends to all parts of the image more or less equally (e.g. Figure 4(a)), making attention redundant, or the attention distribution obscures relevant parts of the image (e.g. Figure 4(b)). 
For questions requiring good localisation, the model instead tends to produce attention distributions of the type shown in Figure 4(c), where the model does not attend to any part of the image in particular. This is due to insufficient examples of particular abnormalities in the dataset, therefore the model is not able to learn the small areas of interest in these images, and instead does not focus on any part, leading to performance loss. The results in Table II show that neither domain-specific pretraining method that we tested provided a performance benefit. Our testing of BioBERT as the question encoder showed that medical domain-specific pretraining does not appear to be important to the question encoder for the VQA-Med 2019 dataset. This is likely because of the lack of medical language used in the questions of this dataset. As previously noted, the questions in this dataset are rigid in structure, likely generated from a limited number of templates. Almost all medical language in the text comes from the answers, rather than the questions. Although there are some exceptions (e.g. "angiogenic", "gastrointestinal", "ultrasound"), these medical terms are never crucial to understanding the question as a whole. Therefore, using a question encoder that has been pretrained on the medical domain does not appear to help the model better understand the questions in this dataset. Self-supervised pretraining for the image encoder also did not appear to improve the final performance of the model. This is likely due to the distinct types of visual reasoning that are required for different questions. Some questions, such as organ system identification, require the model to be able to distinguish between large-scale differences in images. However, other questions, such as abnormality questions, require the model to be able to recognise very small-scale differences between images. Our results indicate that contrastive learning, at least in the generalised form that was tested here, is not beneficial for improving performance on the Med-VQA task, likely because it is not able to aid the model in performing both types of image differentiation. ### _Qualitative results_ In order to evaluate the quality of the model's reasoning when producing answers, we used GradCAM to produce visualisations of the model's attention over the input image, and Figure 6 shows examples of these outputs. We found that for the most part, the model does well at predicting the answer from reasonable parts of the image. In Figure 5(a), we see that when asked to determine whether the image is T1 weighted, the model focusses on the light band in the centre of the image. Since this band is dark in T1 images, this is a good reason for the model to make its prediction. In Figure 5(b), the model is asked to determine the plane of the MRI, and focusses in particular on the back of the neck and the eye area to produce the correct "sagittal" answer. These two features would only occur in skull images taken from the sagittal plane, so this is again good reasoning. There are however some cases where the model's visual reasoning could be improved. This can either mean that it gives the correct answer for the wrong reason, or it gives the wrong answer because it focusses on irrelevant areas of the image. GradCAM allows us to identify both of these cases, examples of which are shown in Figure 5(c) and Figure 5(d). 
In Figure 5(c), the model correctly identifies the image to be a gastrointestinal image, however the GradCAM output shows that this decision was based predominantly on the presence of the text in the bottom left corner of the image. This indicates that the model has identified this as a shortcut in the data, and has not really learned the correct features. In Figure 5(d) the model provides the wrong answer due to wrong reasoning. The model does give an abnormality as the answer, showing Fig. 4: Example responses from LSTM and BERT variations of model. that it has identified the correct question category, but has not selected the correct abnormality. The heat map reveals this to be likely a guess, based on the fact that the model was not focussing on the bone at all. Finally, analysis of GradCAM outputs can also identify answers that were almost correct. For example, in Figure 5(e), the model is asked for the abnormality, and answers with a disorder that is similar but not the same as the ground truth. Comparing the heat map with the diagram in Figure 7 shows that the model was focussing on the correct parts of the image to be able to distinguish between the two disorders. Its failure to predict the correct answer indicates that there was not enough training data for the model to accurately distinguish between the two disorders, but also shows that the model was doing the correct reasoning on the image. With more data, the model would certainly be able to improve its accuracy on abnormality images. ## V Discussion In this work, we quantitatively compared the performance of various components that are commonly used for the Med-VQA task, allowing us to evaluate their appropriateness for this task against each other. Our results show that, in general, less complex models with fewer parameters tend to perform better on this dataset. We find that models that achieve better performance in the general domain do not necessarily achieve better performance on the Med-VQA task. The Med-VQA field deals with a low-data regime, and therefore benefits from simpler models that are less likely to overfit. We conducted domain-specific pretraining in both the question and image encoders. For the question encoder, we found that domain knowledge is not useful for the VQA-Med 2019 dataset, as the questions use limited medical terminology. However, it is possible that a more complex dataset could benefit from question encoder pretraining. For the image encoder, we developed and tested a contrastive learning pretraining method. Our results indicate that this pretraining did not benefit our model, and we propose that Med-VQA is not suited to such a generalised contrastive learning method, owing to the different kinds of visual reasoning required for different questions-some requiring an understanding of large-scale differences, and others requiring an understanding of very small-scale differences. We used GradCAM to output heat maps corresponding to the visual attention of our model, allowing us to better evaluate our results by gaining insights into the model's Fig. 5: Examples of attention distribution output by SAN fusion method. Fig. 6: Example output from GradCAM. Fig. 7: Spondylosis vs. spondylolisthesis [23]. visual reasoning. We find that this provides great benefits to our model's interpretability, allowing us to gain a deeper understanding of the strengths and shortcomings of the model, beyond the quantitative accuracy metric. 
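The core of the heat-map computation is small once forward and backward hooks are attached to the last convolutional layer of the image encoder. The stand-alone sketch below illustrates it; our actual experiments used the pytorch-grad-cam package with modifications, and the two-argument model signature here is an assumption about how the multimodal inputs are passed.

```python
import torch
import torch.nn.functional as F

def gradcam_heatmap(model, conv_layer, image, question, class_idx):
    """Minimal GradCAM sketch for a two-input VQA model.
    `conv_layer` is the last convolutional layer of the image encoder."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["a"] = output                          # feature maps (1, C, H, W)

    def bwd_hook(_, grad_in, grad_out):
        gradients["g"] = grad_out[0]                        # gradient of the score w.r.t. the feature maps

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image, question)
        model.zero_grad()
        logits[0, class_idx].backward()                     # gradient of the chosen answer's score
    finally:
        h1.remove(); h2.remove()

    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1))          # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],   # upsample to the input resolution
                        mode="bilinear", align_corners=False)
    cam = cam - cam.min()
    return (cam / (cam.max() + 1e-8)).squeeze()                    # normalised heat map in [0, 1]
```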
One of the biggest challenges inherent to the Med-VQA field is the lack of data. Datasets are very small, but consist of a very wide variety of different possible answers, presenting a barrier to achieving very high accuracy on the task. In this work we investigated self-supervised contrastive learning as a domain-specific pretraining method to mitigate this issue, but found that it did not aid model performance. However, other domain-specific pretraining methods could still be investigated that may help this problem. Further, pretraining alone will not be able to fully overcome this issue. In future, focus should be on generating significantly larger datasets. Future work could involve investigation into automated dataset generation methods for this task, in order to generate bigger and more diverse datasets for training. Better evidence verification techniques and evaluation are also crucial to further development in this area. For medical diagnosis, it is essential to be able to verify why the system answered the way it did, to ensure that the answer is supported by the right evidence. Although we have found GradCAM to give good insight into the visual reasoning of the model, future work could, for example, explore the model's attention on the question features to better evaluate how the model interprets the question. ## VI Conclusions In this work, we conducted an investigation of some existing and new techniques used in the Med-VQA task. We systematically evaluated some common Med-VQA components, and found that generally, simpler and shallower models benefit this task due to the small dataset size. We investigated the importance of domain knowledge in the question encoder and found that textual domain knowledge is not beneficial for this dataset. We developed and tested a contrastive learning pretraining method in the image encoder, and found that the contrastive learning method is not suited to the varied types of visual reasoning required for this task. Our final model achieved a 60% accuracy on the VQA-Med 2019 dataset. This matches current state of the art models in non-ensembled versions (e.g. [24, 25]), suggesting that ensembling our model could provide further performance benefit. We also evaluated our model's results and reasoning qualitatively using GradCAM, finding that as a whole our model is able to show good judgement in making decisions. ## Acknowledgement The authors would like to thank organisers of the ImageCLEF VQA-Med challenge for their time and effort in curation of the MED-VQA dataset, and sharing it for research purposes. ## Declaration of competing interest The authors declare no potential conflicts of interest.
2304.00171
Practical Conformer: Optimizing size, speed and flops of Conformer for on-Device and cloud ASR
Conformer models maintain a large number of internal states, the vast majority of which are associated with self-attention layers. With limited memory bandwidth, reading these from memory at each inference step can slow down inference. In this paper, we design an optimized conformer that is small enough to meet on-device restrictions and has fast inference on TPUs. We explore various ideas to improve the execution speed, including replacing lower conformer blocks with convolution-only blocks, strategically downsizing the architecture, and utilizing an RNNAttention-Performer. Our optimized conformer can be readily incorporated into a cascaded-encoder setting, allowing a second-pass decoder to operate on its output and improve the accuracy whenever more resources are available. Altogether, we find that these optimizations can reduce latency by a factor of 6.8x, and come at a reasonable trade-off in quality. With the cascaded second-pass, we show that the recognition accuracy is completely recoverable. Thus, our proposed encoder can double as a strong standalone encoder in on device, and as the first part of a high-performance ASR pipeline.
Rami Botros, Anmol Gulati, Tara N. Sainath, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu
2023-03-31T23:30:48Z
http://arxiv.org/abs/2304.00171v1
# Practical conformer: optimizing size, speed and flops ###### Abstract Conformer models maintain a large number of internal states, the vast majority of which are associated with self-attention layers. With limited memory bandwidth, reading these from memory at each inference step can slow down inference. In this paper, we design an optimized conformer that is small enough to meet on-device restrictions and has fast inference on TPUs. We explore various ideas to improve the execution speed, including replacing lower conformer blocks with convolution-only blocks, strategically downsizing the architecture, and utilizing an RNNAttention-Performer. Our optimized conformer can be readily incorporated into a cascaded-encoder [1] setting, allowing a second-pass decoder to operate on its output and improve the accuracy whenever more resources are available. Altogether, we find that these optimizations can reduce latency by a factor of 6.8x, and come at a reasonable trade-off in quality. With the cascaded second-pass, we show that the recognition accuracy is completely recoverable. Thus, our proposed encoder can double as a strong standalone encoder in on device, and as the first part of a high-performance ASR pipeline. Rami Botros\({}^{*}\), Anmol Gulati\({}^{*}\), Tara N. Sainath, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu Google LLC, USA {ramibotros, anmolgulati, tsainath, kchoro, rpang, strohman,weiranwang, jiahuiyu}@google.com end-to-end ASR, rmat, conformer ## 1 Introduction End-to-end (E2E) ASR models, which combine acoustic, pronunciation and language models from conventional systems [2] into one neural network, have become an active research area in the past few years [3, 4, 5, 6, 7, 8]. Since they are a fraction of the size of conventional models, their inference speed is often much faster [3, 4, 9, 10], which makes them attractive for various live applications. There is considerable interest to further improve inference speed of E2E models, and to better utilize large cores on Tensor Processing Unit (TPU) devices in particular. Some works have replaced the LSTM encoder and decoder of these models with parallelizable networks. For example, [11] looks at improving speed of the E2E decoder by replacing a large LSTM with a simple embedding decoder. On the encoder size, transformer [12, 13] and conformer [14] architectures, which have no recurrent connections, enable batching across multiple frames. While specialized hardware, such as on-device edge TPUs, significantly speed up computation on a single utterance, cloud TPUs, which process requests from numerous users, have additional bandwidth constraints with conformers [15]. A major issue with the conformer encoder is that the number of internal states to maintain is much larger than in the LSTM case. Most of these correspond to the _key_ and _value_ tensors in self-attention. While TPUs certainly help with parallelized computation of attention, inference is often still slowed by the memory-bandwidth cost of repeatedly loading such states [16]. For many streaming applications, in order to display the words as soon as the model can output them, it is often preferable to forgo batching across frames. Instead, the encoder is given just a few input frames at a time. This forces frequent reads of the state vectors, which exacerbates the memory-bottleneck issue and undermines the goal of fast outputs. 
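The size of this per-step state can be estimated with simple arithmetic. The helper below follows the counting convention used in Section 2, namely one cached vector per layer (per frame of left context for self-attention); counting keys and values separately would double the conformer figure.

```python
def lstm_states(num_layers, dim):
    # One state vector of size `dim` per layer must be read back at every step.
    return num_layers * dim

def conformer_attention_states(num_layers, left_context, dim):
    # Every layer keeps `left_context` cached frames of size `dim` for self-attention.
    return num_layers * left_context * dim

print(lstm_states(8, 640))                        # 5,120
print(conformer_attention_states(12, 23, 512))    # 141,312 (almost 30x larger)
```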
In practice, our benchmarks have shown that switching from LSTM to conformer encoder increases the per-step inference latency by 10x. In this work, we seek to design a conformer that can be used as a streaming, causal encoder for different TPU environments, such as cloud-based TPUs, as well as on-device edge TPUs. To qualify as low-cost for both settings simultaneously, the design needs to meet a compounded set of conditions. Accordingly, we look for architectures that meet realistic criteria with respect to: limited model size, TPU-latency and number of floating-point operations (flops). Specifically, we seek a solution where the cloud-TPU latency is below 5ms, with flops below 100M and a model size below 50M. We include a restriction on flops, since it has been suggested that they can be predictive of energy consumption [17, 18] -- something that our earlier experiments have confirmed. Within these bounds, we try to obtain the best possible accuracy. Moreover, we seek a solution where the output of our optimized conformer should successfully serve two distinct types of downstream networks: (1) An RNN-T decoder directly, or (2) a second (cascaded) encoder whenever the computational resources and/or latency requirements can permit it. The first works much closer to the word-level modality, and can typically have a very different architecture from the second. Hence, it is reasonable to expect both downstream networks to require different types of information and representations from their inputs. Thus, achieving strong performance inside as well as outside the cascade setting can be seen as a multi-task target for our optimized encoder, and can become more challenging as we limit its capacity by making it smaller. To achieve our goals, we first look at replacing the lowest blocks of conformer with convolution-only blocks which do not suffer from large-state issues. In addition, we strategically downsize some of our model's dimensions to reduce size and computation. Finally, we explore the use of performer [19] as a way to improve conformer speed by avoiding explicit materialization of the attention tensor. There are many methods that improve the computational efficiency of self-attention, as summarized in [20, 21]. As with [22], we opt for performer layers [19] inside the conformer, which can be used as a drop-in replacement for self-attention and capture long-context dependencies. Unlike performer, emformer from [20] chunks the utterance into segments and parallelizes computation within them. Due to the importance of immediate textual output for our application, we do not compare against such techniques. Another example for speeding up conformer specifically is [23], which progressively downsamples the signal along the time dimension. As with our research, the work targets the first few conformer blocks, which precede the downsampling and tend to be the most expensive. For that, the authors use so-called grouped attention, which reduces attention complexity by reshaping its input, stacking neighboring frames together into the depth dimension. In our case, we find that the self-attention layers can be removed from the earliest blocks altogether instead. For higher blocks in our model, we replace self-attention with performer layers, which lowers the cost from quadratic to linear in the length of the input sequence. Another direction in the literature works on reducing overall model size of the network. 
Dynamic sparsity [24] is one example, but its sparse operations are not supported on all TPUs, so we do not consider it here. As an alternative, motivated by works such as [25], we explore statically removing some connections from our fully-connected layers. Our early experiments, which we omit here for brevity, showed that this works well with our 100M-parameter models, but causes substantial quality degradation for smaller ones. Distillation with RNN-T from a large to small model has also been investigated [26]. This often requires a large teacher model to be trained independently. Recently, in-place distillation [27] addresses this by jointly training a full-context teacher, while distilling it into a smaller streaming student, all as part of a single model. We explore this as part of our cascaded encoder setting, distilling the output of a non-causal 2nd-pass to a smaller causal 1st-pass. We investigate the proposed ideas on a large-scale Voice Search task. In total, with our three proposed changes, we are able to compress our original 120M parameter model by around 2.1x in size and 2.7x in flops, with a 6.8x speedup on cloud TPU. While these optimizations give a relative WER degradation of 16%, we show that if the environment permits more relaxed constraints, cascading a 2nd-pass encoder can fully recover the accuracy without requiring any change in our optimized first-pass design. Hence, our small conformer can be trained to do two jobs simultaneously: Produce a fast output in a limited-resource setting, and potentially pass a useful representation to a larger second-pass in a high-resource setting. ## 2 Modeling ### Baseline Conformer Our baseline conformer encoder [14] consists of 12 conformer blocks, shown in Figure 1. Each block comprises a stack of 5 modules. First, a feed-forward module projects the input features to a larger dimension by a factor \(FFM\), followed by a non-linear activation, then another linear layer to project the features back to their original dimensions. Next, a convolution module aggregates information from neighboring context to capture relative-offset-based local interactions. Then, a self-attention module allows the model to look back \(L\) previous frames, and converts this into a fixed-length vector, capturing more global patterns. Afterwards, another feed-forward module operates on the output of the self-attention module. Finally, a layenorm module helps improve quality [3]. [16] describes a source of slowdown for the self-attention module. Specifically, incremental inference (when parallelization is not possible) is often slow due to the memory-bandwidth cost of repeatedly loading in large key and value tensors, which we will call _states_. For example, an 8-layer LSTM with 640 dimensions/layer has roughly 5,120 states per frame. In comparison, a conformer encoder that has 12 layers, 23 frames of left context per layer and is 512 dimensions/layer has roughly 141,312 states, almost 30x as much. To quantitatively benchmark the latency issue with conformer, Table 1 shows the WER and average per-frame TPU latency of the LSTM [4] and conformer encoders with the parameters described above. While the two models are roughly equal in size, conformer has much larger latency and flops. In the rest of this section, we describe techniques used to bring down these costs. ### Removing self-attention layers First, we hypothesize that low-level features learned by the first few conformer blocks can be captured by simpler layers. 
Thus, to reduce computation and size, we remove self-attention layers from the lowest blocks, relying just on the convolution and feed-forward modules in Figure 1. This frees some space and compute resources to make other layers deeper and wider, as discussed in the next section. ### Strategic Resizing Given the approximate constraints discussed in Section 1, specifically with TPU-latency less than around 5ms and flops below 100M, we carefully chose the widths and depths of our layers to get the highest accuracy from a 50M-parameter encoder. While conforming to our size requirement, we focus on tuning 3 hyperparameters of our model's architecture. The FF-module expansion factor (FFM), the number of convolution-only blocks at the bottom (NCB), and the total number of conformer blocks (TB). We will show an ablation study varying these parameters in Section 4. ### Implicit Attention with RNNAttention-Performers Another way to improve computational efficiency and reduce memory footprint of the conformer is to apply a recently introduced class of linear attention techniques that avoid explicitly materializing the attention tensor, effectively replacing quadratic space and time complexity of the attention module by a more manageable linear one, and leveraging low-rank decomposition of the attention tensor. This leads to the class of models called _performers_[19]. We use performer-ReLU variants from [19] inside conformers and add an additional trainable affine transformation for queries/keys, as proposed in [28]. We call the resulting conformer the _RNNAttention-Performer_. The RNN-prefix in the name is motivated by the causal prefix-sum computations that unidirectional performer conducts in order to emulate causal attention, see Fig. 2. Our strategy is to start with regular conformer training and then replace self-attention with performer layers halfway through the training. We ran detailed ablation studies over different performer variants in the bidirectional cascaded encoder on \(\mathrm{LibriSpeech}\) data that led to the choice of the ReLU kernel. Tested attention kernels included in particular those of the form: \(\mathrm{K}_{f}(\mathbf{x},\mathbf{y})=f(\mathbf{x})^{\top}f(\mathbf{y})\) where \(f:\mathbb{R}\rightarrow\mathbb{R}\) is applied elementwise to kernel inputs. We benchmarked the following functions \(f\): \(\mathrm{ReLU}\), \(\mathrm{SoftPlus}\), \(\mathrm{exp}\), \(\mathrm{ELU}\), \(f(z)=z^{4}\), and settled on \(\mathrm{ReLU}\) as an efficient kernel. \begin{table} \begin{tabular}{l c c c c} \hline Encoder & WER & Size & TPU-latency & Flops \\ \hline LSTM & 6.8 & 110M & 2.4 ms & 198M \\ Conformer & 6.5 & 120M & **21.8 ms** & **247M** \\ \hline \end{tabular} \end{table} Table 1: WER and Latency of 1st-pass encoders Figure 1: Illustration of conformer block [14]. ### Usage as a First-Pass in the Cascaded Encoder Meeting our size, latency and flops constraints leads to quality degradation for our optimized conformer compared to our starting baseline. In cases where the environment allows for an additional increase in flops or latency, we explore cascading a series of non-causal conformer layers on top of our encoder and running another beam search. This model, known as Cascaded Encoder [1], is shown in Figure 3. One motivation is that the 2nd-pass can make up for any quality degradation introduced by the small, optimized 1st-pass. Note that the 2nd pass operates exclusively on the output of the 1st pass, without any further input from the acoustic signal. 
Thus, the high-latency application reutilizes the computation done by the 1st pass for the streaming setting, which contributes to overall efficiency. We also note that our optimized conformer is plugged in as a 1st-pass encoder in the cascade setting without any change in its design. ## 3 Experimental Settings ### Datasets As discussed in [29], all E2E models are trained on multidomain audio-text pairs [30]. All domains are anonymized and hand-transcribed, except for YouTube where the transcription is done in a semi-supervised fashion [31]. To further increase data diversity, multi-condition training (MTR) [32], random data down-sampling to 8kHz [33] and SpecAug [34] are also used. Noisy data is generated at signal-to-noise ratios (SNR) from 0 to 30 dB, with an average SNR of 12 dB, and with T60 reverberation times ranging from 0 to 900 ms, averaging 500 ms. Noise segments are sampled from YouTube and daily life noisy environmental recordings. Both 8 kHz and 16 kHz versions of the data are generated, each with equal probability, to make the model robust to varying sample rates. The _Search_ test set has around 12K Voice Search utterances with an average length of 5.5 seconds. They are anonymized, hand-transcribed, and are representative of Google's Voice Search traffic. ### Modeling All models are trained on a 128D log-mel feature frontend with a 16-D one-hot domain-id vector appended to it [30]. Following [14], the 1st-pass base conformer model uses 512D Conformer layers in the encoder. Causal convolution and left-context attention layers, with a lookback of 23 frames, are used for the Conformer layer to strictly restrict the model to use no future inputs. 8-headed self-attention is used and the convolution kernel size is 15. The encoder consists of 12 conformer blocks. Ablation studies to vary its hyperparameters will be presented in Section 4. The RNN-T decoder comprises a prediction network and a joint network with a single 640-dim FF layer. The embedding prediction network [11] uses an embedding dimension of 320 and has 9M parameters. Our E2E models work with 4,096 word pieces [35]. The 2nd-pass cascaded encoder has 5 additional non-causal conformer layers that process a total of 900 milliseconds of future audio. Both causal and non-causal encoders feed into a shared decoder. ## 4 Results ### Speedups Before Downsizing -- RNNAP and Conv-Only First, Table 2 shows our TPU-speed gains for our large 120M-parameter model, once by using the RNNAttention-Performer and once by converting the lowest 4 blocks to conv-only. Each method improves TPU-latency immensely with little effect on WER. Yet they bring little improvement to size and flops. Downsizing of the model is still needed for edge devices, to save space and energy, and is discussed in Section 4.2. \begin{table} \begin{tabular}{l c c c c} \hline \hline Encoder & WER & TPU-latency (ms) & Size (M) & Flops (M) \\ \hline Baseline & 6.5 & 21.8 & 120 & 248 \\ RNNAP & 6.7 & 7.3 & 120 & 221 \\ First4Conv & 6.6 & 9.5 & 113 & 223 \\ \hline \hline \end{tabular} \end{table} Table 2: Baseline vs. RNNAttention-Performer vs. Conv-Only Figure 3: The cascaded encoder for joint modeling of two passes. Figure 2: Prefix-sum algorithm for unidirectional (causal) performer. Attention normalization is omitted. The algorithm tracks the prefix-sum: A matrix obtained by summing the outer products of kernel features corresponding to keys with value-vectors. 
At each given iteration of the prefix-sum algorithm, a kernel feature vector corresponding to a query is multiplied by the most recent prefix-sum (obtained by summing all outer-products corresponding to preceding tokens) to obtain a new embedding. The features are obtained by applying ReLU elementwise to affinely-transformed queries/keys. ### Small Conformer Model Ablation Table 3 shows an ablation study where we vary FFM, NCB and TB, as described in Section 2.3. B0 is the baseline conformer encoder from Table 2. Our goal here is to hold the model size around 50M parameters and flops around 100M, and to see how varying various hyperparameters affects both WER and TPU latency. We have sorted the above table by NCB. Notice that in E0, if we reduce TB from 12 to 3, and increase FFM, we take a large degradation in WER. In E1, adding just 1 NCB, and adjusting other parameters accordingly to keep within the size limit, improves WER. Further increasing TB in E2 and E3 improves WER again. If we continue to increase NCB to 3 at E5, WER reaches 7.7. However, increasing NCB or FFM further (\(\mathrm{E6-E8}\)) results in either too small TB or FFM, which causes a quality degradation. Our best performing system, E5, reduces the TPU latency over B0 by a factor of 4x, flops by a factor of 2.4x, and overall model size by 2.1x. It does come at a quality degradation (from 6.5% to 7.7%), which is addressable in less restricted environments; see Section 4.4. Apart from strategic resizing, we also explored other techniques from the literature. First, we tried in-place distillation [27] with the cascaded encoder in Figure 3. Specifically, we distilled a 2nd-pass non-causal output (i.e., the teacher) to a smaller 1st-pass causal model. However, we did not see any WER improvements. Degradation was also observed when we emulated sparsity by removing connections from fully-connected layers [25]. ### Small Model with RNNAttention-Performer Next, we put the optimizations with the smaller model and RNNAttention-Performer together. Table 4 shows our best model versus the conformer baseline. Our design choices, such as relatively wide FF layers, conv-only layers and RNNAttention-Performer, mean that even though we only save 2.7X on flops, we save 6.8X on the TPU latency when using the highly parallelized computation. ### Cascaded Encoder For applications where latency/flops/size constraints can be relaxed, we explore adding additional 2nd-pass non-causal layers on top of our optimized conformer, reusing the same design as part of a higher-quality model. Table 5 shows the WER for both passes of the cascaded model. We compare our setup with a baseline [9] that has a large first-pass and a small second-pass. Both settings achieve the same second-pass WER; therefore, the first-pass encoder does not need to be large in order to serve the 2nd-pass with an informative encoded representation. Thus, for a given total size of a cascade model, the bulk of parameters can be allocated to the second pass, allowing the first-pass encoder to also be deployed in a small on-device model. Note that the first-pass WER stays at 7.7%, the same as when it was trained alone (Table 4). Thus, even after the joint training, it can still function as a standalone encoder for edge devices. ## 5 Conclusions In this work, we designed a causal, streaming conformer that meets practical size, latency and flop criteria, while still being informative enough for a non-causal large second-pass. 
We addressed the issue of slow inference of self-attention layers by showing that the first 3 can be removed and the rest can be replaced with performer layers. A careful search was conducted to choose hyperparameters that balance speed and size. Overall, the size is halved and inference runs 6x faster on TPU. Though this leads to a degradation in WER, if additional compute time is permissible, a cascaded 2nd-pass encoder brings the quality back up to the baseline level.
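As a concrete illustration of the RNNAttention-Performer mechanism described in Section 2.4 and Figure 2, the following minimal numpy sketch is ours and not the authors' implementation: it shows causal linear attention with a ReLU feature map and a running prefix-sum, while the trainable affine query/key transform and the attention normalization are omitted and all shapes are illustrative.

```python
import numpy as np

def relu_features(x):
    # Kernel feature map f(.) applied elementwise; K_f(x, y) = f(x)^T f(y).
    return np.maximum(x, 0.0)

def causal_performer_attention(Q, K, V):
    """Causal (unidirectional) linear attention via a running prefix-sum.

    Q, K: [T, d_k] queries/keys, V: [T, d_v] values.
    Cost is O(T * d_k * d_v) instead of the O(T^2 * d) of explicit attention.
    Normalization is omitted for brevity, as in Figure 2.
    """
    T, d_k = Q.shape
    d_v = V.shape[1]
    prefix = np.zeros((d_k, d_v))   # running sum of outer products f(k_t) v_t^T
    out = np.zeros((T, d_v))
    for t in range(T):
        prefix += np.outer(relu_features(K[t]), V[t])
        out[t] = relu_features(Q[t]) @ prefix
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 16, 8
    Q, K, V = rng.normal(size=(T, d)), rng.normal(size=(T, d)), rng.normal(size=(T, d))
    print(causal_performer_attention(Q, K, V).shape)   # (16, 8)
```

Because the prefix-sum is a fixed (d_k x d_v) matrix regardless of how many frames have been seen, the per-step state that must be kept and re-read no longer grows with the attention context, which is the property the optimized encoder relies on.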
2309.09803
Performance analysis of table-top single-pulse terahertz detection up to 1.1 MHz
Slow data acquisition in terahertz time-domain spectroscopy (THz-TDS) has hindered the technique's ability to resolve "fast" dynamics occurring on the microsecond timescale. This timescale, arguably too slow to be accessed via standard optical pump-probe techniques relying on ultrafast sources, hosts a range of phenomena that has been left unexplored due to a lack of proper real-time monitoring techniques. In this work, chirped-pulse spectral encoding, a photonic time-stretch technique, and high-speed electronics are used to demonstrate time-resolved THz detection at a rate up to 1.1 MHz. This configuration relies on a table-top source and a setup able to resolve every THz transient that it can generate. We investigate the performance of this system at different acquisition rates in terms of experimental noise, dynamic range, and signal-to-noise ratio. Our results pave the way towards single-pulse THz-TDS at arbitrarily fast rates to monitor complex dynamics in real-time.
Nicolas Couture, Markus Lippl, Wei Cui, Angela Gamouras, Nicolas Y. Joly, Jean-Michel Ménard
2023-09-18T14:19:20Z
http://arxiv.org/abs/2309.09803v1
# Performance analysis of table-top single-pulse terahertz detection up to 1.1 MHz ###### Abstract Slow data acquisition in terahertz time-domain spectroscopy (THz-TDS) has hindered the technique's ability to resolve "fast" dynamics occurring on the microsecond timescale. This timescale, arguably too slow to be accessed via standard optical pump-probe techniques relying on ultrafast sources, hosts a range of phenomena that has been left unexplored due to a lack of proper real-time monitoring techniques. In this work, chirped-pulse spectral encoding, a photonic time-stretch technique, and high-speed electronics are used to demonstrate time-resolved THz detection at a rate up to 1.1 MHz. This configuration relies on a table-top source and a setup able to resolve every THz transient that it can generate. We investigate the performance of this system at different acquisition rates in terms of experimental noise, dynamic range, and signal-to-noise ratio. Our results pave the way towards single-pulse THz-TDS at arbitrarily fast rates to monitor complex dynamics in real-time. Terahertz time-domain spectroscopy (THz-TDS) relies on resolving the oscillating electric field of a THz pulse in order to access its frequency components via Fourier transform. This technique provides full amplitude and phase information of the light passing through a medium, allowing the complex dielectric function of the medium to be extracted without the need for Kramers-Kronig relations, a powerful capability compared to other spectroscopy techniques monitoring only the transmitted optical power. THz-TDS is often performed with a detection technique involving the mechanical scanning of an ultrashort near-infrared (NIR) pulse across the THz waveform as the two interact in a nonlinear crystal. Because this technique intrinsically relies on the acquisition of multiple data points to reconstruct the full THz waveform, it requires a sample under study to exhibit the same characteristics every time it is probed by the THz wave. Thus, standard pump-probe is not a viable technique in evaluating samples whose properties evolve chaotically or experience irreversible changes. Many successful studies have tackled this issue by enabling single-shot THz detection, eliminating the need for a mechanical delay line to retrieve the time-domain THz waveform. These techniques, however, require multiple scans to achieve a high signal-to-noise ratio (SNR). These detection schemes can rely on echelon mirrors [1], chirped-pulse spectral encoding [2], or spectral interferometry [3], and have enabled kHz detection rates [4]. In fact, the combination of chirped-pulse spectral encoding and a photonic time-stretch technique [5; 6], where the repetition rate of the NIR source used for spectral encoding sets the acquisition rate, has resulted in MHz detection rates [7; 8; 9; 10; 11]. However, these MHz rates experiments were achieved with high-energy THz pulses from large synchrotron facilities to achieve a satisfactory SNR. Nevertheless, this approach has also permitted table-top THz-TDS at a rate of 50 kHz using a single ultrafast source for THz generation and detection, resolving pulse-to-pulse microsecond carrier dynamics in a semiconductor [12]. Reaching faster THz-TDS rates with table-top sources would allow complex dynamics at sub-microsecond timescales to be recorded and significantly expedite the data acquisition process in experiments such as THz two-dimensional spectroscopy [13]. 
To establish the suitability of a new THz-TDS scheme for such applications, the dynamic range and SNR must be thoroughly investigated. In this work, we use a single-pulse detection technique employing chirped-pulse spectral encoding and a photonic time-stretch technique to resolve THz waveforms up to a rate of 1.1 MHz, relying only on a single ultrafast source. The dynamic range and SNR of each measurement is studied to highlight the feasibility of the scheme for spectroscopy purposes. The promising results we present here lay the foundation for table-top THz-TDS at high acquisition rates as a real-time monitoring tool. For these experiments, an amplified ultrafast source centered at a wavelength of 1030 nm delivers 180 fs pulses for THz generation and detection. The laser is operated at its maximum average power of 6 W and its repetition rate is modified via software between 1 kHz to 1.1 MHz, altering only the output peak intensity while chirp and pulse duration remain effectively unaffected. The majority of the output power (90%) is used for THz generation via optical rectification in a lithium niobate crystal with the tilted-pulse-front technique to ensure an efficient generation process and relatively high THz electric fields [14]. The rest of the beam is set to a constant \(\sim\)10 nJ pulse energy and launched into a 2 m-long polarization-maintaining fiber (PMF, OZ Optics PMF-980-6/125-0.25-L) to achieve a chirped NIR supercontinuum (SC) spanning over \(\sim\)100 nm with a time duration of 6 ps (FWHM). A chirped SC with these specifications allows for frequencies up to 1.6 THz to be detected through chirped-pulse spectral encoding, imprinting the THz time-domain waveform onto the chirped NIR spectrum through nonlinear effects [2; 12]. This is achieved by overlapping the resulting THz pulse and chirped SC in a 2 mm-thick 110-oriented gallium phosphide (GaP) crystal. The NIR pulse containing the THz information is transmitted through a quarter-wave plate and linear po larizer to optimize detection sensitivity while maintaining the phase information of the THz pulse [15]. Finally, the encoded NIR pulse is launched into a 2 km-long single-mode fiber (SMF, Corning H1060 flex) to achieve photonic time-stretch, dispersing the pulse duration from a few picoseconds to tens of nanoseconds, which can then be sampled with a high-speed photodiode (12 GHz bandwidth, Newport 1544-B) and oscilloscope (8 GHz bandwidth, Tektronix MSO 64B). A diagram of the experimental configuration is shown in Fig. 1. With this technique, the THz detection rate can be arbitrarily high and is determined solely by the repetition rate of the ultrafast source providing NIR pulse energies of at least a few \(\mu\)J. To retrieve the THz waveform through the photonic time-stretch technique, the signal is recorded on the oscilloscope with and without the THz pulse impinging on the GaP crystal, as shown in Fig. 2a (red line and dashed black line, respectively). Subtracting the square root of each measurement yields the THz waveform in the time-stretch domain [12]. The recovered waveform is presented in Fig. 2b when the delay between the chirped SC and THz pulse is varied by increments of 500 fs. By imprinting the THz on a different portion of the NIR spectrum, the time-axis of the oscilloscope can be calibrated to retrieve the picosecond information of the THz pulse. 
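A minimal numpy sketch of this extraction and time-axis calibration step follows; it is ours and purely illustrative, the synthetic traces and helper names are assumptions, and the stretch factor is treated as a known constant obtained from the delay scan discussed next.

```python
import numpy as np

def extract_thz_waveform(modulated, unmodulated):
    # Subtract the square roots of the two oscilloscope traces to isolate the
    # THz-induced modulation of the time-stretched supercontinuum (traces >= 0).
    return np.sqrt(modulated) - np.sqrt(unmodulated)

def to_picoseconds(t_scope_ns, stretch_factor=1138.0):
    # Map the time-stretched (oscilloscope) axis back to the original THz time
    # axis: 1 ns on the scope corresponds to 1/stretch_factor ns, i.e. ~0.88 ps.
    return t_scope_ns / stretch_factor * 1e3   # ns -> ps

if __name__ == "__main__":
    t_ns = np.linspace(0.0, 20.0, 2000)                    # illustrative scope window
    unmod = 1.0 + 0.5 * np.exp(-((t_ns - 10) / 5) ** 2)    # fake SC envelope
    mod = unmod * (1.0 + 0.02 * np.sin(2 * np.pi * (t_ns - 10) / 4)
                   * np.exp(-((t_ns - 10) / 2) ** 2))      # fake THz modulation
    thz = extract_thz_waveform(mod, unmod)
    t_ps = to_picoseconds(t_ns)
    print(t_ps.max(), thz.max())
```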
The linear relationship between the THz peak measured on the oscilloscope and relative delay allows us to extract the time-stretch factor of 1138 and confirms that higher-order dispersion in the 2 km-long SMF is negligible [5]. The grey highlighted areas in Fig. 2 correspond to the portion of the spectrum where the THz waveforms are subject to deformations, and that we have therefore deemed suboptimal for spectral encoding. This spectral region is sensitive to intensity and fiber-coupling fluctuations whereas the rest of the spectrum has inherently higher stability, as displayed by the relative noise of the SC signal shown in Fig. 2c. Delaying the pulses such that the THz waveform is imprinted on the most stable parts of the NIR spectrum is a valid method of performing these experiments as the THz information is only contained within \(\sim\)4 ns and the time-stretched spectrum spans \(>\)20 ns. The noise recorded in these most stable regions is solely limited by electronic noise from the oscilloscope and pulse-to-pulse laser power fluctuations. To use the entire SC spectrum for spectral encoding, balanced detection techniques adapted to single-pulse detection, such as diversity electro-optic sampling (EOS) [11], can be used to reduce pulse-to-pulse fluctuations. Figure 1: **Experimental setup.** A Yb:KGW amplifier with 6 W average power and tunable repetition rate is employed for THz generation and detection. Most of the optical power is used for THz generation with a tilted-pulse-front technique in a lithium niobate (LN) crystal wedge. The remainder is launched into a 2 m-long polarization-maintaining fiber (PMF) to generate a chirped supercontinuum (SC) with 100 nm bandwidth and 6 ps time duration (FWHM). The THz pulse and the chirped SC are overlapped in a 2 mm-thick gallium phosphide (GaP) crystal to achieve chirped-pulse spectral encoding, imprinting the THz waveform onto the chirped NIR spectrum through the Pockels effect. After polarization filtering with a quarter-wave plate (QWP) and linear polarizer (P), photonic time-stretch is realized by injecting the THz-modulated SC into a 2 km-long single-mode fiber (SMF) and detecting it with a fast photodiode (12 GHz) and oscilloscope (8 GHz). Figure 2: **Time-axis calibration.** a) Unmodulated (dashed black line) and THz-modulated (red line) time-stretched signals measured with the fast photodiode and oscilloscope. To isolate the THz waveform, the square root of the unmodulated signal is subtracted from the square root of the THz-modulated signal. b) The extracted THz waveforms as the relative delay between the THz and chirped SC is shifted in increments of 500 fs, varying the frequencies within the SC that the THz transient is imprinted onto. The waveforms are stacked vertically for clarity. The shaded grey area indicates noisy parts of the spectrum and is quantified by c) the relative noise of the SC, which we define as the standard deviation of the SC divided by the mean SC signal. Figure 3a displays the extracted time-domain THz waveform after performing the time-axis calibration for laser output pulse energies of 120, 20, and 5.5 \(\mu\)J, corresponding to laser repetition rates of 50 kHz, 300 kHz, and 1.1 MHz, respectively. The field strength and pulse energy of the THz pulses, in kV/cm and pJ, respectively, are extracted with EOS and not the single-pulse detection configuration [16]. For clarity, the data in Fig. 
3b collected with 20 \(\mu\)J and 5.5 \(\mu\)J NIR pulse energies are multiplied by factors of 3 and 4, respectively. The shaded areas surrounding the colored lines in Fig. 3a represent the standard deviation measured over 10k pulses. Although the signal at 5.5 \(\mu\)J (1.1 MHz) is weaker, this result marks, to our knowledge, the fastest table-top time-resolved THz detection rate to date. The THz waveform is measured for several pulse energies to form the line plotted in Fig. 3b, where the peak THz amplitude and corresponding pulse energy are shown as a function of the laser output pulse energy. The linear relationship between the NIR pulse energy and the detected THz amplitude validates the linearity of the polarization filtering scheme in Fig. 1 [12]. For each measurement, the pulse energy injected into the PMF is kept fixed at \(\sim\)10 nJ; hence, the NIR pulse energy in the GaP detection crystal is also fixed, indicating that the detection efficiency is not limited by the repetition rate but instead by the THz field strength inside the crystal. The highest THz field in this work, corresponding to the red line in Fig. 3a, is \(\sim\)35 kV/cm and corresponds to a THz pulse energy of \(\sim\)85 pJ. In contrast to other schemes, since our system only relies on a single laser and a single photodiode, timing jitter is negligible. Jitter between the THz and the SC would appear in the experimental data as phase noise. In this work, the standard deviation of the phase is on the order of \(10^{-2}\) radians across the whole spectrum. This feature makes it highly suitable as a non-invasive probe in industrial assembly lines. The temporal shift between THz and SC induced by a product in the THz path can be used to extract the thickness of the product with great accuracy. Figure 3: **Relative amplitude characterization.** a) The extracted THz transients generated with NIR pulse energies of 120 \(\mu\)J (red), 20 \(\mu\)J (green), and 5.5 \(\mu\)J (blue); corresponding to detection rates of 50 kHz, 300 kHz, and 1.1 MHz, respectively. The shaded area represents the error of the measurement calculated from the standard deviation over 10k pulses. For clarity, the blue and green lines (and their corresponding standard deviation) are multiplied by factors of 3 and 4, respectively. b) The peak THz transient amplitude as the pulse energy of the ultrafast source is increased from 5.5 \(\mu\)J to 120 \(\mu\)J by decreasing the repetition rate (i.e. 1.1 MHz to 50 kHz). The linear relationship between detected THz amplitude and NIR pulse energy indicates that the system can reach higher detection rates at the cost of detection efficiency. The highest THz field, corresponding to a detection rate of 50 kHz, is \(\sim\)35 kV/cm or \(\sim\)85 pJ. Figure 4: **Dynamic range of single-pulse THz detection.** The dynamic range of the presented scheme plotted as a function of the detection rate (i.e. NIR pulse energy). The dynamic range of a) the recorded time-domain data and b) corresponding THz spectra are calculated with the methods described in Ref. [17]. The arrows in the inset of a) indicate parts of the THz waveform used to calculate the dynamic range. The inset of b) contains THz spectra (plotted in log scale) recorded at repetition rates of 50 kHz (red), 300 kHz (green), 1.1 MHz (blue), and the noise floor of the single-pulse case (dashed black line). 
For a more in-depth analysis of the measured THz waveforms at each of the studied repetition rates, we calculate the dynamic range of the recorded time-domain signals (Fig. 4a) and their corresponding spectral amplitude (Fig. 4b). The dynamic range in the time-domain is defined as the mean of the peak THz amplitude (\(A_{\textit{THz}}\)) divided by the off-peak standard deviation (\(\sigma_{\textit{OP}}\)), whereas in the Fourier domain it is defined as the maximum spectral amplitude of a single-pulse measurement divided by the noise floor (dashed black line inset Fig. 4b) [17]. Notably, the peak dynamic range approaches 300 in amplitude (50 dB in power) in the Fourier domain when operating the system at 50 kHz. The SNR of the single-pulse data can be extracted with a similar approach. In both domains, the SNR is defined as the quotient between the peak THz amplitude and the on-peak standard deviation [17]. The peak SNR achieved with a repetition rate of 50 kHz is \(\sim\)60 in the time domain and \(\sim\)150 in the Fourier domain. The dynamic range in each case follows the same trend: As the repetition rate is increased, the dynamic range decreases correspondingly. This trend is a result of the weaker nonlinear interactions in the GaP detection crystal and the constant noise floor. At repetition rates exceeding 600 kHz, the frequency domain SNR of the single pulse measurement approaches unity and is therefore too low to perform any kind of spectroscopy without pulse-to-pulse averaging. Nonetheless, the fact that single THz pulses can be recorded at MHz repetition rates with this scheme is promising for future applications of single-pulse THz-TDS at high repetition rates. For example, an oscillator delivering tens of \(\mu\)J NIR pulses would allow the presented system to investigate single-pulse dynamics approaching the nanosecond timescale. In summary, we have used chirped-pulse spectral encoding and a photonic time-stretch technique to demonstrate time-resolved THz detection at a rate up to 1.1 MHz using a single ultrafast source, to our knowledge, the fastest single-pulse table-top detection rate to date. By thoroughly investigating the noise of the presented system, we have deduced the limitation of the system to be the THz field strength at high repetition rates and, hence, low NIR pulse energies. With the simple addition of high-speed electronics and commercially available optical fibers, existing systems with high THz fields (tens of kV/cm) can almost effortlessly implement the presented detection scheme. We believe that this work leads the way towards table-top THz-TDS with high sensitivity to resolve sub-microsecond dynamics in exotic systems that are evolving on a pulse-to-pulse basis. **Funding** J.-M.M. acknowledges funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) (RGPIN-2023-05365 and RTI-2023-00252) and the Canada Foundation for Innovation (CFI) (Project Number 35269). N.C. acknowledges financial support from the Ontario Graduate Scholarship. M.L. is part of the Max Planck School of Photonics supported by BMBF, Max Planck Society, and Fraunhofer Society. N.Y.J. and M.L. acknowledge the Max Planck Institute for the Science of Light in Erlangen for financial support. This work was also supported by the National Research Council of Canada via the Joint Centre for Extreme Photonics (JCEP). **Disclosures** The authors declare no conflicts of interest.
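To make the dynamic-range and signal-to-noise definitions used above concrete, the following is a minimal numpy sketch (ours, not the authors' analysis code); `waveforms` is assumed to be a stack of repeated single-pulse time-domain measurements, the index arguments are hypothetical, and estimating the noise floor from a THz-free trace is our own simplification.

```python
import numpy as np

def time_domain_metrics(waveforms, peak_idx, offpeak_idx):
    """Dynamic range and SNR of single-pulse time-domain data.

    waveforms: [n_pulses, n_samples]; peak_idx / offpeak_idx: sample indices.
    DR  = mean peak amplitude / off-peak standard deviation
    SNR = mean peak amplitude / on-peak standard deviation
    """
    peak_amp = waveforms[:, peak_idx]
    a_thz = np.mean(peak_amp)
    dr = a_thz / np.std(waveforms[:, offpeak_idx])
    snr = a_thz / np.std(peak_amp)
    return dr, snr

def spectral_dynamic_range(single_pulse, noise_trace):
    # Maximum spectral amplitude of one pulse divided by a noise floor,
    # estimated here from the amplitude spectrum of a THz-free trace.
    spec = np.abs(np.fft.rfft(single_pulse))
    floor = np.mean(np.abs(np.fft.rfft(noise_trace)))
    return spec.max() / floor

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(-5, 5, 512)
    pulses = np.sin(2 * np.pi * 0.5 * t) * np.exp(-t**2) \
             + 0.01 * rng.normal(size=(100, t.size))
    print(time_domain_metrics(pulses,
                              peak_idx=np.argmax(pulses.mean(0)),
                              offpeak_idx=slice(0, 50)))
    print(spectral_dynamic_range(pulses[0], 0.01 * rng.normal(size=t.size)))
```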
2304.00165
Significant Phonon Drag Effect in Wide Bandgap GaN and AlN
A thorough understanding of electrical and thermal transport properties of group-III nitride semiconductors is essential for their electronic and thermoelectric applications. Despite extensive previous studies, these transport properties were typically calculated without considering the nonequilibrium coupling effect between electrons and phonons, which can be particularly strong in group-III nitride semiconductors due to the high electric fields and high heat currents in devices based on them. In this work, we systematically examine the phonon drag effect, namely the momentum exchange between nonequilibrium phonons and electrons, and its impact on charge mobility and Seebeck coefficient in GaN and AlN by solving the fully coupled electron and phonon Boltzmann transport equations with ab initio scattering parameters. We find that, even at room temperature, the phonon drag effect can significantly enhance mobility and Seebeck coefficient in GaN and AlN, especially at higher carrier concentrations. Furthermore, we show that the phonon drag contribution to mobility and Seebeck coefficient scale differently with the carrier concentration and we highlight a surprisingly important contribution to the mobility enhancement from the polar optical phonons. We attribute both findings to the distinct mechanisms the phonon drag affects mobility and Seebeck coefficient. Our study advances the understanding of the strong phonon drag effect on carrier transport in wide bandgap GaN and AlN and gives new insights into the nature of coupled electron-phonon transport in polar semiconductors.
Yujie Quan, Yubi Chen, Bolin Liao
2023-03-31T23:08:50Z
http://arxiv.org/abs/2304.00165v1
# Significant Phonon Drag Effect in Wide Bandgap GaN and AlN ###### Abstract A thorough understanding of electrical and thermal transport properties of group-III nitride semiconductors is essential for their electronic and thermoelectric applications. Despite extensive previous studies, these transport properties were typically calculated without considering the nonequilibrium coupling effect between electrons and phonons, which can be particularly strong in group-III nitride semiconductors due to the high electric fields and high heat currents in devices based on them. In this work, we systematically examine the phonon drag effect, namely the momentum exchange between nonequilibrium phonons and electrons, and its impact on charge mobility and Seebeck coefficient in GaN and AlN by solving the fully coupled electron and phonon Boltzmann transport equations with _ab initio_ scattering parameters. We find that, even at room temperature, the phonon drag effect can significantly enhance mobility and Seebeck coefficient in GaN and AlN, especially at higher carrier concentrations. Furthermore, we show that the phonon drag contribution to mobility and Seebeck coefficient scale differently with the carrier concentration and we highlight a surprisingly important contribution to the mobility enhancement from the polar optical phonons. We attribute both findings to the distinct mechanisms the phonon drag affects mobility and Seebeck coefficient. Our study advances the understanding of the strong phonon drag effect on carrier transport in wide bandgap GaN and AlN and gives new insights into the nature of coupled electron-phonon transport in polar semiconductors. Phonon Drag, Electron-phonon Coupling, Transport Coefficient Introduction Recently, the development of group-III nitride semiconductors, including GaN, AlN, and their alloys, has made a significant impact on a broad range of applications such as solar cells [1], light-emitting diodes [2; 3; 4; 5], and photodetectors [6; 7]. Moreover, the high carrier mobility, which is as high as 1200 cm\({}^{2}\)/V s in GaN [8] and 400 cm\({}^{2}\)/V s in AlN[9] at room temperature, together with high breakdown electric fields and high thermal conductivities make these two materials excellent candidates for high-power and high-frequency electronic devices [10; 11]. However, the self-heating effect due to the involved high power density is one of the main limitations of the performance of the III-nitride semiconductors [12; 13]. Thanks to their high Seebeck coefficient, it is promising to integrate on-chip thermoelectric spot cooling [14] in the proximity of a high-power transistor with the same III-nitride material, which provides a solution for heat dissipation without adding new materials. In addition, their intrinsically high Seebeck coefficient, large band gap and high temperature stability of their electrical properties make them good candidates for thermoelectric power generation devices at elevated temperatures [15; 16] Therefore, a thorough theoretical understanding of the coupled electrical, thermal and thermoelectric transport in these materials is critical for the electrical-thermal codesign of devices based on them [17]. There have been extensive theoretical studies of electrical, thermal and thermoelectric transport properties of the group-III nitrides. Like other semiconductors, their intrinsic electrical transport properties are limited by electron-phonon interactions [18]. 
In addition to the short-ranged deformation potential mechanism, the strong polar nature of the group-III nitrides features significant polar optical phonon scattering, mediated by the long-ranged Frohlich dipole interaction [19]. Furthermore, the wurtzite structure of the group-III nitrides lacking the inversion symmetry also gives rise to important piezoelectric scattering by the acoustic phonon modes [20]. Early mobility calculations were based on Monte Carlo simulation, which took polar optical phonon, piezoelectric, deformation potential, and ionized impurity scatterings into account[21; 22]. Analytical models considering various electron scattering mechanisms based on the electron Boltzmann transport equation (BTE) with the relaxation time approximation (RTA) were also developed[23; 16; 24]. However, these methods often relied on empirical and experimental parameters and the physical insights into the transport details provided by these studies were usually limited. Recent advancements in _ab initio_ methods based on the density functional theory (DFT) have made it possible to directly evaluate the electron-phonon scattering rates associated with each scattering channel from first principles [25]. Jhalani et al. calculated the electron-phonon scattering rates of electrons and holes in GaN from first principles and simulated the cooling process of hot electrons and holes [26]. In a follow-up work, they also showed the importance of the dynamic quadrupolar interaction on the piezoelectric electron-phonon scattering in GaN [27]. In parallel, Ponce et al. calculated the electron and hole mobilities in GaN limited by electron-phonon interactions from first principles and predicted that the hole mobility can be improved by strain [28; 29]. On the thermal transport side, modeling thermal transport by solving the phonon BTE with interatomic force constants evaluated from DFT and density functional perturbation theory (DFPT) is now routine, and has been applied to understand the thermal conductivity of GaN [30; 31] with good agreement with experiments. Utilizing first-principles phonon calculations coupled with the modern theory of polarization, we recently showed that thermal transport in GaN can be modulated by strong external electric fields [32]. Despite the remarkable progress, one limitation of existing first-principles calculations of electrical and thermal transport in group III-nitrides is that the interactions between the nonequilibrium populations of electrons and phonons are ignored. Namely, phonons are assumed to be in thermal equilibrium when the scattering rates of electrons are calculated and vice versa. Due to the existence of high electric fields and high heat currents in group-III nitride-based devices, which tends to result in highly nonequilibrium electron and phonon distributions, the effect of nonequilibrium phonons on electronic transport properties, known as the phonon drag[33; 34], is critical to evaluating the performance of these devices. Phonon drag refers to the momentum exchange between non-equilibrium phonons and electrons and is typically suppressed at higher temperatures due to the more predominant anharmonic phonon-phonon scattering than electron-phonon scattering, which hinders the momentum flow between electrons and phonons [34]. The phonon drag contribution to the Seebeck coefficient was first recognized in germanium [35; 36], and then in silicon [37] and FeSb\({}_{2}\)[38; 39; 40]. 
In all these cases, the phonon drag contribution is prominent only at very low temperatures. The nonnegligible phonon drag contribution to the Seebeck coefficient at room temperature was first recognized in silicon by Mahan et al[41], who combined first-principles phonon calculations with an analytical electron-phonon interaction model. Zhou et al. calculated the Seebeck coefficient in silicon including the phonon drag by first-principles calculations and found that the phonon drag contributes to more than 30% of the total Seebeck coefficient at room temperature [42]. Both of these calculations solved the partially decoupled electron BTE with the assumption that the non-equilibrium distribution of the electronic system does not affect the phonon system. A similar approach was used by Bonini et al. to evaluate thermoelectric transport properties in silicon and diamond [43; 44]. Recently, Protik et al. developed a coupled electron-phonon BTE solver framework [45; 46], where the mutual drag between electrons and phonons is fully captured. Using this framework, they found that in n-doped 3C-SiC, the phonon drag contributes to more than 50% of the total Seebeck coefficient even at room temperature [47]. Using the same method, Li et al. found that in p-doped diamond, the Seebeck coefficient is enhanced by more than a factor of 2 at 300 K when the phonon drag is included [48]. In a related work, they identified an unusually large phonon drag contribution to the Seebeck coefficient one order of magnitude higher than the normal diffusive Seebeck coefficient in heavily doped p-type cubic boron arsenide [49]. Experimentally, a recent study in AlGaN/GaN two-dimensional electron gas showed that the phonon drag contributes to 32% of the Seebeck coefficient at room temperature [50]. In addition to the Seebeck coefficient, the increase of carrier mobility at 300 K due to the phonon drag effect was also predicted in 3C-SiC [47] and GaAs [45]. In this work, we utilized the computational framework developed by Protik et al. [46] to investigate the influence of the phonon drag effect on electrical transport properties in both n-type and p-type GaN and AlN. We focused our analysis on n-type transport with electron concentrations ranging from \(10^{15}\) cm\({}^{-3}\) to \(10^{19}\) cm\({}^{-3}\). We found that, at room temperature, the phonon drag effect has little contribution to the electron mobility in GaN and AlN at low doping levels, while its contribution becomes more evident with the increasing carrier concentration. Besides, a significant enhancement of the Seebeck coefficient due to phonon drag was found in both GaN and AlN throughout the carrier concentration range that we investigated. The microscopic mechanisms of the phonon drag contribution to the mobility and the Seebeck coefficient were also analyzed. Our work provides a detailed fundamental understanding of the phonon drag effect and its impact on electrical transport properties in wide bandgap group-III nitrides. Computational methods ### Density Functional Theory Calculations First principles electronic structure calculations were carried out using the Quantum ESPRESSO (QE) package [51] with the scalar-relativistic Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials [52] within the local density approximation (LDA) [53]. The kinetic energy cutoff for wavefunctions was set to 80 Ry. 
A mesh grid of \(12\times 12\times 12\) in the first Brillouin zone (BZ) was adopted and the total electron energy convergence threshold for self-consistency was set to \(1\times 10^{-10}\) Ry. The crystal lattice was fully relaxed with a force threshold of \(10^{-4}\) eV/A, with lattice parameters \(a=3.16\) A, \(c=5.148\) A in GaN and \(a=3.12\) A, \(c=5.0\) A in AlN, both of which were in excellent agreement with the experimental values [54]. The phonon dispersion was calculated using DFPT [55] with a threshold of \(10^{-13}\) Ry for self-consistency on a \(6\times 6\times 6\) q-point grid. The non-analytical correction term due to the long-range Coulomb interactions (the Frolich interaction) was also included. The third-order anharmonic interatomic force constants were computed using a \(3\times 3\times 3\) supercell using the finite displacement method [56], taking up to the fifth nearest neighbors into consideration. The electron-phonon coupling were calculated using the EPW code [57], where the electron-phonon matrix elements were first calculated on a coarse \(12\times 12\times 12\) k-point grid and \(6\times 6\times 6\) q-point grid and then transformed to the real-space Wannier representation. ### Phonon Drag and Coupled Electron-Phonon BTEs To capture the phonon drag contribution to electrical transport properties, coupled electron-phonon BTEs with electron-phonon matrix elements calculated from first principles need to be solved. The steady-state electron and phonon BTEs can be written as [58]: \[\left\{\begin{array}{l}\mathbf{v}_{\alpha}(\mathbf{k})\cdot\nabla_{\mathbf{ r}}f_{\alpha}(\mathbf{k})+\frac{\mathbf{F}}{\hbar}\cdot\nabla_{\mathbf{k}}f_{ \alpha}(\mathbf{k})=\left(\frac{\partial f_{\alpha}(\mathbf{k})}{\partial t} \right)_{\mathrm{e-ph}}+\left(\frac{\partial f_{\alpha}(\mathbf{k})}{\partial t }\right)_{\mathrm{e-imp}}+...\\ \mathbf{v}_{\lambda}(\mathbf{q})\cdot\nabla_{\mathbf{r}}n_{\lambda}(\mathbf{q })=\left(\frac{\partial n_{\lambda}(\mathbf{q})}{\partial t}\right)_{\mathrm{ ph-ph}}+\left(\frac{\partial n_{\lambda}(\mathbf{q})}{\partial t} \right)_{\mathrm{ph-e}}+\left(\frac{\partial n_{\lambda}(\mathbf{q})}{\partial t }\right)_{\mathrm{ph-imp}}+...,\end{array}\right. \tag{1}\] where \(\mathbf{v}_{\alpha}(\mathbf{k})\) and \(\mathbf{v}_{\lambda}(\mathbf{q})\) are velocity vectors for electrons and phonons with wave vectors \(\mathbf{k}\) and \(\mathbf{q}\) and band index \(\alpha\) and \(\lambda\), respectively, \(f\) and \(n\) are the distribution functions for elec trons and phonons and \(\mathbf{F}\) is the external force, which is the electrostatic field in this work. The collision terms on the right side of the equations represent different mechanisms, including electron-phonon and electron-impurity scatterings for electrons and phonon-phonon, phonon-electron and phonon-impurity scatterings for phonons. 
Within a linearized BTE formalism, which only takes into account the first-order deviation of the electron and phonon distribution functions from their equilibrium values, the collision term due to electron-phonon interactions can be rewritten as [46]: \[\left\{\begin{aligned} \left(\frac{\partial f_{\alpha}(\mathbf{k})}{ \partial t}\right)_{e-ph}\simeq&-\left[\sum_{\mathbf{k}^{ \prime}\beta,\mathbf{q}\lambda}F_{\mathbf{k}\alpha}\left(\mathbf{k}^{\prime} \beta,\mathbf{q}\lambda\right)\right]\cdot\Delta f_{\mathbf{k}\alpha}+\sum_{ \mathbf{k}^{\prime}\beta,\mathbf{q}\lambda}\left[F_{\mathbf{k}^{\prime}\beta }(\mathbf{k}\alpha,\mathbf{q}\lambda)\cdot\Delta f_{\mathbf{k}^{\prime}\beta }\right]+\\ &\sum_{\mathbf{k}^{\prime}\beta,\mathbf{q}\lambda}\left[F_{ \mathbf{q}\lambda}\left(\mathbf{k}\alpha,\mathbf{k}^{\prime}\beta\right) \cdot\Delta n_{\mathbf{q}\lambda}\right]\\ \left(\frac{\partial n_{\lambda}(\mathbf{q})}{\partial t}\right)_{ e-ph}\simeq&\sum_{\mathbf{k}\alpha,\mathbf{k}^{\prime}\beta} \left[G_{\mathbf{k}\alpha}\left(\mathbf{k}^{\prime}\beta,\mathbf{q}\lambda \right)\cdot\Delta f_{\mathbf{k}\alpha}+G_{\mathbf{k}^{\prime}\beta}(\mathbf{ k}\alpha,\mathbf{q}\lambda)\cdot\Delta f_{\mathbf{k}^{\prime}\beta}\right]-\\ &\left[\sum_{\mathbf{k}\alpha,\mathbf{k}^{\prime}\beta}G_{ \mathbf{q}\lambda}\left(\mathbf{k}\alpha,\mathbf{k}^{\prime}\beta\right) \right]\cdot\Delta n_{\mathbf{q}\lambda},\end{aligned}\right. \tag{2}\] where the coefficients \(F\) and \(G\) only depend on the equilibrium distribution functions of electrons \(f^{0}\) and phonons \(n^{0}\), \[\left\{\begin{aligned} & F_{\mathbf{k}\alpha}\left(\mathbf{k}^{ \prime}\beta,\mathbf{q}\lambda\right)=\left[\left(n^{0}_{\mathbf{q}\lambda}+f ^{0}_{\mathbf{k}^{\prime}\beta}\right)\Pi_{-}+\left(n^{0}_{\mathbf{q}\lambda} +1-f^{0}_{\mathbf{k}^{\prime}\beta}\right)\Pi_{+}\right]\\ & F_{\mathbf{k}^{\prime}\beta}(\mathbf{k}\alpha,\mathbf{q} \lambda)=\left[\left(n^{0}_{\mathbf{q}\lambda}+1-f^{0}_{\mathbf{k}\alpha} \right)\Pi_{-}+\left(n^{0}_{\mathbf{q}\lambda}+f^{0}_{\mathbf{k}\alpha} \right)\Pi_{+}\right]\\ & F_{\mathbf{q}\lambda}\left(\mathbf{k}\alpha,\mathbf{k}^{\prime} \beta\right)=\left[\left(f^{0}_{\mathbf{k}^{\prime}\beta}-f^{0}_{\mathbf{k} \alpha}\right)\Pi_{-}+\left(f^{0}_{\mathbf{k}^{\prime}\beta}-f^{0}_{\mathbf{k} \alpha}\right)\Pi_{+}\right]\\ & G_{\mathbf{k}\alpha}\left(\mathbf{k}^{\prime}\beta,\mathbf{q} \lambda\right)=\left[-\left(n^{0}_{\mathbf{q}\lambda}+f^{0}_{\mathbf{k}^{ \prime}\beta}\right)\Pi_{-}+\left(n^{0}_{\mathbf{q}\lambda}+1-f^{0}_{\mathbf{ k}^{\prime}\beta}\right)\Pi_{+}\right]\\ & G_{\mathbf{k}^{\prime}\beta}(\mathbf{k}\alpha,\mathbf{q}\lambda )=\left[\left(n^{0}_{\mathbf{q}\lambda}+1-f^{0}_{\mathbf{k}\alpha}\right)\Pi_ {-}-\left(n^{0}_{\mathbf{q}\lambda}+f^{0}_{\mathbf{k}\alpha}\right)\Pi_{+} \right]\\ & G_{\mathbf{q}\lambda}\left(\mathbf{k}\alpha,\mathbf{k}^{\prime} \beta\right)=\left[\left(f^{0}_{\mathbf{k}^{\prime}\beta}-f^{0}_{\mathbf{k} \alpha}\right)\Pi_{-}-\left(f^{0}_{\mathbf{k}^{\prime}\beta}-f^{0}_{\mathbf{ k}\alpha}\right)\Pi_{+}\right],\end{aligned}\right. 
\tag{3}\] and \[\left\{\begin{aligned} \Pi_{-}&=\frac{2\pi}{\hbar} \left|g_{\alpha\beta\lambda}\left(\mathbf{k},\mathbf{k}^{\prime},\mathbf{q} \right)\right|^{2}\cdot\delta\left(E_{\mathbf{k}^{\prime}\beta}-E_{\mathbf{k }\alpha}-\hbar\omega_{\mathbf{q}\lambda}\right)\cdot\delta\left(\mathbf{k}^{ \prime}-\mathbf{k}-\mathbf{q}\right)\\ \Pi_{+}&=\frac{2\pi}{\hbar}\left|g_{\alpha\beta \lambda}\left(\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}\right)\right|^{2} \cdot\delta\left(E_{\mathbf{k}^{\prime}\beta}-E_{\mathbf{k}\alpha}+\hbar \omega_{\mathbf{q}\lambda}\right)\cdot\delta\left(\mathbf{k}^{\prime}-\mathbf{k }+\mathbf{q}\right),\end{aligned}\right. \tag{4}\] where \[g_{\alpha\beta\lambda}\left(\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}\right)= \left(\frac{\hbar}{2m_{0}\omega_{\mathbf{q}_{\lambda}}}\right)^{1/2}\cdot \left\langle\mathbf{k}^{\prime}\beta\left|\partial_{\mathbf{q}\lambda}V \right|\mathbf{k}\alpha\right\rangle \tag{5}\] is the electron-phonon interaction matrix element. The non-equilibrium phonon distributions, described by \(\Delta n_{\mathbf{q}\lambda}\), which appear in the electron BTE, are responsible for the phonon drag effect. When phonons are driven far away from thermal equilibrium, this term is no longer negligible and can greatly modify the electron distribution and, thus, the electronic transport properties. To capture the phonon drag effect in GaN and AlN, we utilized the Elphbolt package developed by Protik et al. [46] to solve the fully coupled electron-phonon BTEs with electron and phonon dispersions and electron-phonon coupling matrix elements all evaluated from first principles. Specifically, we used Elphbolt to transform the quantities from the real-space Wannier representation to the Bloch representation in the reciprocal space on a fine \(50\times 50\times 50\) q-point grid and a \(150\times 150\times 150\) k-point grid. These matrix elements were used as input to the coupled electron-phonon BTEs, where the nonequilibrium distribution functions of electrons and phonons can be solved and used to evaluate the electrical transport properties [58]. In addition to electron-phonon scatterings, the Brooks-Herring model was employed to calculate the electron-charged-impurity scattering rates, where the impurity potential has a screened Coulomb form [59]. The solution of the distribution function of electrons in the linear response regime can be written as [58]: \[\begin{split} f_{\mathbf{k}\alpha}&=f_{\mathbf{k} \alpha}^{0}-\frac{\partial f_{\mathbf{k}\alpha}^{0}}{\partial\varepsilon_{ \mathbf{k}\alpha}}\left(\mathbf{J}_{\mathbf{k}\alpha}\cdot\mathbf{E}+\mathbf{ I}_{\mathbf{k}\alpha}\cdot\nabla T\right)\\ &=f_{\mathbf{k}\alpha}^{0}\left[1-\frac{1}{k_{B}T}(1-f_{\mathbf{ k}\alpha}^{0})(\mathbf{J}_{\mathbf{k}\alpha}\cdot\mathbf{E}+\mathbf{I}_{\mathbf{k} \alpha}\cdot\nabla T)\right],\end{split} \tag{6}\] where \(\varepsilon_{\mathbf{k}\alpha}\) is the electron energy, \(k_{B}\) is the Boltzmann constant, and \(\mathbf{J}_{\mathbf{k}\alpha}\) and \(\mathbf{I}_{\mathbf{k}\alpha}\) are the electron response coefficients of the electron state \(\mathbf{k}\alpha\) to the applied electric field \(\mathbf{E}\) and temperature gradient \(\nabla T\), respectively. 
The electrical conductivity \(\sigma\) and the Seebeck coefficient \(S\) can then be calculated as \[\begin{split}\sigma&=\frac{2e}{VN_{k}k_{B}T}\sum_{\mathbf{k}\alpha}\mathbf{v}_{\mathbf{k}\alpha}f_{\mathbf{k}\alpha}^{0}(1-f_{\mathbf{k}\alpha}^{0})\times\mathbf{J}_{\mathbf{k}\alpha}\\ \sigma S&=-\frac{2e}{VN_{k}k_{B}T}\sum_{\mathbf{k}\alpha}\mathbf{v}_{\mathbf{k}\alpha}f_{\mathbf{k}\alpha}^{0}(1-f_{\mathbf{k}\alpha}^{0})\times\mathbf{I}_{\mathbf{k}\alpha},\end{split} \tag{7}\] where \(V\) is the volume of the unit cell and \(N_{\mathbf{k}}\) is the number of electronic wave vectors in the BZ.

## III Results and Discussions

### Phonon drag effect on carrier mobility

The electronic band structures of wurtzite GaN and AlN were calculated using ONCV pseudopotentials within LDA, as shown in Fig. 1. It is well established that LDA tends to underestimate the bandgap. Due to the large bandgaps in GaN and AlN, which suppress thermal excitations and bipolar transport, accurate bandgap values are not essential in the current study. To further demonstrate the feasibility of using LDA throughout the calculation, we calculated the electron effective mass in GaN and AlN in the vicinity of the conduction-band minimum (CBM), which is transport relevant, and compared the results with the Heyd, Scuseria, and Ernzerhof (HSE) [60; 61] screened hybrid functional calculations [62] and quasiparticle \(G_{0}W_{0}\) calculations [63] in the literature, which are known to provide more accurate bandgap values. The results are listed in Table 1. Our calculation shows that the effective mass of GaN is 0.182 in units of the free electron mass parallel to the c-axis and 0.202 perpendicular to the c-axis. In AlN, the effective mass is 0.304 along the c-axis and 0.321 perpendicular to the c-axis, which are close to the literature values. Although LDA underestimates the band gap, the similarity of the electron effective mass among LDA, \(G_{0}W_{0}\) and HSE justifies our usage of LDA in the calculation of electronic transport properties.

\begin{table} \begin{tabular}{l c c c c} & Direction & \(G_{0}W_{0}\)[63] & HSE [62] & LDA (this work) \\ \hline GaN & \(m_{e}^{\parallel}\) & 0.19 & 0.19 & 0.182 \\ & \(m_{e}^{\perp}\) & 0.21 & 0.22 & 0.202 \\ AlN & \(m_{e}^{\parallel}\) & 0.32 & 0.31 & 0.304 \\ & \(m_{e}^{\perp}\) & 0.33 & 0.32 & 0.321 \\ \end{tabular} \end{table} Table 1: Electron effective mass of GaN and AlN parallel and perpendicular to the c-direction in units of the free electron mass.

The Wannier-interpolated electronic band structures are shown in Fig. 1, which are in excellent agreement with the first-principles DFT calculation, providing a solid foundation for accurate electron-phonon matrix element calculations. First, we focus on the enhancement of carrier mobility in GaN and AlN due to the phonon drag effect. When the phonon drag effect is not considered, the electron-phonon interaction is a purely momentum-dissipation process for electrons that limits the electron mobility. Microscopically, however, the electron-phonon interaction process conserves the total momentum and electrons transfer their momentum to phonons, creating a nonequilibrium phonon distribution. While a fraction of the excess momentum that phonons receive from electrons will be dissipated through anharmonic phonon-phonon interactions and phonon-impurity scatterings, the rest can be pumped back into electrons through electron-phonon interactions, which can act to boost the carrier mobility.
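Once \(\mathbf{J}_{\mathbf{k}\alpha}\) and \(\mathbf{I}_{\mathbf{k}\alpha}\) are known, the Brillouin-zone sums in Eq. (7) reduce to weighted sums of dyadic products \(\mathbf{v}_{\mathbf{k}\alpha}\otimes\mathbf{J}_{\mathbf{k}\alpha}\) and \(\mathbf{v}_{\mathbf{k}\alpha}\otimes\mathbf{I}_{\mathbf{k}\alpha}\) over electronic states (reading \(\times\) in Eq. (7) as the outer product). The sketch below uses mock arrays in place of the first-principles inputs (band velocities, occupations, response vectors) and is meant only to show the structure of the sums; it is not the Elphbolt implementation.

```python
import numpy as np

def transport_tensors(v, f0, J, I, V_cell, N_k, T,
                      e=1.602176634e-19, kB=1.380649e-23):
    """Eq. (7): conductivity tensor sigma and sigma*S as weighted sums of
    outer products over electronic states k,alpha.
    v, J, I: (N, 3) arrays; f0: (N,) equilibrium occupations."""
    w = f0 * (1.0 - f0)                         # thermal weighting f0(1 - f0)
    pref = 2.0 * e / (V_cell * N_k * kB * T)    # factor 2 for spin degeneracy
    sigma  =  pref * np.einsum('n,ni,nj->ij', w, v, J)
    sigmaS = -pref * np.einsum('n,ni,nj->ij', w, v, I)
    S = np.linalg.solve(sigma, sigmaS)          # Seebeck tensor S = sigma^-1 (sigma S)
    return sigma, S

# Mock inputs with the right shapes (illustrative only, not DFT data).
rng = np.random.default_rng(0)
n_states = 1000
sigma, S = transport_tensors(v=rng.normal(size=(n_states, 3)),
                             f0=rng.uniform(0.0, 1.0, n_states),
                             J=rng.normal(size=(n_states, 3)),
                             I=rng.normal(size=(n_states, 3)),
                             V_cell=4.6e-29, N_k=150**3, T=300.0)
```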
This drag-induced enhancement of the carrier mobility has been observed in previous phonon drag studies in GaAs [45] and 3C-SiC [47]. Here, we evaluated this effect on the carrier mobility in GaN and AlN, which is of paramount importance for device applications. The room-temperature electron mobility of n-type GaN and n-type AlN was calculated with the electron concentrations ranging from \(10^{15}\) cm\({}^{-3}\) to \(10^{19}\) cm\({}^{-3}\). In Fig. 2, we show the electron mobility as a function of electron concentration at room temperature with and without the phonon drag contribution. The experimental values [64, 65, 66, 67, 68, 69] and the theoretical BTE result without the phonon drag contribution [16] are also plotted here for comparison. It is noted that the experimental values of the electron mobility in GaN at \(n=10^{16}\) cm\({}^{-3}\) are around \(1000\) cm\({}^{2}\)/V s [65], while our calculation predicts \(600\) cm\({}^{2}\)/V s. This discrepancy is due to the fact that our current electron-phonon interaction calculations only included the dipole-like Fröhlich long-range coupling [19], where the higher-order quadrupolar term was excluded. Jhalani et al. have shown that considering only the dipole interaction overestimates the electron interactions with acoustic phonons in polar, and particularly, piezoelectric materials like GaN and AlN, and the inclusion of the quadrupole term can correct this overestimation and provide more accurate coupling matrix elements between electrons and acoustic phonons [27]. Their result is also labeled in Fig. 2(a) for comparison.

Figure 1: The electronic band structure of wurtzite (a) GaN and (b) AlN. The Wannier-interpolated band structures (red dashed lines) are in perfect agreement with first-principles calculations (black solid lines).

Although our calculations underestimated the electron mobility of GaN at lower carrier concentrations, the results at higher carrier concentrations were in better agreement with the experiments, since the quadrupole term primarily impacts the interactions between electrons and acoustic phonons with small wave vectors, which represent a smaller fraction of the total electron-phonon scatterings at higher concentrations. It is shown here that, without taking the phonon drag effect into consideration, the calculated mobility (labeled "decoupled" in Fig. 2) would be lower than the experimental values, which are usually further limited by the sample quality and other nonidealities, implying that the phonon drag is crucial for accurate calculation of the mobility at higher electron concentrations. Although the experimental values of mobility in AlN are scarce in the literature given the difficulty in making highly n-doped AlN samples, our results are in reasonable agreement with the reference values that can be found. The percentage of the phonon drag contribution to the electron mobility in GaN and AlN is shown in Fig. 3. For both GaN and AlN, the phonon drag contribution is negligible at \(n=10^{15}\) cm\({}^{-3}\). However, the phonon drag contribution becomes more prominent with an increasing carrier concentration. At \(n=10^{19}\) cm\({}^{-3}\), 32.4% of the total electron mobility is due to the phonon drag in GaN, and 46.4% in AlN. This carrier concentration dependence can be explained by the relative strength between phonon-phonon scatterings and phonon-electron scatterings. The phonon scattering rates of GaN and AlN within RTA at \(n=10^{16}\) cm\({}^{-3}\) and \(n=10^{19}\) cm\({}^{-3}\) are shown in Fig.
4, where the black dots denote the phonon-phonon scattering rates and the green dots denote the phonon-electron scattering rates. Although the phonon drag effect is not included at the RTA level, the phonon scattering rates within RTA can provide useful information for our analysis. It is seen that at low carrier concentrations, the phonon-electron scattering is much weaker than the phonon-phonon scattering, while at high carrier concentrations, the phonon-electron scattering becomes stronger than the phonon-phonon scattering for the low-frequency acoustic phonons and the polar longitudinal optical (LO) phonons, since the number of available electronic states that phonons can couple with increases [70; 71]. This strong phonon-electron scattering facilitates the momentum circulation between electrons and phonons. The momentum previously transferred to phonons from electrons can be pumped back more effectively at higher carrier concentrations, making electrons less dissipative compared with the case without the phonon drag effect. This suggests that, as the carrier concentration increases, although the overall carrier mobility decreases due to the increased electron-phonon scattering, the relative enhancement from the phonon drag effect increases due to more effective electron-phonon momentum circulation.

Figure 2: The calculated electron mobility at room temperature with (black solid line) and without (red solid line) the phonon drag effect as a function of carrier concentration in (a) GaN and (b) AlN. The experimental and theoretical values are also shown for comparison. Experimental mobility values of GaN are taken from Ref. [64; 65; 66; 67; 68]. The experimental values of AlN are taken from Ref. [9; 69]. The theoretical BTE result is taken from Ref. [16], which does not consider the phonon drag effect. The first-principles calculation of GaN including the quadrupolar term is taken from Ref. [27].

The details of the phonon drag influence on mobility can be further illustrated by analyzing the percentage contribution of each phonon branch to the enhanced mobility due to the drag effect, as shown in Fig. 5. The acoustic modes' contribution is depicted by the solid lines, while the LO phonon mode contribution is represented by the dashed lines. The contributions from other optical modes are negligibly small and, thus, are not shown here. The strong interaction between LO phonons and electrons in both GaN and AlN results in a significant contribution to the drag-induced mobility enhancement, making the LO phonons' impact on mobility non-negligible. Specifically, at a carrier concentration of \(n=10^{19}\) cm\({}^{-3}\), the contribution of LO phonons to the mobility gain is almost equal to that of the acoustic phonons. As discussed previously, the exclusion of the quadrupolar correction in our calculation tends to overestimate the electron interaction with acoustic phonons [27], leading to inaccuracies in the mobility calculation. However, since the LO phonon scattering with electrons is largely controlled by the Fröhlich dipolar interaction rather than the quadrupolar term [27], this significant drag effect from LO phonons is accurately captured in our calculation, signaling the important role of polar LO phonons in the phonon drag effect in strongly polar semiconductors. This observation is also a manifestation of the fact that the impact of the phonon drag on the carrier mobility mainly depends on the electron-phonon scattering rate.
This is in contrast to the phonon drag contribution to the Seebeck coefficient, which also depends on the phonon mean free paths. This distinction will be discussed in more detail in the next section.

Figure 3: The phonon drag contribution to the total electron mobility in GaN and AlN. The phonon drag contribution becomes more prominent with the increasing carrier concentration.

Figure 4: Calculated phonon-phonon and phonon-electron scattering rates of (a) GaN at \(n=10^{16}\) cm\({}^{-3}\); (b) GaN at \(n=10^{19}\) cm\({}^{-3}\); (c) AlN at \(n=10^{16}\) cm\({}^{-3}\) and (d) AlN at \(n=10^{19}\) cm\({}^{-3}\). The strong phonon-electron scattering of the low-frequency acoustic phonons and longitudinal optical phonons at higher concentrations facilitates the momentum circulation between electrons and phonons.

Figure 5: Phonon-branch-resolved contribution to the drag-enhanced electron mobility in GaN and AlN. Black lines denote the results in GaN and red lines denote the results in AlN. Solid lines represent the acoustic modes contribution and the dashed lines represent the LO mode contribution. The contributions from other optical phonon modes are negligible and not plotted here.

### Phonon drag effect on Seebeck coefficient

The most prominent manifestation of the phonon drag effect is its impact on the Seebeck coefficient. Here, the Seebeck coefficient of n-doped GaN and AlN at room temperature was calculated with and without considering the phonon drag contribution. The absolute value of the Seebeck coefficient as a function of carrier concentration is shown in Fig. 6. The experimental values [67; 72; 73; 74] and previous theoretical calculation results [16; 75] are also shown for comparison. Our results from solving the decoupled electron and phonon BTEs, shown as the gray line, are in agreement with previous theoretical calculations using analytical models without considering the phonon drag effect. It is noted that due to the exclusion of the quadrupolar corrections [27], our calculated phonon drag contribution can be overestimated at low carrier concentrations because of the overestimated electron-phonon coupling matrix elements involving low-frequency acoustic phonons. At higher carrier concentrations, where a broader spectrum of phonons contribute to electron-phonon interactions and the overestimated part accounts for less of the total electron-phonon scatterings, our results are in good agreement with the experimental results. Since the calculated Seebeck coefficient without considering the phonon drag is much lower than the experiments, in which samples are also affected by high threading dislocation densities and other impurities, the phonon drag effect is indispensable for accurately predicting the Seebeck coefficient in GaN and AlN. We also found that, different from the electron mobility, for which the phonon drag contribution is negligible at low carrier concentrations, the influence of the phonon drag on the Seebeck coefficient is prominent throughout the entire carrier concentration range that we investigated.

Figure 6: The calculated Seebeck coefficient with and without considering the phonon drag as a function of carrier concentration in (a) GaN and (b) AlN. The experimental values of the Seebeck coefficient in GaN are taken from Ref. [67; 73] and the theoretical calculation results are from Ref. [16; 75]. The experimental values of the Seebeck coefficient in AlN are taken from Ref. [74] and the theoretical BTE calculation result is from Ref. [16]. The calculation results cited here did not take the phonon drag effect into consideration.

It should also be mentioned that, at high carrier concentrations, in addition to electron-impurity scattering, phonon-impurity scattering also needs to be considered. The typical method for n-type doping in GaN and AlN is adding silicon, which acts as a shallow donor [76; 77], meaning that the carrier concentration is quite close to the concentration of silicon atoms at room temperature. At \(n=10^{19}\) cm\({}^{-3}\), we found that the inclusion of the silicon atoms with the same concentration only leads to less than 1% difference compared with the result without including the phonon-impurity scattering. To gain more insight into the phonon drag contribution to the Seebeck coefficient, the percentage phonon drag contribution is plotted as a function of the carrier concentration, as shown in Fig. 7. It is found that at relatively low carrier concentrations, the phonon drag contribution changes very little with the carrier concentration, whereas at higher concentrations, the drag contribution decreases rapidly with an increasing carrier concentration. This trend is also observed and discussed in silicon [42] and SiC [47], where the phonon drag contribution to the Seebeck coefficient is expected to be constant at low carrier concentrations. According to Herring's theory [34], the drag contribution to the Seebeck coefficient is proportional to the momentum gain per electron from nonequilibrium phonons. At low carrier concentrations, it is the number of available electronic states to participate in electron-phonon interactions that limits the momentum transfer from phonons to electrons. In such a case, the momentum gain per electron does not change significantly with the carrier concentration. In contrast, at higher carrier concentrations, the number of electronic states that are able to couple with phonons is no longer a limiting factor. Instead, with the increased phonon-electron scattering rate, the shortened phonon relaxation time becomes the limiting factor, since phonons with shorter lifetimes are more likely to be scattered before their momentum can be transferred to the electron system. As a result, the total electron momentum gain saturates, and because of the increased carrier concentration, the momentum gain per electron decreases, leading to the rapid reduction of the phonon drag contribution at higher concentrations. The above interpretation is based on the Seebeck picture, in which phonon drag facilitates more electrons traveling along the temperature gradient by transferring momentum from non-equilibrium phonons to electrons. In parallel, the equivalent Peltier picture, in which an isothermal electric field induces a heat flow, can provide a more straightforward interpretation regarding the phonon and the electron contribution to the Seebeck coefficient. Since phonons cannot directly couple to the electric field, the nonzero Seebeck contribution from phonons is completely a result of the drag effect. Given the Kelvin relation [78], an extra contribution to the Peltier coefficient implies the same extra contribution to the Seebeck coefficient. In the Peltier picture, the electric field generates a non-equilibrium distribution of electrons, which transfer momentum to phonons through electron-phonon interactions so that phonons can also contribute to the total Seebeck coefficient.
At low carrier concentrations, the phonon contribution is almost independent of the carrier concentration, since the phonon lifetimes remain the same due to the dominant phonon-phonon scatterings. At higher carrier concentrations, phonon lifetimes decrease due to stronger phonon-electron scatterings and more momentum is returned to electrons, leading to the decrease of the phonon contribution to the Seebeck coefficient. Although the phonon drag percentage contribution decreases with an increasing carrier concentration, as shown in Fig. 7, the overall contribution is still significant throughout the carrier concentration range, stemming from the weak anharmonic phonon-phonon scattering and, thus, a low dissipation rate of excess phonon momentum in GaN and AlN.

Figure 7: The calculated phonon drag contribution to the total Seebeck coefficient as a function of the carrier concentration in GaN and AlN. As the carrier concentration increases, the phonon drag component changes little at first and then decreases rapidly. Despite this trend, the phonon drag impact remains significant across the entire range of carrier concentrations.

To further understand the roles played by different phonon modes in the phonon drag effect, we also calculated the accumulated contribution to the phonon drag enhancement of the Seebeck coefficient and the carrier mobility with respect to the phonon mean free path (MFP), as shown in Fig. 8. This calculation was conducted by excluding the contribution from phonons with MFPs beyond a given threshold, which is labeled on the horizontal axis in Fig. 8. The mobility accumulation curves at lower carrier concentrations are not shown here, since the mobility gain due to phonon drag is quite small at lower carrier concentrations. It is found that at the same carrier concentration and the same phonon MFP threshold, the percentage contribution of the phonon drag effect to the mobility enhancement is greater than that to the Seebeck coefficient enhancement, indicating that phonons with shorter MFPs contribute more to the carrier mobility than to the Seebeck coefficient. This distinction arises again from the different mechanisms of the phonon drag influence on mobility and Seebeck coefficient. The increase in mobility due to phonon drag originates from the extra momentum that electrons acquire through interactions with nonequilibrium phonons. With stronger electron-phonon interactions, more momentum is transferred back to electrons and, thus, more mobility gain can be achieved. In other words, the mobility gain due to phonon drag is only determined by the electron-phonon interaction strength, and the lifetime of drag-active phonons is not a contributing factor. In contrast, a large phonon drag contribution to the Seebeck coefficient requires not only strong electron-phonon interactions but also weak phonon momentum dissipation due to phonon-phonon and phonon-electron scatterings, as discussed in the Peltier picture above. For instance, as can be seen in Fig. 8, at \(n=10^{19}\) cm\({}^{-3}\), phonons with MFP shorter than \(0.5\,\mu\)m, whose lifetimes are shorter than \(1000\,\)ps as shown in Fig.
9(a) and (c), contribute to more than \(60\%\) and \(40\%\) of the total mobility gain in GaN and AlN, whereas these short-lived phonons only contribute to \(24\%\) and \(22\%\) of the total Seebeck gain in GaN and AlN, respectively. We also found that the accumulated phonon drag contribution to the Seebeck coefficient and the mobility shifts toward the shorter phonon MFP region as the carrier concentration increases. This MFP-related feature is due to momentum and energy conservation in electron-phonon interactions. At low carrier concentrations, only the phonons with small wave vectors can couple with electronic states close to the band edges while satisfying the energy conservation condition. Typically, phonons with smaller wave vectors possess longer MFPs. With the increase of the carrier concentration, electrons occupy more of the reciprocal space and the energy and momentum conservation requirements in electron-phonon interactions become easier to satisfy, making phonons with shorter MFP active contributors to the phonon drag, as shown in Fig. 9(b) and Fig. 9(d).

Figure 8: The calculated accumulated percentage contribution of the phonon drag effect to the Seebeck coefficient (solid lines) and the mobility (dashed lines) as a function of the phonon mean free paths in (a) GaN and (b) AlN. Phonons with shorter mean free paths tend to contribute more to the mobility enhancement than to the Seebeck coefficient.

Figure 9: The calculated phonon lifetime (left column) and phonon-electron scattering rate (right column) as a function of the phonon mean free path in GaN (top panels) and AlN (bottom panels).

### Phonon drag effect in p-type GaN and AlN

For the completeness of the study, we also calculated the phonon drag contribution in p-type GaN and AlN in both low and high doping regimes, despite the fact that effective p-type doping remains challenging experimentally in both materials [79]. The results are summarized in Table 2. The heavy effective mass of the valence band results in a greater number of electronic states available for scattering, which makes the calculation computationally intensive and time-consuming. In our calculation, a \(30\times 30\times 30\) q-point grid and a \(90\times 90\times 90\) k-point grid were used to transform the electron-phonon scattering matrix elements from the real-space Wannier representation to the Bloch representation in the reciprocal space. The convergence of our calculation with respect to the sampling mesh density was checked. Similar to the results in n-type materials, the contribution of the phonon drag effect to mobility is negligible at the low carrier concentration, while it has a great impact on the Seebeck coefficient. However, at the high carrier concentration, the phonon drag effect leads to a noticeable increase in the mobility and also has a significant contribution to the Seebeck coefficient enhancement.

## IV Conclusion

In summary, we have studied the phonon drag effect on electrical transport properties in wide bandgap GaN and AlN by solving the fully coupled electron-phonon BTEs. The electron mobility and the Seebeck coefficient were calculated as a function of carrier concentration ranging from \(10^{15}\) cm\({}^{-3}\) to \(10^{19}\) cm\({}^{-3}\). We found that, even at room temperature, the phonon drag effect is prominent in both the mobility and the Seebeck coefficient. Significant enhancements of both the carrier mobility and the Seebeck coefficient were observed especially at high carrier concentrations.
The strong electron-phonon coupling enhances the momentum transfer between electrons and phonons, and the weak anharmonic phonon-phonon scattering is beneficial for the significant enhancement of the Seebeck coefficient. Our findings highlight the importance of including the phonon drag effect in accurately predicting the electrical transport properties not only in wide bandgap group-III nitrides but also in strongly polar semiconductors in general.

\begin{table} \begin{tabular}{c c c c} & Carrier Concentration (cm\({}^{-3}\)) & Drag Contribution to Mobility (\%) & Drag Contribution to Seebeck Coefficient (\%) \\ \hline GaN & \(10^{16}\) & 0.3 & 90.1 \\ & \(10^{19}\) & 25.7 & 82.7 \\ AlN & \(10^{16}\) & 1.0 & 94.0 \\ & \(10^{19}\) & 27.3 & 80.1 \\ \end{tabular} \end{table} Table 2: Phonon drag contribution to the electron mobility and the Seebeck coefficient of p-type GaN and AlN.

###### Acknowledgements.

We are grateful to Dr. Nakib H. Protik for assistance with the Elphbolt code. This work is based on research supported by the U.S. Air Force Office of Scientific Research under award number FA9550-22-1-0468 and the National Science Foundation (NSF) under award number CBET-1846927. Y.Q. and Y.C. also acknowledge the support from the Graduate Traineeship Program of the NSF Quantum Foundry via the Q-AMASE-i program under award number DMR-1906325 at the University of California, Santa Barbara (UCSB). This work used Stampede2 at Texas Advanced Computing Center (TACC) and Expanse at San Diego Supercomputer Center (SDSC) through allocation MAT200011 from the Advanced Cyber-infrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants 2138259, 2138286, 2138307, 2137603, and 2138296. Use was also made of computational facilities purchased with funds from the National Science Foundation (award number CNS-1725797) and administered by the Center for Scientific Computing (CSC) at UCSB. The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR-1720256) at UCSB.
2309.03490
Lipschitz Transport Maps via the Follmer Flow
Inspired by the construction of the Föllmer process, we construct a unit-time flow on the Euclidean space, termed the Föllmer flow, whose flow map at time 1 pushes forward a standard Gaussian measure onto a general target measure. We study the well-posedness of the Föllmer flow and establish the Lipschitz property of the flow map at time 1. We apply the Lipschitz mapping to several rich classes of probability measures to derive dimension-free functional inequalities and concentration inequalities for the empirical measure.
Yin Dai, Yuan Gao, Jian Huang, Yuling Jiao, Lican Kang, Jin Liu
2023-09-07T05:59:55Z
http://arxiv.org/abs/2309.03490v1
# Lipschitz transport maps via the Follmer flow ###### Abstract. Inspired by the construction of the Follmer process [42], we construct a unit-time flow on the Euclidean space, termed the Follmer flow, whose flow map at time 1 pushes forward a standard Gaussian measure onto a general target measure. We study the well-posedness of the Follmer flow and establish the Lipschitz property of the flow map at time 1. We apply the Lipschitz mapping to several rich classes of probability measures on deriving dimension-free functional inequalities and concentration inequalities for the empirical measure. Key words and phrases:Lipschitz transport maps, functional inequalities, empirical measures, Gaussian mixtures 1991 Mathematics Subject Classification: _Key words and phrases_. Lipschitz transport maps, functional inequalities, empirical measures, Gaussian mixtures 2010 Mathematics Subject Classification: _Key words and phrases_. In this work, we construct a flow over the unit time interval on the Euclidean space, named the Follmer flow as in Definition 2.3 and Theorem 2.4. Our construction is greatly enlightened by Follmer's derivation of the Follmer process. Then we define and analyze a new transport map, along the Follmer flow, which pushes forward the standard Gaussian measure to a general measure satisfying mild regularity assumptions (see Assumptions 1, 2 and 3). The well-posedness of the Follmer flow and the Lipschitz property of its flow map at time \(1\) are rigorously investigated under these regularity assumptions. By virtue of the Lipschitz changes of variables principle, we prove dimension-free \(\Psi\)-Sobolev inequalities, isoperimetric inequalities, \(q\)-Poincare inequalities and sharp non-asymptotic concentration bounds for the empirical measure. Furthermore, we shall emphasize that both the Follmer flow and its flow map possess much computational flexibility in terms of the analytic expression of its velocity field, which we believe may be of independent interest, to develop sampling algorithms and generative models with theoretical guarantees. ### Related work The work is notably relevant to the Brownian transport map built upon the Follmer process [65] and the transport map defined via the reverse heat flow [69, 48, 67, 66]. The Brownian transport map, acquired from a strong solution of the Follmer process, pushes forward the Wiener measure onto probability measures on the Euclidean space. The infinite-dimensional nature of the Brownian transport map is quite different from that of the Follmer flow which is defined on the finite-dimensional Euclidean space. To produce the randomness within the target measure, the Brownian transport map leverages the randomness of the path while the Follmer flow makes use of the randomness delivered by the source measure. Meanwhile, [66] studies a transport map along the reverse heat flow from the standard Gaussian measure to a target measure, constructed by [69] and [48], as well as its Lipschitz property. The transport map investigated in the work shares a similar Lipschitz property with the transport map associated with the reverse heat flow. Nonetheless, the transport map investigated by [48] and [66] is deduced via a limiting argument, thus has no explicit expression. Under Assumptions 1, 2 and 3, our considered transport map could be expressed as the flow map of the well-posed Follmer flow at time \(t=1\) in a simple and explicit form. 
Towards connections between the flows, the Follmer flow over time interval \([0,1)\) (without time \(1\)) is equivalent to the reverse heat flow through a deterministic change of time, as revealed in Lemmas D.1 and D.2. Technically, the equivalence cannot ensure well-posedness of the Follmer flow at time \(1\). We explicitly extend the flow to time \(1\) by deriving a uniform lower bound on the Jacobian matrix of the velocity field via the Cramer-Rao inequality. Additionally, it is worth mentioning that [2] and [1] introduce a unit-time normalizing flow, relevant to the Follmer flow, from the perspective of stochastic interpolation between the Gaussian measure and a target measure. Nonetheless, the well-posedness of this normalizing flow is not studied in the scope of their work. ### Notations For any integer \(d\geq 1\), the Borel \(\sigma\)-algebra of \(\mathbb{R}^{d}\) is denoted by \(\mathcal{B}(\mathbb{R}^{d})\). For \(x,y\in\mathbb{R}^{d}\), define \(\langle x,y\rangle:=\sum_{i=1}^{d}x_{i}y_{i}\) and the Euclidean norm \(|x|:=\langle x,x\rangle^{1/2}\). Denote by \(\mathbb{S}^{d-1}:=\{x\in\mathbb{R}^{d}:|x|=1\}\). The operator norm of a matrix \(M\in\mathbb{R}^{m\times n}\) is denoted by \(\|M\|_{\mathrm{op}}:=\sup_{x\in\mathbb{R}^{n},|x|=1}|Mx|\) and \(M^{\top}\) is the transpose of \(M\). Let \(f:\mathbb{R}^{d}\to\mathbb{R}\) be a twice continuously differentiable function. Denote by \(\nabla f,\nabla^{2}f\) and \(\Delta f\) the gradient of \(f\), the Hessian of \(f\) and the Laplacian of \(f\), respectively. Let \(\gamma_{d}\) denote the standard Gaussian measure on \(\mathbb{R}^{d}\), i.e., \(\gamma_{d}(\mathrm{d}x):=(2\pi)^{-d/2}\exp(-|x|^{2}/2)\mathrm{d}x\). Let \(N(0,\mathbf{I}_{d})\) stand for a \(d\)-dimensional Gaussian random variable with mean \(0\) and covariance \(\mathbf{I}_{d}\) being the \(d\times d\) identity matrix. Moreover, we use \(\phi(x)\) to denote its probability density function with respect to the Lebesgue measure. The set of probability measures defined on a measurable space \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\) is denoted as \(\mathcal{P}(\mathbb{R}^{d})\). For any \(\mathbb{R}^{d}\)-valued random vector, \(\mathbb{E}[X]\) is used to denote its expectation. We say that \(\Pi\) is a transference plan of \(\mu\) and \(\nu\) if it is a probability measure on \((\mathbb{R}^{d}\times\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d})\times\mathcal{ B}(\mathbb{R}^{d}))\) such that for any Borel set \(A\) of \(\mathbb{R}^{d}\), \(\Pi(A\times\mathbb{R}^{d})=\mu(A)\) and \(\Pi(\mathbb{R}^{d}\times A)=\nu(A)\). We denote \(\mathcal{C}(\mu,\nu)\) the set of transference plans of \(\mu\) and \(\nu\). Furthermore, we say that a couple of \(\mathbb{R}^{d}\)-valued random variables \((X,Y)\) is a coupling of \(\mu\) and \(\nu\) if there exists \(\Pi\in\mathcal{C}(\mu,\nu)\) such that \((X,Y)\) is distributed according to \(\Pi\). For two probability measures \(\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})\), the Wasserstein distance of order \(p\geq 1\) is defined as \[W_{p}(\mu,\nu):=\inf_{\Pi\in\mathcal{C}(\mu,\nu)}\left(\int_{\mathbb{R}^{d} \times\mathbb{R}^{d}}|x-y|^{p}\,\Pi(\mathrm{d}x,\mathrm{d}y)\right)^{1/p}.\] Let \(\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})\). The relative entropy of \(\nu\) with respect to \(\mu\) is defined by \[H(\nu\mid\mu)=\begin{cases}\int_{\mathbb{R}^{d}}\log\left(\frac{\mathrm{d}\nu}{ \mathrm{d}\mu}\right)\nu(\mathrm{d}x),&\text{if $\nu\ll\mu$,}\\ +\infty,&\text{otherwise.}\end{cases}\] ## 2. 
Main results We first present two definitions to characterize convexity properties of probability measures and some useful notations. **Definition 2.1** ([18, 65]).: _A probability measure \(\mu(\mathrm{d}x)=\exp(-U(x))\mathrm{d}x\) is \(\kappa\)-semi-log-concave for some \(\kappa\in\mathbb{R}\) if its support \(\Omega\subseteq\mathbb{R}^{d}\) is convex and \(U\in C^{2}(\Omega)\) satisfies_ \[\nabla^{2}U(x)\succeq\kappa\mathbf{I}_{d},\quad\forall x\in\Omega.\] **Definition 2.2** ([38]).: _A probability measure \(\mu(\mathrm{d}x)=\exp(-U(x))\mathrm{d}x\) is \(\beta\)-semi-log-convex for some \(\beta>0\) if its support \(\Omega\subseteq\mathbb{R}^{d}\) is convex and \(U\in C^{2}(\Omega)\) satisfies_ \[\nabla^{2}U(x)\preceq\beta\mathbf{I}_{d},\quad\forall x\in\Omega.\] Let \(\nu(\mathrm{d}x)=p(x)\mathrm{d}x\) be a probability measure on \(\mathbb{R}^{d}\) and define an operator \((\mathbb{Q}_{t})_{t\in[0,1]}\), acting on function \(f:\mathbb{R}^{d}\to\mathbb{R}\) by \[\mathbb{Q}_{1-t}f(x):=\int_{\mathbb{R}^{d}}\varphi^{tx,1-t^{2}}(y)f(y)\mathrm{ d}y=\int_{\mathbb{R}^{d}}f\left(tx+\sqrt{1-t^{2}}z\right)\mathrm{d}\gamma_{d}(z)\] where \(\varphi^{tx,1-t^{2}}(y)\) is the density of the \(d\)-dimensional Gaussian measure with mean \(tx\) and covariance \((1-t^{2})\mathbf{I}_{d}\). Our first result is that we construct a flow over the unit time interval, named the Follmer flow, that pushes forward a standard Gaussian measure \(\gamma_{d}\) to a general target measure \(\nu\) at time \(t=1\). Before rigorously defining the Follmer flow, let us specify several regularity assumptions that would ensure well-definedness and well-posedness of the Follmer flow. **Assumption 1**.: _The probability measure \(\nu\) has a finite third moment and is absolutely continuous with respect to the standard Gaussian measure \(\gamma_{d}\)._ **Assumption 2**.: _The probability measure \(\nu\) is \(\beta\)-semi-log-convex for some \(\beta>0\)._ **Assumption 3**.: _Let \(D:=(1/\sqrt{2})\mathrm{diam}(\mathrm{supp}(\nu))\). The probability measure \(\nu\) satisfies one or more of the following assumptions:_ 1. \(\nu\) _is_ \(\kappa\)_-semi-log-concave for some_ \(\kappa>0\) _with_ \(D\in(0,\infty]\)_;_ 2. \(\nu\) _is_ \(\kappa\)_-semi-log-concave for some_ \(\kappa\leq 0\) _with_ \(D\in(0,\infty)\)_;_ 3. \(\nu=N(0,\sigma^{2}\mathbf{I}_{d})*\rho\) _where_ \(\rho\) _is a probability measure supported on a ball of radius_ \(R\) _on_ \(\mathbb{R}^{d}\)_._ Let us move to a formal definition of the Follmer flow and the exhibition of its well-posedness. A complete exposition would be found in Section 3. **Definition 2.3**.: _Suppose that probability measure \(\nu\) satisfies Assumption 1. If \((X_{t})_{t\in[0,1]}\) solves the initial value problem (IVP)_ \[\frac{\mathrm{d}X_{t}}{\mathrm{d}t}=V(t,X_{t}),\quad X_{0}\sim\gamma_{d}, \quad t\in[0,1] \tag{1}\] _where the velocity field \(V\) is defined by_ \[V(t,x):=\frac{\nabla\log\mathbb{Q}_{1-t}r(x)}{t},\qquad\forall t\in(0,1] \tag{2}\] _with \(V(0,x):=\mathbb{E}_{\nu}[X],r(x):=\frac{\mathrm{d}\nu}{\mathrm{d}\gamma_{d}}(x)\). We call \((X_{t})_{t\in[0,1]}\) a Follmer flow and \(V(t,x)\) a Follmer velocity field associated to \(\nu\)._ **Theorem 2.4** (Well-posedness).: _Suppose that Assumptions 1, 2 and 3 hold. Then the Follmer flow \((X_{t})_{t\in[0,1]}\) associated to \(\nu\) is a unique solution to the IVP (1). 
Moreover, the push-forward measure \(\gamma_{d}\circ(X_{1}^{-1})=\nu\)._ The following results show that the Follmer flow map at time \(t=1\) is Lipschitz when the target measure satisfies either the strong log-concavity assumption or the bounded support assumption. **Theorem 2.5** (Lipschitz mapping).: _Assume that Assumptions 1, 2, 3-(i) or 3-(ii) hold._ 1. _If_ \(\kappa D^{2}\geq 1\)_, then_ \(X_{1}(x)\) _is a Lipschitz mapping with constant_ \(\frac{1}{\sqrt{\kappa}}\)_, i.e.,_ \[\|\nabla X_{1}(x)\|_{\mathrm{op}}\leq\frac{1}{\sqrt{\kappa}},\quad\forall x \in\mathbb{R}^{d}.\] 2. _If_ \(\kappa D^{2}<1\)_, then_ \(X_{1}(x)\) _is a Lipschitz mapping with constant_ \(\exp\left(\frac{1-\kappa D^{2}}{2}\right)D\)_, i.e.,_ \[\|\nabla X_{1}(x)\|_{\mathrm{op}}\leq\exp\left(\frac{1-\kappa D^{2}}{2}\right) D,\quad\forall x\in\mathbb{R}^{d}.\] **Theorem 2.6** (Gaussian mixtures).: _Assume that Assumptions 1, 2 and 3-(iii) hold. Then \(X_{1}(x)\) is a Lipschitz mapping with constant \(\sigma\exp\left(\frac{R^{2}}{2\sigma^{2}}\right)\), i.e.,_ \[\|\nabla X_{1}(x)\|_{\mathrm{op}}\leq\sigma\exp\left(\frac{R^{2}}{2\sigma^{2} }\right),\quad\forall x\in\mathbb{R}^{d}.\] **Remark 2.7**.: _Combining \(\mathrm{Lip}(X_{1}(x))\leq\|\nabla X_{1}(x)\|_{\mathrm{op}}\) and Theorem 2.6, we get_ \[\mathrm{Lip}(X_{1}(x))\leq\sigma\exp\left(\frac{R^{2}}{2\sigma^{2}}\right), \quad\forall x\in\mathbb{R}^{d}. \tag{3}\] _For Gaussian mixtures, the Lipschitz constants of (3) are better than those provided by the Brownian transport map [65, Theorem 1.4] and match those presented in [66]. Meanwhile, the Lipschitz constants of \(X_{1}\) lead to a dimension-free logarithmic Sobolev constant and a dimension-free Poincare constant_ \[C_{\mathrm{LS}}(p)\leq 2\sigma^{2}\exp\left(\frac{R^{2}}{\sigma^{2}}\right), \quad C_{\mathrm{P}}(p)\leq\sigma^{2}\exp\left(\frac{R^{2}}{\sigma^{2}}\right). \tag{4}\] _On the one hand, (4) implies a Gaussian log-Sobolev constant \(2\sigma^{2}\) and a Gaussian Poincare constant \(\sigma^{2}\) as \(R\) goes to zero. In fact, Poincare constant \(\sigma^{2}\) and log-Sobolev constant \(2\sigma^{2}\) are optimal for Gaussian measure \(N(0,\sigma^{2}\mathbf{I}_{d})\) on \(\mathbb{R}^{d}\). On the other hand, the Poincare constant obtained by (4) is obviously smaller than the result in [9, Theorem 1.2]. In fact, the upper bound of Poincare constant for distribution \(p=N(0,\sigma^{2}\mathbf{I}_{d})*\rho\) in [9, Theorem 1.2] is \(\sigma^{2}\exp\left(4R^{2}/\sigma^{2}\right)\). Similarly, the log-Sobolev constant (4) we obtained is slightly better than that in [21, Corollary 1]. Indeed, the upper bound of log-Sobolev constant for distribution \(\nu=N(0,\sigma^{2}\mathbf{I}_{d})*\rho\) in [21, Corollary 1] is \(6(4R^{2}+\sigma^{2})\exp\left(4R^{2}/\sigma^{2}\right)\). Nonetheless, it is worthwhile to remark that [21] considers a rich class of probability measures with the convolutional structure, which leads to general results on dimension-free log-Sobolev and Poincare inequlities._ ## 3. The Follmer flow and its well-posedness Let us present our motivations to derive the Follmer flow. We are largely inspired by the construction of the Follmer process [42, 57, 38, 39], which provides a probabilistic solution to the Schrodinger problem [74, 60], though our construction of the Follmer flow is partially heuristic using a similar time-reversal argument. 
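A minimal one-dimensional sanity check of Theorem 2.6 (an illustration with arbitrarily chosen parameters, not part of the proofs): for the mixture \(\nu=\frac{1}{2}N(-R,\sigma^{2})+\frac{1}{2}N(R,\sigma^{2})\), the flow is integrated with a forward Euler scheme and the steepest finite-difference slope of the resulting map is compared with the constant \(\sigma\exp\left(\frac{R^{2}}{2\sigma^{2}}\right)\). The velocity field is evaluated through the equivalent form \(V(t,x)=(x+\nabla\log q_{t}(x))/t\) derived in Remark 3.8 below, which is available in closed form for Gaussian mixtures; for this symmetric target the slope at the origin essentially attains the constant, so the two printed numbers are expected to nearly coincide.

```python
import numpy as np

# One-dimensional illustration of Theorem 2.6 for the mixture
# nu = 0.5*N(-R, sigma^2) + 0.5*N(R, sigma^2); all parameters are illustrative.
R, sig2 = 1.0, 0.5
mus, ws = np.array([-R, R]), np.array([0.5, 0.5])

def velocity(t, x):
    """Follmer velocity V(t, x) = (x + S(t, x)) / t, using the closed-form score
    of q_t for a Gaussian-mixture target (cf. Remark 3.8)."""
    if t == 0.0:
        return np.full_like(x, ws @ mus)           # V(0, x) = E_nu[X]
    var_t = 1.0 - t**2 + t**2 * sig2               # variance of q_t's components
    diffs = t * mus[None, :] - x[:, None]          # shape (n, 2)
    logr = np.log(ws)[None, :] - 0.5 * diffs**2 / var_t
    r = np.exp(logr - logr.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)              # posterior responsibilities
    S = (r * diffs).sum(axis=1) / var_t            # S(t, x) = d/dx log q_t(x)
    return (x + S) / t

x = np.linspace(-4.0, 4.0, 2001)                   # standard-Gaussian inputs
X = x.copy()
n_steps = 2000
for i in range(n_steps):                           # forward Euler on [0, 1]
    X = X + velocity(i / n_steps, X) / n_steps
lip_est = np.max(np.abs(np.diff(X) / np.diff(x)))  # steepest slope of the map
print("finite-difference slope:", lip_est,
      " bound of Theorem 2.6:", np.sqrt(sig2) * np.exp(R**2 / (2 * sig2)))
```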
### The Follmer process In Follmer's lecture notes at the Ecole d'Ete de Probabilites de Saint-Flour in 1986 [42], the Follmer process is constructed with time reversal of a linear Ito SDE under a finite relative entropy condition, which rigorously determines a Schrodinger bridge from a source Dirac measure \(\delta_{0}\) to a general target measure \(\nu\). Let us briefly revisit Follmer's arguments to derive such a process. **Definition 3.1** ([42]).: _A diffusion process \(\overline{P}:=\left(\overline{X}_{t}\right)_{t\in[0,1]}\) starting with marginal distribution \(\nu\) at time \(t=0\) and reaching \(0\) at time \(t=1\) is defined by the following Ito SDE_ \[\mathrm{d}\overline{X}_{t}=-\frac{1}{1-t}\overline{X}_{t}\mathrm{d}t+ \mathrm{d}\overline{W}_{t},\ \overline{X}_{0}\sim\nu,\ t\in[0,1) \tag{5}\] _with an extended solution at time \(t=1\), i.e., \(\overline{X}_{1}\sim\delta_{0}\). The transition probability distribution of (5) from \(\overline{X}_{0}\) to \(\overline{X}_{t}\) is given by \(\overline{X}_{t}|\overline{X}_{0}\sim N((1-t)\overline{X}_{0},t(1-t)\mathbf{I} _{d})\) for every \(0\leq t<1\)._ **Lemma 3.2** ([41]).: _Suppose that the diffusion process \(Q\) has finite relative entropy with respect to a standard Wiener process \(W_{t}\) over the unit time interval, i.e., \(t\in[0,1]\). Then for almost all \(t\in[0,1]\), the logarithmic derivative of marginal density \(\rho_{t}\) of \(Q\) satisfies the duality equation \(\nabla\log\rho_{t}(x)=b(x,t)+\overline{b}(x,1-t)\) for almost all \(x\in\mathbb{R}^{d}\), where \(b(x,t)\) and \(\overline{b}(x,t)\) are drifts of diffusion process \(Q\) and its time-reversed diffusion process \(\overline{Q}\), respectively._ **Definition 3.3** ([42, 57]).: _Follmer process \(P=(X_{t})_{t\in[0,1]}\) is defined by the Ito SDE_ \[\mathrm{d}X_{t}=\nabla\log\mathcal{P}_{1-t}r(X_{t})\mathrm{d}t+\mathrm{d}W_{t },\;X_{0}=0,\;t\in[0,1] \tag{6}\] _where \(W_{t}\) is a standard Wiener process and \(\mathcal{P}_{t}\) is the heat semigroup defined by \(\mathcal{P}_{t}h(x):=\mathbb{E}\left[h(x+W_{t})\right]\). Moreover, the drift \(\nabla\log\mathcal{P}_{1-t}r(X_{t})\) is called the Follmer drift._ **Remark 3.4**.: _According to Lemma 3.2, the Follmer process \(P\) can be obtained by taking the time reversal of the diffusion process \(\overline{P}\) over \(t\in[0,1]\). It implies that the Follmer drift has an alternative representation, i.e., for any \(t\in(0,1]\), \(\nabla\log\mathcal{P}_{1-t}r(X_{t})=X_{t}/t+\nabla\log p_{t}(X_{t})\), where \(p_{t}\) is the marginal density of the Follmer process \(P\)._ ### The Follmer flow via time reversal Since \(\delta_{0}\) is a degenerate distribution in the sense that its mass is concentrated at \(0\), we consider constructing a diffusion process that starts with a marginal distribution \(\nu\) and would be able to keep the nonzero variance of its marginal distribution at time \(t=1\). Let us present the constructed diffusion process first. For any \(\varepsilon\in(0,1)\), we consider a diffusion process \(\big{(}\overline{X}_{t}\big{)}_{t\in[0,1-\varepsilon]}\) defined by the following Ito SDE \[\mathrm{d}\overline{X}_{t}=-\frac{1}{1-t}\overline{X}_{t}\mathrm{d}t+\sqrt{ \frac{2}{1-t}}\mathrm{d}\overline{W}_{t},\quad\overline{X}_{0}\sim\nu \tag{7}\] for all \(t\in[0,1-\varepsilon]\). By Theorem 2.1 in [72, Chapter IX], the diffusion process \(\overline{X}_{t}\) defined in (7) has a unique strong solution on \([0,1-\varepsilon]\). 
Moreover, the transition probability distribution of (7) from \(\overline{X}_{0}\) to \(\overline{X}_{t}\) is given by \(\overline{X}_{t}|\overline{X}_{0}=x_{0}\sim N((1-t)x_{0},\;t(2-t)\mathbf{I}_{ d})\) for every \(t\in[0,1-\varepsilon]\). It is a straightforward observation that the variance of \(\overline{X}_{1-\varepsilon}|\overline{X}_{0}\) for SDE (7) will approach the identity matrix \(\mathbf{I}_{d}\) when \(\varepsilon\) is small enough. That is why we could expect the marginal distribution of \(\overline{X}_{1-\varepsilon}\) would have a nonzero variance. In contrast, for the time-reversed Follmer process (5), the variance of \(\overline{X}_{1-\varepsilon}|\overline{X}_{0}\) will approach constant \(0\) as \(\varepsilon\to 0\), which indicates the variance of its marginal distribution vanishes at time \(t=1\). However, SDE (7) is not well-defined at time \(t=1\) due to unbounded drift and diffusion coefficients. Then we leverage the fact that the marginal distribution \(\overline{\mu}_{t}\) of the diffusion process \(\overline{X}_{t}\) defined in (7) has been determined in the sense that \(\overline{X}_{t}\overset{d}{=}(1-t)X+\sqrt{t(2-t)}Y\) with \(X\sim\nu,Y\sim\gamma_{d}\), and concentrate on an ODEs system sharing the same marginal distribution flow with SDE (7) in order to circumvent the singularity of SDE (7) at time \(t=1\). Note that the marginal distribution flow \((\overline{\mu}_{t})_{t\in[0,1-\varepsilon]}\) of the diffusion process (7) satisfies the Fokker-Planck-Kolmogorov equation in an Eulerian framework [12] \[\partial_{t}\overline{\mu}_{t}=\nabla\cdot(\overline{\mu}_{t}V(1-t,x))\quad \text{on }[0,1-\varepsilon]\times\mathbb{R}^{d},\;\overline{\mu}_{0}=\nu \tag{8}\] in the sense that \(\overline{\mu}_{t}\) is continuous in \(t\) under the weak topology, i.e., \[\overline{\mu}_{t}(f):=\int_{\mathbb{R}^{d}}f(x)\mu_{t}(\mathrm{d}x)=\nu(f)- \int_{0}^{t}\overline{\mu}_{s}\left(\langle V(1-s,\cdot),\nabla f\rangle \right)\mathrm{d}s\] for all \(f\in C_{0}^{\infty}(\mathbb{R}^{d})\) and the velocity field is given by \[V(1-t,x):=\frac{1}{1-t}\left[x+S(1-t,x)\right],\quad t\in[0,1-\varepsilon] \tag{9}\] and \[S(t,x):=\nabla\log\int_{\mathbb{R}^{d}}(2\pi(1-t^{2}))^{-\frac{d}{2}}\exp \left(-\frac{|x-ty|^{2}}{2(1-t^{2})}\right)p(y)\mathrm{d}y\] for all \(t\in[\varepsilon,1]\). Due to the classical Cauchy-Lipschitz theory [4, Section 2] with a Lipschitz velocity field or the well-established Ambrosio-DiPerna-Lions theory with lower Sobolev regularity assumptions on the velocity field [35, 3], we shall define a flow \((X_{t}^{*})_{t\in[0,1-\varepsilon]}\) in a Lagrangian formulation via the following ODEs system \[\mathrm{d}X_{t}^{*}=-V\left(1-t,X_{t}^{*}\right)\mathrm{d}t,\quad X_{0}^{*} \sim\nu,\quad t\in[0,1-\varepsilon]. \tag{10}\] **Proposition 3.5**.: _Assume the velocity field \(V(t,x)\) satisfies \(V\in L^{1}([\varepsilon,1];W^{1,\infty}_{\mathrm{loc}}(\mathbb{R}^{d};\mathbb{R} ^{d}))\) and \(|V|/(1+|x|)\in L^{1}([\varepsilon,1];L^{\infty}(\mathbb{R}^{d}))\). Then the push-forward measure associated with the flow map \(X_{t}^{*}\) satisfies \(X_{t}^{*}\stackrel{{ d}}{{=}}(1-t)X+\sqrt{t(2-t)}Y\) with \(X\sim\nu,Y\sim\gamma_{d}\). 
Moreover, the push-forward measure \(\nu\circ(X_{1-\varepsilon}^{*})^{-1}\) converges to the Gaussian measure \(\gamma_{d}\) in the sense of Wasserstein-2 distance as \(\varepsilon\) tends to zero, i.e., \(W_{2}(\nu\circ(X_{1-\varepsilon}^{*})^{-1},\gamma_{d})\to 0\)._ **Remark 3.6**.: _Suppose that the target measure \(\nu\) has a finite third moment. By Lemma A.1, we can supplement the definition of velocity field \(V(1-t,x)\) at time \(t=1\), i.e.,_ \[V(0,x):=\lim_{t\downarrow 0}V(t,x)=\lim_{t\downarrow 0}\frac{x+S(t,x)}{t}= \mathbb{E}_{\nu}[X].\] _Then we extend the flow \((X_{t}^{*})_{t\in[0,1)}\) to time \(t=1\) such that \(X_{1}^{*}\sim\gamma_{d}\), which solves the IVP_ \[\mathrm{d}X_{t}^{*}=-V\left(1-t,X_{t}^{*}\right)\mathrm{d}t,\quad X_{0}^{*} \sim\nu,\quad t\in[0,1], \tag{11}\] _where the velocity field_ \[V(1-t,x)=\frac{1}{1-t}\left[x+S(1-t,x)\right],\quad\forall t\in[0,1)\] _and \(V(0,x)=\mathbb{E}_{\nu}[X]\)._ In order to exploit a time-reversal argument inspired by Follmer, it remains crucial to establish the well-posedness of a flow \((X_{t}^{*})_{t\in[0,1]}\) that solves the IVP (11). We proceed to study regularity properties of the velocity field \(V\) on \([0,1]\times\mathbb{R}^{d}\) by imposing structural assumptions on the target measure \(\nu\). By Theorem B.3, we know that there exists \(0\leq\theta_{t}^{\star}<\infty\) such that \[\|-\nabla V(t,x)\|_{\mathrm{op}}=\|\nabla V(t,x)\|_{\mathrm{op}}\leq\theta_{ t}^{\star} \tag{12}\] for any \(t\in[0,1]\). Furthermore, the velocity field \(-V(1-t,x)\) is smooth and with the bounded derivative for any \(t\in[0,1]\) and \(x\in\mathbb{R}^{d}\). Therefore, the IVP (11) has a unique solution and the flow map \(x\mapsto X_{t}^{*}(x)\) is a diffeomorphism from \(\mathbb{R}^{d}\) onto \(\mathbb{R}^{d}\) at any time \(t\in[0,1]\). A standard time-reversal argument of ODE would yield a formal definition of the Follmer flow. **Definition 3.7**.: _Suppose that probability measure \(\nu\) satisfies Assumption 1. If \((X_{t})_{t\in[0,1]}\) solves the IVP_ \[\frac{\mathrm{d}X_{t}}{\mathrm{d}t}=V(t,X_{t}),\quad X_{0}\sim\gamma_{d}, \quad t\in[0,1] \tag{13}\] _where the velocity field_ \[V(t,x)=\frac{1}{t}\left[x+S(t,x)\right],\quad\forall t\in(0,1],\quad V(0,x)= \mathbb{E}_{\nu}[X],\] _we call \((X_{t})_{t\in[0,1]}\) a Follmer flow and \(V(t,x)\) a Follmer velocity field associated to \(\nu\)._ **Remark 3.8**.: _Notice that_ \[\mathcal{Q}_{1-t}r(x)=(2\pi)^{d/2}\exp\left(\frac{|x|^{2}}{2}\right)\frac{1}{ (2\pi(1-t^{2}))^{d/2}}\int_{\mathbb{R}^{d}}p(y)\exp\left(-\frac{|x-ty|^{2}}{2( 1-t^{2})}\right)\mathrm{d}y\] _where \(\mathcal{Q}_{1-t}r(x)\) is defined in (22). We further obtain \(\nabla\log\mathcal{Q}_{1-t}r(x)=x+S(t,x),\ \forall t\in[0,1]\). Therefore, we have that (1) and (13) are equivalent, which satisfy \(X_{0}\sim\gamma_{d}\) and \(X_{1}\sim\nu\)._ Finally, let us conclude with the well-posedness properties of the Follmer flow, which is presented in Theorem 2.4 and summarized below. **Theorem 3.9** (Well-posedness).: _Suppose that Assumptions 1, 2 and 3 hold. Then the Follmer flow \((X_{t})_{t\in[0,1]}\) associated to \(\nu\) is a unique solution to the IVP (13). Moreover, the push-forward measure \(\gamma_{d}\circ(X_{1}^{-1})=\nu\)._ ## 4. Applications Owing to the Lipschitz transport properties proved in Theorems 2.5 and 2.6, we are motivated to establish a variety of functional inequalities and concentration inequalities for several classes of probability measures on Euclidean space. 
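As a concrete numerical illustration of Definition 3.7 and Theorem 3.9 (a sketch, not an algorithm proposed here), the code below pushes standard Gaussian samples through the IVP (13) with a forward Euler scheme for a two-component Gaussian-mixture target, the setting of Assumption 3-(iii) with \(\rho\) discrete. The mixture parameters, sample size, and number of Euler steps are illustrative choices, and the score \(S(t,x)=\nabla\log q_{t}(x)\) is evaluated in the closed form it takes for Gaussian mixtures.

```python
import numpy as np
from scipy.special import logsumexp

# Target: nu = sum_k w_k N(mu_k, sigma^2 I). All numbers are illustrative.
mus = np.array([[-2.0, 0.0], [2.0, 1.0]])    # component means (hypothetical)
ws  = np.array([0.3, 0.7])                   # mixture weights
sig2 = 0.5                                   # component variance sigma^2

def follmer_velocity(t, X):
    """V(t, x) = (x + S(t, x)) / t with S(t, x) = grad log q_t(x); here q_t is
    the Gaussian mixture with means t*mu_k and variance 1 - t^2 + t^2*sigma^2."""
    if t == 0.0:
        return np.tile(ws @ mus, (X.shape[0], 1))    # V(0, x) = E_nu[X]
    var_t = 1.0 - t**2 + t**2 * sig2
    diffs = t * mus[None, :, :] - X[:, None, :]       # shape (n, K, d)
    logr = np.log(ws)[None, :] - 0.5 * np.sum(diffs**2, axis=-1) / var_t
    r = np.exp(logr - logsumexp(logr, axis=1, keepdims=True))  # responsibilities
    S = np.einsum('nk,nkd->nd', r, diffs) / var_t
    return (X + S) / t

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))           # X_0 ~ gamma_d
n_steps = 1000
for i in range(n_steps):                     # forward Euler on [0, 1]
    X = X + follmer_velocity(i / n_steps, X) / n_steps
print("sample mean:", X.mean(axis=0), " target mean:", ws @ mus)
```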
### Dimension-free inequalities In this subsection, we provide dimension-free results for the \(\Psi\)-Sobolev inequalities. For completeness, we incorporate classical results for strongly-log-concave measures (\(\kappa>0\)), which have been studied with the optimal transport maps [17]. Compared with [65], we obtain that the upper bound constants of \(\Psi\)-Sobolev inequalities, Isoperimetric inequalities and \(q\)-Poincare inequalities are the same for \(\kappa D^{2}\geq 1\). When \(\kappa D^{2}<1\), our upper bound constants for \(\Psi\)-Sobolev inequalities and Isoperimetric inequalities are of the same order as the results of Lemmas 5.3-5.5 in [65]. For the Gaussian mixtures case, we obtain that the constants of these inequalities are slightly better than the result of Lemmas 5.3-5.5 in [65]. **Definition 4.1**.: _Let \(\mathcal{I}\) be a closed interval (not necessarily bounded) and let \(\Psi:\mathcal{I}\to\mathbb{R}\) be a twice differentiable function. We say that \(\Psi\) is a divergence if each of the functions \(\Psi,\Psi^{\prime\prime}\) and \(-1/\Psi^{\prime\prime}\) is a convex function. Given a probability measure \(\nu(\mathrm{d}x)=p(x)\mathrm{d}x\) on \(\mathbb{R}^{d}\) and a function \(\zeta:\mathbb{R}^{d}\to\mathcal{I}\) such that \(\int_{\mathbb{R}^{d}}\zeta(x)p(x)\mathrm{d}x\in\mathcal{I}\), we define_ \[\mathrm{Ent}_{p}^{\Psi}(\zeta):=\int_{\mathbb{R}^{d}}\Psi(\zeta(x))p(x)\mathrm{d}x-\Psi\left(\int_{\mathbb{R}^{d}}\zeta(x)p(x)\mathrm{d}x\right).\] Some examples of the divergences are \(\Psi:\mathbb{R}\to\mathbb{R}\) with \(\Psi(x)=x^{2}\) (Poincare inequality) and \(\Psi:\mathbb{R}_{+}\to\mathbb{R}\) with \(\Psi(x)=x\log x\) (log-Sobolev inequality). **Theorem 4.2** (\(\Psi\)-Sobolev inequalities).: _Let Assumptions 1, 2 and 3 hold._ 1. _Let_ \(\zeta:\mathbb{R}^{d}\to\mathcal{I}\) _be any continuously differentiable function such that_ \(\int_{\mathbb{R}^{d}}\zeta^{2}(x)p(x)\mathrm{d}x\in\mathcal{I}\)_._ 1. _If_ \(\kappa D^{2}\geq 1\)_, then_ \[\mathrm{Ent}_{p}^{\Psi}(\zeta)\leq\frac{1}{2\kappa}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta(x))|\nabla\zeta(x)|^{2}p(x)\mathrm{d}x.\] 2. _If_ \(\kappa D^{2}<1\)_, then_ \[\mathrm{Ent}_{p}^{\Psi}(\zeta)\leq\frac{\exp(1-\kappa D^{2})}{2}D^{2}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta(x))|\nabla\zeta(x)|^{2}p(x)\mathrm{d}x.\] 2. _Fix a probability measure_ \(\rho\) _on_ \(\mathbb{R}^{d}\) _supported on a ball of radius_ \(R\) _and let_ \(p:=N(a,\Sigma)*\rho\) _and denote_ \(\lambda_{\min}:=\lambda_{\min}(\Sigma)\) _and_ \(\lambda_{\max}=\lambda_{\max}(\Sigma)\)_. Then for any continuously differentiable function_ \(\zeta:\mathbb{R}^{d}\to\mathcal{I}\) _such that_ \(\int_{\mathbb{R}^{d}}\zeta^{2}(x)p(x)\mathrm{d}x\in\mathcal{I}\)_, we have_ \[\mathrm{Ent}_{p}^{\Psi}(\zeta)\leq\frac{1}{2}\lambda_{\max}\exp\left(\frac{R^{2}}{\lambda_{\min}}\right)\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta(x))|\nabla\zeta(x)|^{2}p(x)\mathrm{d}x.\] **Theorem 4.3** (Isoperimetric inequalities).: _Assume that Assumptions 1, 2 and 3 hold. Let \(\Phi\) be the cumulative distribution function of \(\gamma_{1}\) on \(\mathbb{R}\), that is,_ \[\Phi(x)=\gamma_{1}(-\infty,x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}\exp\left(-\frac{y^{2}}{2}\right)\mathrm{d}y,\quad-\infty<\forall x<+\infty\] _and \(B_{2}^{d}:=\{x\in\mathbb{R}^{d}:|x|\leq 1\}\) be the unit ball in \(\mathbb{R}^{d}\)._ 1.
_Let_ \(A_{t}:=A+tB_{2}^{d}\) _for any Borel set_ \(A\subseteq\mathbb{R}^{d}\) _and_ \(t\geq 0\)_, then_ \[p\left(A_{t}\right)\geq\Phi\left(p(A)+\frac{t}{C}\right),\quad C:=\begin{cases}1/ \sqrt{\kappa},&\text{if }\kappa D^{2}\geq 1,\\ \exp\left(\frac{1-\kappa D^{2}}{2}\right)D,&\text{if }\kappa D^{2}<1.\end{cases}\] 2. _Let_ \(p:=N(a,\Sigma)*\rho\) _where_ \(\rho\) _is a probability measure on_ \(\mathbb{R}^{d}\) _and is supported on a ball of radius_ \(R\)_. Set_ \(\lambda_{\min}:=\lambda_{\min}(\Sigma),\lambda_{\max}:=\lambda_{\max}(\Sigma)\) _and_ \[C:=(\lambda_{\min}\lambda_{\max})^{1/2}\exp\left(\frac{R^{2}}{2\lambda_{\min}} \right).\] _Then_ \[p(A_{t})\geq\Phi\left(p(A)+\frac{t}{C}\right),\qquad A_{t}:=A+tB_{2}^{d}.\] Finally, let \(\eta:\mathbb{R}^{d}\to\mathbb{R}\) be any continuously differentiable function such that \(\int_{\mathbb{R}^{d}}\eta(x)p(x)\,\mathrm{d}x=0\). **Theorem 4.4** (\(q\)-Poincare inequalities).: _Suppose that Assumptions 1, 2 and 3 hold._ 1. _Let_ \(q\geq 2\) _be an even integer and_ \(\eta\in L^{q}(\gamma_{d})\)_, then it holds that_ \[\int_{\mathbb{R}^{d}}\eta^{q}(x)p(x)\,\mathrm{d}x\leq\left(\int_{\mathbb{R}^{d}} |\nabla\eta(x)|^{q}p(x)\,\mathrm{d}x\right)\begin{cases}C_{1}^{\star}&\text{if }\kappa D^{2}\geq 1,\\ C_{2}^{\star}&\text{if }\kappa D^{2}<1.\end{cases}\] _where_ \[C_{1}^{\star}:=\left(\frac{q-1}{\kappa}\right)^{q/2},\quad C_{2}^{\star}:=D^{q }\exp\left(\frac{q(1-\kappa D^{2})}{2}\right).\] 2. _Fix a probability measure_ \(\rho\) _on_ \(\mathbb{R}^{d}\) _supported on a ball of radius_ \(R\)_, and let_ \(p:=N(a,\Sigma)*\rho\) _and denote_ \(\lambda_{\min}:=\lambda_{\min}(\Sigma)\) _and_ \(\lambda_{\max}:=\lambda_{\max}(\Sigma)\)_. Then for any_ \(\eta\in L^{q}(\gamma_{d})\) _with even integer_ \(q\geq 2\)_, it holds that_ \[\int_{\mathbb{R}^{d}}\eta^{q}(x)p(x)\,\mathrm{d}x\leq(q-1)^{\frac{q}{2}}( \lambda_{\min}\lambda_{\max})^{\frac{q}{2}}\exp\left(\frac{qR^{2}}{2\lambda_{ \min}}\right)\int_{\mathbb{R}^{d}}|\nabla\eta(x)|^{q}p(x)\,\mathrm{d}x.\] ### Non-asymptotic bounds for empirical measures Let \(\mu\) be a probability distribution on \(\mathbb{R}^{d}\) and \[\mu_{n}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}, \tag{14}\] be the empirical measure, where \((X_{i})_{i=1}^{n}\) are i.i.d. samples drawn from \(\mu\). Deriving the non-asymptotic convergence rate under the Wasserstein distance of the empirical measure \(\mu_{n}\) and the probability measure \(\mu\) on Polish space is one of the most important topics in statistics, probability, and machine learning. In recent years, significant progress has been made on this topic. When \(p=1\), the Kantorovich-Rubinstein duality [47] implies that \(W_{1}(\mu_{n},\mu)\) is equivalent to the supremum of the empirical process indexed by Lipschitz functions. As a consequence, [37] provides sharp lower and upper bounds of \(\mathbb{E}\left[W_{1}(\mu_{n},\mu)\right]\) for \(\mu\) supported on a bounded finite dimensional set. Subsequently, [76] studies the case when \(\mu\) is the uniform distribution on a \(d\)-dimensional unit cube. For general distributions, [13, 34, 43] establish sharp upper bounds of \(\mathbb{E}\left[W_{p}(\mu_{n},\mu)\right]\) in finite dimensional Euclidean spaces. Recently, by extending finite dimensional spaces to infinite dimensional functional spaces, [59] establishes similar results for general distributions. Besides the above mentioned bounds in expectation, [79] obtains a high probability bound on \(W_{p}(\mu_{n},\mu)\) for measures \(\mu\) with bounded supports. 
By applying Sanov's theorem to independent random variables, [14] establishes concentration inequalities for empirical measures on non-compact spaces. In this subsection, we give a high-probability bound on \(W_{2}(\mu_{n},\mu)\) using the Lipschitz transport properties proved in Theorems 2.5 and 2.6. To begin with, we review the transportation inequality defined in Definition 4.5, the non-asymptotic convergence rate of \(\mathbb{E}\left[W_{p}(\mu_{n},\mu)\right]\) given in Theorem 4.7, and the corresponding concentration inequality for \(p=2\) stated in Theorem 4.8. We then derive the non-asymptotic convergence rate for \(W_{2}(\mu_{n},\mu)\) stated in Theorem 4.9 by combining the transportation inequality of the Gaussian measure on \(\mathbb{R}^{d}\) established in [76] with the transportation inequality of the push-forward of the Gaussian measure under a Lipschitz mapping, as shown in Lemma 4.6.

**Definition 4.5** (Transportation inequality).: _The probability measure \(\mu\) satisfies the \(L^{p}\)-transportation inequality on \(\mathbb{R}^{d}\) if there is some constant \(C>0\) such that for any probability measure \(\nu\), \(W_{p}(\mu,\nu)\leq\sqrt{2CH(\nu\mid\mu)}\). For short, we write \(\mu\in\mathrm{T}_{\mathrm{p}}(C)\) for this relation._

**Lemma 4.6** ([36]).: _Assume that \(\mu\in\mathrm{T}_{\mathrm{p}}(C)\) on \(\mathbb{R}^{d}\). If \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is Lipschitz continuous with constant \(\alpha>0\), then \(\nu=\mu\circ\Phi^{-1}\in\mathrm{T}_{\mathrm{p}}(\alpha^{2}C)\) on \(\mathbb{R}^{d}\)._

**Theorem 4.7** ([43]).: _Let \(p>0\) and assume that \(\int_{\mathbb{R}^{d}}|x|^{r}\,\mu(\mathrm{d}x)\) is finite for some \(r>p\). Then there exists a constant \(C>0\) depending only on \(p,r,d\) such that for all \(n\geq 1\),_

\[\mathbb{E}\left[W_{p}(\mu_{n},\mu)\right]\leq C\left(\int_{\mathbb{R}^{d}}|x|^{r}\,\mu(\mathrm{d}x)\right)^{p/r}\begin{cases}n^{-\frac{1}{2}}+n^{-\frac{r-p}{r}},&\text{if }p>d/2\text{ and }r\neq 2p,\\ n^{-\frac{1}{2}}\log(1+n)+n^{-\frac{r-p}{r}},&\text{if }p=d/2\text{ and }r\neq 2p,\\ n^{-\frac{p}{d}}+n^{-\frac{r-p}{r}},&\text{if }p<d/2\text{ and }r\neq\frac{d}{d-p},\end{cases}\]

_where the expectation is taken over the samples \(X_{1},\cdots,X_{n}\)._

The next result states that a \(\mathrm{T}_{2}(C)\) inequality on \(\mu\) implies a Gaussian concentration inequality for \(W_{2}(\mu_{n},\mu)\).

**Theorem 4.8** ([44]).: _Let a probability measure \(\mu\) on \(\mathbb{R}^{d}\) satisfy the transportation inequality \(\mathrm{T}_{2}(C)\). The following holds:_

\[\mathbb{P}\left(W_{2}(\mu_{n},\mu)\geq\mathbb{E}\left[W_{2}(\mu_{n},\mu)\right]+t\right)\leq\exp\left(-\frac{nt^{2}}{C}\right).\]

For any probability measure \(\nu\) on \(\mathbb{R}^{d}\) with a finite fifth moment, let us define

\[\mathsf{M}(\nu,d,n):=c_{d}\left(\int_{\mathbb{R}^{d}}|x|^{5}\,\nu(\mathrm{d}x)\right)^{2/5}\begin{cases}n^{-1/2}&\text{if }d<4,\\ n^{-1/2}\log(1+n)&\text{if }d=4,\\ n^{-2/d}&\text{if }d>4,\end{cases} \tag{15}\]

where the constant \(c_{d}\) depends only on \(d\). Concerning the \(L^{2}\)-transportation inequality \(\mathrm{T}_{2}(C)\), recall that Talagrand [77] proved that the standard Gaussian measure \(\gamma_{1}=N(0,1)\) satisfies \(\mathrm{T}_{2}(C)\) on \(\mathbb{R}\) w.r.t. the Euclidean distance with the sharp constant \(C=1\), and showed that \(\mathrm{T}_{2}(C)\) is stable under product (independent) tensorization. Therefore, combining Lemma 4.6, Theorems 2.5, 2.6, 4.7 and 4.8, we obtain the following results.
**Theorem 4.9** (Concentration for empirical measures).: _Suppose that Assumptions 1, 2 and 3 hold, and let the probability measure \(\nu\) have a finite fifth moment._

1. _If_ \(\kappa D^{2}\geq 1\)_, then_ \(\nu\in\mathrm{T}_{2}(1/\kappa)\)_. Moreover, for any_ \(\varepsilon\in(0,1)\)_, it holds that_ \[W_{2}(\nu_{n},\nu)\leq\left(\frac{\log\varepsilon^{-1}}{n\kappa}\right)^{1/2}+\mathsf{M}(\nu,d,n)\] _with probability at least_ \(1-\varepsilon\)_, where the constant_ \(\mathsf{M}(\nu,d,n)\) _is given in (15)._
2. _If_ \(\kappa D^{2}<1\)_, then_ \(\nu\in\mathrm{T}_{2}\left(D^{2}\exp(1-\kappa D^{2})\right)\)_. Moreover, for any_ \(\varepsilon\in(0,1)\)_, it holds that_ \[W_{2}(\nu_{n},\nu)\leq\left\{\frac{D^{2}\exp(1-\kappa D^{2})\log\varepsilon^{-1}}{n}\right\}^{1/2}+\mathsf{M}(\nu,d,n)\] _with probability at least_ \(1-\varepsilon\)_, where the constant_ \(\mathsf{M}(\nu,d,n)\) _is given in (15)._
3. _If_ \(\nu=N(0,\sigma^{2}\mathbf{I}_{d})*\rho\) _where_ \(\rho\) _is a probability measure supported on a ball of radius_ \(R\) _on_ \(\mathbb{R}^{d}\)_, then_ \(\nu\in\mathrm{T}_{2}\left(\sigma^{2}\exp(R^{2}/\sigma^{2})\right)\)_. Moreover, for any_ \(\varepsilon\in(0,1)\)_, it holds that_ \[W_{2}(\nu_{n},\nu)\leq\left\{\frac{\sigma^{2}\exp(R^{2}/\sigma^{2})\log\varepsilon^{-1}}{n}\right\}^{1/2}+\mathsf{M}(\nu,d,n)\] _with probability at least_ \(1-\varepsilon\)_, where the constant_ \(\mathsf{M}(\nu,d,n)\) _is given in (15)._

In each case the deviation term has the form \(\{C\log\varepsilon^{-1}/n\}^{1/2}\), where \(C\) is the corresponding transportation constant, as dictated by Theorem 4.8.

## 5. Conclusion

We have constructed the Follmer flow originating from a standard Gaussian measure and hitting a general target measure. By studying the well-posedness of the Follmer flow, we have established the Lipschitz property of its flow map at time \(t=1\). Such a Lipschitz transport map enables us to obtain functional inequalities with dimension-free constants and to derive concentration inequalities for the empirical measures of rich classes of probability measures. It is worth noticing that the Follmer velocity field has an analytic expression that is compatible with Monte Carlo approximations. Therefore, a possible direction of future research would be to design general-purpose sampling algorithms and score-based generative models using the Follmer flow. Moreover, since the present work is limited to the scenarios covered by Assumptions 2 and 3, it could be extended to explore weaker and even minimal regularity assumptions on the target measure. For example, replacing semi-log-concavity with "convexity at infinity" in [15, 18] is a potential step.

## Appendix A Proof of Theorem 2.4 and Proposition 3.5

### Well-definedness of the Follmer flow

Recall that the velocity field \(V(t,x)\) defined in (2) is given by

\[V(t,x):=\frac{\nabla\log\mathcal{Q}_{1-t}r(x)}{t},\quad r(x):=\frac{p(x)}{\phi(x)},\]

where \(\nu(\mathrm{d}x)=p(x)\mathrm{d}x\). For any \(t\in(0,1]\), one obtains

\[\mathcal{Q}_{1-t}r(x)=\int_{\mathbb{R}^{d}}\varphi^{tx,1-t^{2}}(y)r(y)\mathrm{d}y=\int_{\mathbb{R}^{d}}\phi(z)r(tx+\sqrt{1-t^{2}}z)\mathrm{d}z,\]

where \(\varphi^{tx,1-t^{2}}(y)\) is the density of the \(d\)-dimensional Gaussian measure with mean \(tx\) and covariance \((1-t^{2})\mathbf{I}_{d}\). For the convenience of subsequent calculations, we introduce the following notation:

\[S(t,x):=\nabla\log q_{t}(x),\quad q_{t}(x):=\int_{\mathbb{R}^{d}}q(t,x|1,y)p(y)\mathrm{d}y,\]

where \(q(t,x|1,y):=(2\pi(1-t^{2}))^{-\frac{d}{2}}\exp\left(-\frac{|x-ty|^{2}}{2(1-t^{2})}\right)\) for any \(t\in[0,1]\).
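The quantities \(q_{t}\) and \(S(t,x)\) make the velocity field directly amenable to Monte Carlo approximation (as also noted in the Conclusion): using the identity \(\nabla\log\mathcal{Q}_{1-t}r(x)=x+S(t,x)\) derived in the next paragraph, \(V(t,x)=(x+S(t,x))/t\) can be estimated from draws of the target. The sketch below is a minimal illustration under the assumption that i.i.d. samples from \(p\) are available; the Gaussian-mixture target, the sample sizes, and all function names are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def follmer_velocity(x, t, target_samples):
    """Monte Carlo sketch of V(t, x) = (x + S(t, x)) / t for t in (0, 1),
    with S(t, x) = grad log q_t(x) and q_t(x) = E_{Y ~ p}[ q(t, x | 1, Y) ],
    where q(t, x | 1, y) is the Gaussian kernel N(t*y, (1 - t^2) I_d).
    `target_samples` is an (n, d) array of i.i.d. draws from the target p."""
    var = 1.0 - t ** 2
    diffs = t * target_samples - x                       # (n, d): t*Y_i - x
    log_w = -np.sum((x - t * target_samples) ** 2, axis=1) / (2.0 * var)
    w = np.exp(log_w - log_w.max())                      # unnormalised kernel weights
    score = (w[:, None] * diffs).sum(axis=0) / (var * w.sum())   # S(t, x)
    return (x + score) / t

# Toy usage: target p is a two-component Gaussian mixture in d = 2.
rng = np.random.default_rng(0)
n = 20_000
means = np.array([[2.0, 0.0], [-2.0, 0.0]])
samples = means[rng.integers(0, 2, size=n)] + 0.5 * rng.standard_normal((n, 2))
print(follmer_velocity(np.array([0.5, -0.3]), t=0.5, target_samples=samples))
```

For \(t\) close to \(1\) the Gaussian kernel concentrates and many more samples are needed for a stable estimate; the sketch is only meant to illustrate the analytic form of the field, not to be an efficient estimator.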
Notice that \[\mathcal{Q}_{1-t}r(x)=(2\pi)^{d/2}\exp\left(\frac{|x|^{2}}{2}\right)\times \frac{1}{(2\pi(1-t^{2}))^{d/2}}\int_{\mathbb{R}^{d}}p(y)\exp\left(-\frac{|x-ty| ^{2}}{2(1-t^{2})}\right)\mathrm{d}y.\] Then we have \(\nabla\log\mathcal{Q}_{1-t}r(x)=x+S(t,x)\), for any \(t\in[0,1]\). Suppose that the target distribution \(p\) satisfies the third moment condition, we can supplement the definition of velocity field \(V\) at time \(t=0\), so that \(V\) is well-defined on the interval \([0,1]\). Then we have the following result: **Lemma A.1**.: _Suppose that \(\mathbb{E}_{p}[|X|^{3}]<\infty\), then_ \[\lim_{t\downarrow 0}V(t,x)=\lim_{t\downarrow 0}\frac{x+S(t,x)}{t}=\mathbb{E}_{p }[X].\] Proof.: Let \(t\to 0\), then it yields \[\lim_{t\downarrow 0}V(t,x)=\lim_{t\downarrow 0}\partial_{t}S(t,x)=\lim_{t \downarrow 0}\left\{\frac{\nabla[\partial_{t}q_{t}(x)]}{q_{t}(x)}-\frac{ \partial_{t}q_{t}(x)}{q_{t}(x)}S(t,x)\right\}.\] On the one hand, by simple calculation, it holds that \[\partial_{t}q_{t}(x) =\partial_{t}\int_{\mathbb{R}^{d}}q(t,x|1,y)p(y)\mathrm{d}y= \partial_{t}\int_{\mathbb{R}^{d}}\left[2\pi(1-t^{2})\right]^{-\frac{d}{2}}\exp \left(-\frac{|x-ty|^{2}}{2(1-t^{2})}\right)p(y)\mathrm{d}y\] \[=\frac{td}{1-t^{2}}q_{t}(x)-\frac{t}{(1-t^{2})^{2}}|x|^{2}q_{t}(x )+\frac{1+t^{2}}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}x^{\top}yq(t,x|1,y)p(y) \mathrm{d}y\] \[\quad-\frac{t}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}|y|^{2}q(t,x|1, y)p(y)\mathrm{d}y.\] Furthermore, we also obtain \[\frac{\partial_{t}q_{t}(x)}{q_{t}(x)}=\frac{td}{1-t^{2}}-\frac{t}{(1-t^{2})^{2 }}|x|^{2}+\frac{1+t^{2}}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}x^{\top}yq(1,y|t, x)\mathrm{d}y-\frac{t}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}|y|^{2}q(1,y|t,x) \mathrm{d}y.\] On the other hand, by straightforward calculation, it yields \[\nabla[\partial_{t}q_{t}(x)]=-\frac{td}{(1-t^{2})^{2}}xq_{t}(x)+ \frac{t^{2}d}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}yq(t,x|1,y)p(y)\mathrm{d}y\] \[\quad-\frac{2t}{(1-t^{2})^{2}}xq_{t}(x)+\frac{t}{(1-t^{2})^{3}}|x| ^{2}xq_{t}(x)-\frac{t^{2}|x|^{2}}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}yq(t,x| 1,y)p(y)\mathrm{d}y\] \[\quad+\frac{1+t^{2}}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}yq(t,x|1, y)p(y)\mathrm{d}y\] \[\quad-\frac{1+t^{2}}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}(x^{\top} y)xq(t,x|1,y)p(y)\mathrm{d}y+\frac{t(1+t^{2})}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}(x^{ \top}y)yq(t,x|1,y)p(y)\mathrm{d}y\] \[\quad+\frac{t}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}x|y|^{2}q(t,x|1, y)p(y)\mathrm{d}y-\frac{t^{2}}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}y|y|^{2}q(t,x|1,y)p(y) \mathrm{d}y.\] Moreover, we also obtain \[\frac{\nabla[\partial_{t}q_{t}(x)]}{q_{t}(x)} =-\frac{td}{(1-t^{2})^{2}}x+\frac{t^{2}d}{(1-t^{2})^{2}}\int_{ \mathbb{R}^{d}}yq(1,y|t,x)\mathrm{d}y-\frac{2tx}{(1-t^{2})^{2}}+\frac{t|x|^{2}x }{(1-t^{2})^{3}}\] \[\quad-\frac{t^{2}}{(1-t^{2})^{3}}|x|^{2}\int_{\mathbb{R}^{d}}yq(1,y|t,x)\mathrm{d}y+\frac{1+t^{2}}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}yq(1,y|t,x )\mathrm{d}y\] \[\quad-\frac{1+t^{2}}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}(x^{\top} y)xq(1,y|t,x)\mathrm{d}y+\frac{t(1+t^{2})}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}(x^{ \top}y)yq(1,y|t,x)\mathrm{d}y\] \[\quad+\frac{t}{(1-t^{2})^{3}}x\int_{\mathbb{R}^{d}}|y|^{2}q(1,y|t,x)\mathrm{d}y-\frac{t^{2}}{(1-t^{2})^{3}}\int_{\mathbb{R}^{d}}y|y|^{2}q(1,y| t,x)\mathrm{d}y.\] Since \(\mathbb{E}_{p}[|X|^{3}]<\infty\), it yields \[\lim_{t\downarrow 0}\int_{\mathbb{R}^{d}}|y|^{3}q(1,y|t,x)\mathrm{d}y=\int_{ \mathbb{R}^{d}}|y|^{3}\lim_{t\downarrow 0}q(1,y|t,x)\mathrm{d}y=\mathbb{E}_{p} \left[|X|^{3}\right]<+\infty.\] 
Furthermore, we have \[\lim_{t\downarrow 0}\frac{\partial_{t}q_{t}(x)}{q_{t}(x)}S(t,x)=-xx^{\top} \mathbb{E}_{p}[X],\quad\lim_{t\downarrow 0}\frac{\nabla[\partial_{t}q_{t}(x)]}{q_{t}( x)}=\mathbb{E}_{p}[X]-xx^{\top}\mathbb{E}_{p}[X].\] Therefore, it yields \(\lim_{t\downarrow 0}V(t,x)=\mathbb{E}_{p}[X]\), which completes the proof. ### Cramer-Rao inequality In order to obtain a lower bound of the Jacobian matrix of velocity field \(V\) defined in (2), we apply the classical Cramer-Rao bound [71, 31] in statistical parameter estimation to a special case for location parameter estimation. This particular application is far from being new in information theory and convex geometry (for example, see [33, 64, 26, 73]). We include it here for the sake of completeness. A lower bound on the covariance matrix of a prescribed probability measure directly follows the Cramer-Rao bound [30, Theorem 11.10.1]. **Lemma A.2** (Cramer-Rao bound).: _Let \(\mu_{\theta}(\mathrm{d}x)=f_{\theta}(x)\mathrm{d}x\) be a probability measure on \(\mathbb{R}^{d}\) such that the density \(f_{\theta}(x)\) is of class \(C^{2}\) with respect to an unknown parameter \(\theta\in\Theta\). Assume that a few mild regularity assumptions hold. Then provided that \(\{X_{i}\}_{i=1}^{n}\) are i.i.d. samples from \(\mu_{\theta}\) with size \(n\), the mean-squared error of any unbiased estimator \(g(X_{1},X_{2},\cdots,X_{n})\) for the parameter \(\theta\) is lower bounded by the inverse of the Fisher information matrix:_ \[\mathbb{E}_{\mu_{\theta}}\left[(g(X_{1},X_{2},\cdots,X_{n})-\theta)^{\otimes 2 }\right]\succeq\left(-n\mathbb{E}_{\mu_{\theta}}\left[\frac{\partial^{2}}{ \partial\theta^{2}}\log f_{\theta}(X_{1})\right]\right)^{-1}.\] We consider the example of location parameter estimation. Suppose \(\theta\) is the location parameter and let \(f_{\theta}(x)=f(x-\theta)\) and \(g(x)=x\). Specifically, it yields a lower bound on the covariance matrix of the probability measure \(\mu_{\theta}\) in the case that \(\theta=\mathbb{E}_{\mu_{\theta}}[X]\), i.e., a random sample \(X\sim\mu_{\theta}\) is an unbiased estimator of the mean \(\theta\). Apart from this implication, an alternative proof of the same lower bound on the covariance matrix is presented in [23]. It is worth noting that a compactly supported probability measure \(\mu\) would suffice to ensure the Cramer-Rao inequality holds. **Lemma A.3**.: _Let \(\mu(\mathrm{d}x)=\exp(-U(x))\mathrm{d}x\) be a probability measure on \(\mathbb{R}^{d}\) such that \(U\) of class \(C^{2}\) on the interior of its domain. Suppose \(X\) is a random sample from \(\mu\). Then the covariance matrix is lower bounded as \(\mathrm{Cov}_{\mu}(X)\succeq\left(\mathbb{E}_{\mu}\left[\nabla^{2}U(X)\right] \right)^{-1}\)._ ### Proof of Propositions 3.5 Proof.: By Ito SDE defined in (7), we have the distribution \(\overline{X}_{t}\), which is given by \[\overline{X}_{t}|\overline{X}_{0}=x_{0}\sim N((1-t)x_{0},t(2-t)\mathbf{I}_{d}). \tag{16}\] Due to the Cauchy-Lipschitz theory [4, Section 2], the push-forward map \(X_{t}^{*}\) and process \(\overline{X}_{t}\) have the same distribution for any \(t\in[0,1-\varepsilon]\). Then by (16), we obtain \[X_{t}^{*}\overset{d}{=}\overline{X}_{t}\overset{d}{=}(1-t)X+\sqrt{t(2-t)}Y\] with \(X\sim\nu,Y\sim\gamma_{d}\). Recall that \(X_{1-\varepsilon}^{*}\) and \(\varepsilon X+\sqrt{1-\varepsilon^{2}}Y\) have the same distribution. 
Therefore, by the definition of \(W_{2}\) and Cauchy-Schwarz's inequality, it yields \[W_{2}^{2}(\nu\circ(X_{1-\varepsilon}^{*})^{-1},\gamma_{d}) \leq\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|\varepsilon x+( \sqrt{1-\varepsilon^{2}}-1)y|^{2}p(x)\phi(y)\mathrm{d}x\mathrm{d}y\] \[\leq 2\varepsilon^{2}\int_{\mathbb{R}^{d}}|x|^{2}p(x)\mathrm{d}x +2\left(\sqrt{1-\varepsilon^{2}}-1\right)^{2}\int_{\mathbb{R}^{d}}|y|^{2}\phi( y)\mathrm{d}y\] \[=2\varepsilon^{2}\operatorname{\mathbb{E}}_{p}[|X|^{2}]+2d\left( \sqrt{1-\varepsilon^{2}}-1\right)^{2}.\] Let \(\varepsilon\to 0\), and it yields \(\lim_{\varepsilon\to 0}W_{2}(\nu\circ(X_{1-\varepsilon}^{*})^{-1},\gamma_{d})=0\), which completes the proof. ## Appendix B Proof of Theorem 2.5 and Theorem 2.6 ### Bound on the Lipschitz constant of the flow map We take a close look at Lipschitz properties of the Follmer flow (1). In order to derive functional inequalities, we deploy the approach of Lipschitz changes of variables from the Gaussian measure \(\gamma_{d}\) to the target measure \(\nu\). One key argument is to bound the maximum eigenvalue of the Jacobian matrix of velocity field denoted as \(\lambda_{\max}(\nabla V(t,x))\). By integrating both sides of (1) w.r.t. time \(s\in[0,t]\), we have \[X_{t}(x)-X_{0}(x)=\int_{0}^{t}V(s,X_{s}(x))\mathrm{d}s,\quad X_{0}(x)=x. \tag{17}\] Taking the first-order derivative w.r.t \(x\) on both sides of (17), we get \[\nabla X_{t}(x)-\nabla X_{0}(x)=\int_{0}^{t}\nabla V(s,X_{s}(x))\nabla X_{s}( x)\mathrm{d}s. \tag{18}\] Taking the first-order derivative w.r.t \(t\) on both sides of (18), we get \[\frac{\partial}{\partial t}\nabla X_{t}(x)=\nabla V(t,X_{t}(x))\nabla X_{t}(x). \tag{19}\] Let \(a_{t}=|\nabla X_{t}(x)r|^{2}\) with \(|r|=1\). Assume \(\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}\). By (19) we get \[\frac{\partial}{\partial t}a_{t}=2\left\langle(\nabla X_{t})r,\frac{\partial }{\partial t}(\nabla X_{t})r\right\rangle=2\left\langle(\nabla X_{t})r,\left( \nabla V(t,X_{t})\nabla X_{t}\right)r\right\rangle\leq 2\theta_{t}a_{t}.\] The above display and Gronwall's inequality imply \[\|\nabla X_{t}(x)\|_{\mathrm{op}}=\sup_{\|r\|_{2}=1}\sqrt{a_{t}}\leq\sup_{\|r \|_{2}=1}\sqrt{a_{0}}\exp\left(\int_{0}^{t}\theta_{s}\mathrm{d}s\right)=\exp \left(\int_{0}^{t}\theta_{s}\mathrm{d}s\right).\] Let \(t=1\), then we get \[\mathrm{Lip}(X_{1}(x))\leq\|\nabla X_{1}(x)\|_{\mathrm{op}}\leq\exp\left(\int_ {0}^{1}\theta_{s}\mathrm{d}s\right). \tag{20}\] ### Lipschitz properties of transport maps In this subsection, we show that the considered flow map is Lipschitz in various settings. The following is the main result of this subsection and it covers the Lipschitz statements of Theorem 2.5 and Theorem 2.6. **Theorem B.1**.: 1. _Suppose that either_ \(p\) _is_ \(\kappa\)_-semi-log-concave for some_ \(\kappa>0\)_, or_ \(p\) _is_ \(\kappa\)_-semi-log-concave for some_ \(\kappa\in\mathbb{R}\) _and_ \(D<+\infty\)_. Then the Follmer flow (_1_) has a unique solution for all_ \(t\in[0,1]\)_. Furthermore,_ 1. _If_ \(\kappa D^{2}\geq 1\)_, then_ \(X_{1}(x)\) _is a Lipschitz mapping with constant_ \(\frac{1}{\sqrt{\kappa}}\)_, or equivalently,_ \[\|\nabla X_{1}(x)\|_{\mathrm{op}}^{2}\leq\frac{1}{\kappa},\quad\forall x\in \mathbb{R}^{d}.\] 2. _If_ \(\kappa D^{2}<1\)_, then_ \(X_{1}(x)\) _is a Lipschitz mapping with constant_ \(\exp\left(\frac{1-\kappa D^{2}}{2}\right)D\)_, or equivalently,_ \[\|\nabla X_{1}(x)\|_{\mathrm{op}}^{2}\leq\exp\left(1-\kappa D^{2}\right)D^{2},\quad\forall x\in\mathbb{R}^{d}.\] _._ 2. 
_Fix a probability measure_ \(\rho\) _on_ \(\mathbb{R}^{d}\) _supported on a ball of radius_ \(R\) _and let_ \(p:=N(0,\sigma^{2}\mathbf{I}_{d})*\rho\)_. Then the Follmer flow (_1_) has a unique solution for all_ \(t\in[0,1]\)_. Furthermore,_ \(X_{1}(x)\) _is a Lipschitz mapping with constant_ \(\sigma\exp\left(\frac{R^{2}}{2\sigma^{2}}\right)\)_, or equivalently,_ \[\|\nabla X_{1}(x)\|_{\mathrm{op}}^{2}\leq\sigma^{2}\exp\left(\frac{R^{2}}{ \sigma^{2}}\right),\quad\forall x\in\mathbb{R}^{d}.\] Proof.: Combine Theorem B.3-(4) and Corollaries B.4-B.5 and then complete the proof. In fact, the existence of a solution to the IVP (1) also relies on controlling \(\nabla V\). To this end, we represent \(\nabla V\) as a covariance matrix. We start by defining a measure \(p^{tx,1-t^{2}}\) on \(\mathbb{R}^{d}\), for fixed \(t\in[0,1)\) and \(x\in\mathbb{R}^{d}\), by \[p^{tx,1-t^{2}}(y):= \frac{\varphi^{tx,1-t^{2}}(y)r(y)}{\mathcal{Q}_{1-t}r(x)},\quad r :=\frac{\mathrm{d}\nu}{\mathrm{d}\gamma_{d}}=\frac{p}{\phi}, \tag{21}\] where \(\varphi^{tx,1-t^{2}}(y)\) is the density of the \(d\)-dimensional Gaussian measure with mean \(tx\) and covariance \((1-t^{2})\mathbf{I}_{d}\) and \[\mathcal{Q}_{1-t}r(x)=\int_{\mathbb{R}^{d}}\varphi^{tx,1-t^{2}}(y)r(y)\mathrm{ d}y=\int_{\mathbb{R}^{d}}\phi(z)r(tx+\sqrt{1-t^{2}}z)\mathrm{d}z. \tag{22}\] Notice that \[\mathcal{Q}_{1-t}r(x)=(2\pi)^{d/2}\exp\left(\frac{|x|^{2}}{2}\right)\frac{1}{( 2\pi(1-t^{2}))^{d/2}}\int_{\mathbb{R}^{d}}p(y)\exp\left(-\frac{|x-ty|^{2}}{2(1 -t^{2})}\right)\mathrm{d}y.\] Hence, we obtain \[V(t,x)=\frac{x+S(t,x)}{t}=\frac{\nabla\log\mathcal{Q}_{1-t}r(x)}{t},\quad 0 <t\leq 1.\] **Lemma B.2**.: _Suppose velocity field \(V\) is defined in (2), then_ \[\nabla V(t,x)=\frac{t}{(1-t^{2})^{2}}\mathrm{Cov}\left(p^{tx,1-t^{2}}\right)- \frac{t}{1-t^{2}}\mathbf{I}_{d},\quad\forall t\in(0,1),\quad\nabla V(0,x)=0. \tag{23}\] Proof.: By taking the first-order and the second-order derivatives on both sides in (22), we get \[\nabla\mathcal{Q}_{1-t}r(x) =\frac{t}{1-t^{2}}\int_{\mathbb{R}^{d}}(y-tx)\varphi^{tx,1-t^{2}} (y)r(y)\mathrm{d}y,\] \[\nabla^{2}\mathcal{Q}_{1-t}r(x) =\frac{t^{2}}{(1-t^{2})^{2}}\int_{\mathbb{R}^{d}}(y-tx)^{\otimes 2 }\varphi^{tx,1-t^{2}}(y)r(y)\mathrm{d}y-\left(\frac{t^{2}}{1-t^{2}}\int_{ \mathbb{R}^{d}}\varphi^{tx,1-t^{2}}(y)r(y)\mathrm{d}y\right)\mathbf{I}_{d}.\] Then we obtain \[\nabla^{2}\log\mathcal{Q}_{1-t}r(x)\] \[= \frac{\nabla^{2}\mathcal{Q}_{1-t}r(x)}{\mathcal{Q}_{1-t}r(x)}- \left(\frac{\nabla\mathcal{Q}_{1-t}r(x)}{\mathcal{Q}_{1-t}r(x)}\right)^{ \otimes 2}\] \[= \frac{t^{2}}{(1-t^{2})^{2}}\left[\int_{\mathbb{R}^{d}}(y-tx)^{ \otimes 2}p^{tx,1-t^{2}}(y)\mathrm{d}y-\left(\int_{\mathbb{R}^{d}}(y-tx)p^{tx,1-t^{2}}(y)\mathrm{d}y\right)^{\otimes 2}\right]-\frac{t^{2}}{1-t^{2}}\mathbf{I}_{d}\] \[= \frac{t^{2}}{(1-t^{2})^{2}}\left[\int_{\mathbb{R}^{d}}y^{\otimes 2 }p^{tx,1-t^{2}}(y)\mathrm{d}y-\left(\int_{\mathbb{R}^{d}}yp^{tx,1-t^{2}}(y) \mathrm{d}y\right)^{\otimes 2}\right]-\frac{t^{2}}{1-t^{2}}\mathbf{I}_{d}\] \[= \frac{t^{2}}{(1-t^{2})^{2}}\mathrm{Cov}(p^{tx,1-t^{2}})-\frac{t^{ 2}}{1-t^{2}}\mathbf{I}_{d}.\] Therefore, we get \[\nabla V(t,x)=\frac{t}{(1-t^{2})^{2}}\mathrm{Cov}(p^{tx,1-t^{2}})-\frac{t}{1-t ^{2}}\mathbf{I}_{d}. \tag{24}\] This completes the proof. Next, we use the representation of (23) to estimate the upper bound of \(\nabla V(t,x)\). **Theorem B.3**.: _Let \(p\) be a probability measure on \(\mathbb{R}^{d}\) with \(D:=(1/\sqrt{2})\mathrm{diam}(\mathrm{supp}(p))\)._ 1. 
_For every_ \(t\in[0,1)\)_,_ \[\frac{t}{1-t^{2}}\mathbf{I}_{d}\preceq\nabla V(t,x)\preceq\left(\frac{tD^{2}}{(1-t ^{2})^{2}}-\frac{t}{1-t^{2}}\right)\mathbf{I}_{d}.\] (25) 2. _Suppose that_ \(p\) _is_ \(\beta\)_-semi-log-convex with_ \(\beta\in(0,+\infty)\)_. Then for any_ \(t\in[0,1]\)_,_ \[\nabla V(t,x)\succeq\frac{t(1-\beta)}{\beta(1-t^{2})+t^{2}}\mathbf{I}_{d}.\] (26) _In particular, when_ \(p\sim N\left(0,\frac{1}{\beta}\mathbf{I}_{d}\right)\)_, then_ \[\nabla V(t,x)=\frac{t(1-\beta)}{\beta(1-t^{2})+t^{2}}\mathbf{I}_{d}.\] 3. _Let_ \(\kappa\in\mathbb{R}\) _and suppose that_ \(p\) _is_ \(\kappa\)_-semi-log-concave. Then for any_ \(t\in\left[\sqrt{\frac{\kappa}{\kappa-1}\mathds{1}_{\kappa<0}},1\right]\)_,_ \[\nabla V(t,x)\preceq\frac{t(1-\kappa)}{\kappa(1-t^{2})+t^{2}}\mathbf{I}_{d}.\] (27) 4. _Fix a probability measure_ \(\rho\) _on_ \(\mathbb{R}^{d}\) _supported on a ball of radius_ \(R\) _and let_ \(p:=N(0,\sigma^{2}\mathbf{I}_{d})*\rho\) _with_ \(\sigma>0\)_. Then for any_ \(t\in[0,1]\)_,_ \[\frac{(\sigma^{2}-1)t}{1+(\sigma^{2}-1)t^{2}}\mathbf{I}_{d}\preceq\nabla V(t, x)\preceq t\left\{\frac{(\sigma^{2}-1)[1+(\sigma^{2}-1)t^{2}]+R^{2}}{[1+( \sigma^{2}-1)t^{2}]^{2}}\right\}\mathbf{I}_{d}.\] (28) Proof.: The proof idea of this theorem follows similar arguments as in [65, Lemma 3.3]. 1. By [32, Theorem 2.6], there exists a closed ball with radius less than \(D:=(1/\sqrt{2})\mathrm{diam}(\mathrm{supp}(p))\) that contains \(\mathrm{supp}(p)\) in \(\mathbb{R}^{d}\). Then the desired bounds are a direct result of \(0\mathbf{I}_{d}\preceq\mathrm{Cov}(p^{tx,1-t^{2}})\preceq D^{2}\mathbf{I}_{d}\) and (23). 2. For any \(t\in(0,1)\), recall that (23) reads \[\nabla V(t,x)=\frac{t}{(1-t^{2})^{2}}\mathrm{Cov}\left(p^{tx,1-t^{2}}\right)- \frac{t}{1-t^{2}}\mathbf{I}_{d}.\] (29) On the one hand, let \(p\) be \(\beta\)-semi-log-convex for some \(\beta>0\). Then for any \(t\in[0,1),p^{tx,1-t^{2}}\) is \(\left(\beta+\frac{t^{2}}{1-t^{2}}\right)\)-semi-log-convex because \[-\nabla^{2}\log\left(p^{tx,1-t^{2}}(y)\right)=-\nabla^{2}\log\left(r(y)\phi(y) \right)-\nabla^{2}\log\left(\frac{\varphi^{tx,1-t^{2}}(y)}{\phi(y)}\right) \preceq\left(\beta+\frac{t^{2}}{1-t^{2}}\right)\mathbf{I}_{d}\] where we use that \(p(y)=r(y)\phi(y)\). On the other hand, by Lemma A.3, we obtain \[\mathrm{Cov}\left(p^{tx,1-t^{2}}\right)\succeq\left(\beta+\frac{t^{2}}{1-t^{ 2}}\right)^{-1}\mathbf{I}_{d}.\] Furthermore, by (29), we obtain \[\nabla V(t,x)\succeq\left\{\frac{t}{(1-t^{2})^{2}}\left(\beta+\frac{t^{2}}{1- t^{2}}\right)^{-1}-\frac{t}{1-t^{2}}\right\}\mathbf{I}_{d}=\frac{t(1-\beta)}{ \beta(1-t^{2})+t^{2}}\mathbf{I}_{d}.\] Recall that \((X_{t})_{t\in[0,1]}\) satisfies the IVP (1), then we have \[\nabla V(t,x)=\frac{\nabla^{2}\log\frac{\Omega_{1-t}r(x)}{t}}{r(x)},\quad r(x ):=\frac{p(x)}{\phi(x)}.\] Since \(p\sim N\left(0,\frac{1}{\beta}\mathbf{I}_{d}\right)\), then it yields \[r(x)=\beta^{d/2}\exp\left(-\frac{\beta-1}{2}|x|^{2}\right)\propto\exp\left(- \frac{\beta-1}{2}|x|^{2}\right),\] where the symbol \(\propto\) signifies equality up to a constant which does not depend on \(x\). 
Then by straightforward calculation for \(\mathcal{Q}_{1-t}r(x)\), we obtain \[\mathcal{Q}_{1-t}r(x) \propto\int_{\mathbb{R}^{d}}\exp\left\{-\frac{\beta-1}{2}\left|tx+ \sqrt{1-t^{2}}y\right|^{2}-\frac{|y|^{2}}{2}\right\}\mathrm{d}y\] \[=\int_{\mathbb{R}^{d}}\exp\left\{-\frac{(\beta-1)t^{2}}{2}|x|^{2} -(\beta-1)t\sqrt{1-t^{2}}\left\langle x,y\right\rangle-\frac{\beta(1-t^{2})+t ^{2}}{2}|y|^{2}\right\}\mathrm{d}y\] \[=\exp\left(-\frac{(\beta-1)t^{2}|x|^{2}}{2\beta_{t}}\right)\int_ {\mathbb{R}^{d}}\exp\left\{-\frac{\beta_{t}}{2}\left|y+\frac{(\beta-1)t\sqrt{1 -t^{2}}}{\beta_{t}}x\right|^{2}\right\}\mathrm{d}y,\] where we denote \(\beta_{t}:=(1-t^{2})\beta+t^{2}\). Considering that the integrand in the last line is proportional to the density of a Gaussian measure, then the value of the integral does not depend on \(x\), and \[\mathcal{Q}_{1-t}r(x)\propto\exp\left(-\frac{(\beta-1)t^{2}|x|^{2}}{2\beta_{ t}}\right)=\exp\left(-\frac{|x|^{2}}{2}\cdot\frac{(\beta-1)t^{2}}{(1-t^{2}) \beta+t^{2}}\right).\] So we have \[\nabla V(t,x)=\frac{\nabla^{2}\mathcal{Q}_{1-t}r(x)}{t}=\frac{(1-\beta)t}{(1 -t^{2})\beta+t^{2}}\mathbf{I}_{d}.\] 3. Let \(p\) be \(\kappa\)-semi-log-concave. Then for any \(t\in[0,1)\), \(p^{tx,1-t^{2}}\) is \(\left(\kappa+\frac{t^{2}}{1-t^{2}}\right)\)-semi-log-concave because \[-\nabla^{2}\log\left(p^{tx,1-t^{2}}(y)\right)=-\nabla^{2}\log\left(r(y)\phi(y) \right)-\nabla^{2}\log\left(\frac{\varphi^{tx,1-t^{2}}(y)}{\phi(y)}\right) \succeq\left(\kappa+\frac{t^{2}}{1-t^{2}}\right)\mathbf{I}_{d}\] where we use \(p(y)=r(y)\phi(y)\). If \(t\in\left[\sqrt{\frac{\kappa}{\kappa-1}\mathds{1}_{\kappa<0}},1\right]\), then \(\kappa+\frac{t^{2}}{1-t^{2}}\geq 0\). By the well-known Brascamp-Lieb inequality [8, 16], applied to functions of the form \(\mathbb{R}^{d}\ni x\mapsto f(x)=\left\langle x,v\right\rangle\) for any \(v\in\mathbb{S}^{d-1}\), we obtain \[\mathrm{Cov}\left(p^{tx,1-t^{2}}\right)\preceq\left(\kappa+\frac{t^{2}}{1-t^{ 2}}\right)^{-1}\mathbf{I}_{d}\] and the result follows by (23). 4. On the one hand, we have \[p^{tx,1-t^{2}}(y)=\frac{(N(0,\sigma^{2}\mathbf{I}_{d})*\rho)(y)}{\varphi^{0,1 }(y)}\cdot\frac{\varphi^{tx,1-t^{2}}(y)}{\mathcal{Q}_{1-t}\left(\frac{N(0, \sigma^{2}\mathbf{I}_{d})*\rho)}{\varphi^{0,1}}\right)(x)}=A_{x,t}\int_{ \mathbb{R}^{d}}\varphi^{z,\sigma^{2}}(y)\varphi^{\frac{x}{t},\frac{1-t^{2}}{t^ {2}}}(y)\rho(\mathrm{d}z),\] where the constant \(A_{x,t}\) depends only on \(x\) and \(t\). Moreover, we obtain \[p^{tx,1-t^{2}}(y)=\int_{\mathbb{R}^{d}}\varphi^{\frac{(1-t^{2})z+\sigma^{2}t _{2}}{1+(\sigma^{2}-1)t^{2}},\frac{\sigma^{2}(1-t^{2})}{1+(\sigma^{2}-1)t^{2} }}(y)\tilde{\rho}(\mathrm{d}z)\] where \(\tilde{\rho}\) is a probability measure on \(\mathbb{R}^{d}\) which is a multiple of \(\rho\) by a positive function. In particular, \(\tilde{\rho}\) is supported on the same ball as \(\rho\). On the other hand, let \(Z\sim\gamma_{d}\) and \(Y\sim\tilde{\rho}\) be independent. 
Then \[\sqrt{\frac{\sigma^{2}(1-t^{2})}{1+(\sigma^{2}-1)t^{2}}}Z+\frac{(1-t^{2})}{1+( \sigma^{2}-1)t^{2}}Y+\frac{t\sigma^{2}}{1+(\sigma^{2}-1)t^{2}}x\sim p^{tx,1-t ^{2}}.\] Due to \(0\mathbf{I}_{d}\preceq\mathrm{Cov}(Y)\preceq R^{2}\mathbf{I}_{d}\), it holds that \[\frac{\sigma^{2}(1-t^{2})}{1+(\sigma^{2}-1)t^{2}}\mathbf{I}_{d}\preceq \mathrm{Cov}(p^{tx,1-t^{2}})\preceq\frac{\sigma^{2}(1-t^{2})[1+(\sigma^{2}-1)t ^{2}]+(1-t^{2})^{2}R^{2}}{[1+(\sigma^{2}-1)t^{2}]^{2}}\mathbf{I}_{d}.\] By applying (23) again, it yields \[\frac{(\sigma^{2}-1)t}{1+(\sigma^{2}-1)t^{2}}\mathbf{I}_{d}\preceq\nabla V(t,x )\preceq t\left\{\frac{(\sigma^{2}-1)[1+(\sigma^{2}-1)t^{2}]+R^{2}}{[1+( \sigma^{2}-1)t^{2}]^{2}}\right\}\mathbf{I}_{d}.\] This completes the proof of Theorem B.3. Next, we present an upper bound on \(\lambda_{\max}(\nabla V(t,x))\) and its exponential estimation. **Corollary B.4**.: _Let \(p\) be a probability measure on \(\mathbb{R}^{d}\) with \(D:=(1/\sqrt{2})\mathrm{diam}(\mathrm{supp}(p))\) and suppose that \(p\) is \(\kappa\)-semi-log-concave with \(\kappa\in[0,+\infty)\)._ 1. _If_ \(\kappa D^{2}\geq 1\)_, then_ \[\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}:=\frac{t(1-\kappa)}{t^{2}(1- \kappa)+\kappa}.\] (30) _and_ \[\exp\left(\int_{0}^{1}\theta_{s}\mathrm{d}s\right)=\frac{1}{\sqrt{\kappa}}.\] (31) 2. _If_ \(\kappa D^{2}<1\)_, then_ \[\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}:=\begin{cases}\frac{t(t^{2}+D^{2} -1)}{(1-t^{2})^{2}},&t\in[0,t_{0}],\\ \frac{t(1-\kappa)}{t^{2}(1-\kappa)+\kappa},&t\in[t_{0},1],\end{cases}\] (32) _where_ \(t_{0}=\sqrt{\frac{1-\kappa D^{2}}{(1-\kappa)D^{2}+1}}\) _and_ \[\exp\left(\int_{0}^{1}\theta_{s}\mathrm{d}s\right)=\exp\left(\frac{1-\kappa D^ {2}}{2}\right)D.\] (33) Proof.: By Theorem B.3, we obtain \[\lambda_{\max}(\nabla V(t,x))\leq\frac{tD^{2}}{(1-t^{2})^{2}}-\frac{t}{1-t^{2 }},\quad\lambda_{\max}(\nabla V(t,x))\leq\frac{t(1-\kappa)}{\kappa(1-t^{2})+t ^{2}},\quad\forall t\in[0,1].\] By simple algebra calculation, it yields \[\frac{t(D^{2}+t^{2}-1)}{(1-t^{2})^{2}}\leq\frac{t(1-\kappa)}{\kappa(1-t^{2})+ t^{2}}\quad\text{if and only if}\quad(1+D^{2}-\kappa D^{2})t^{2}\leq 1-\kappa D^{2}.\] We consider two cases. 1. \(\kappa D^{2}\geq 1\): By considering \(\kappa D^{2}=1\), we see that the bound \((1+D^{2}-\kappa D^{2})t^{2}\leq 1-\kappa D^{2}\) cannot hold. So it would be advantageous to use the bound \[\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}:=\frac{t(1-\kappa)}{\kappa(1-t^{2 })+t^{2}}=\frac{t(1-\kappa)}{t^{2}(1-\kappa)+\kappa}.\] Next, we will compute \(\exp\left(\int_{0}^{1}\theta_{t}\mathrm{d}t\right)\) and we first check that the integral \(\int_{0}^{1}\theta_{t}\mathrm{d}t\) is well-defined. For this reason, we only need to consider whether the sign of the denominator \((1-\kappa)t^{2}+\kappa\) is equal to 0. The only case is \((1-\kappa)t^{2}+\kappa=0\) that happens when \(t_{0}^{2}:=\kappa/(\kappa-1)\). If \(\kappa\in(0,1]\), \((1-\kappa)t^{2}+\kappa\neq 0\). Thus, \(\theta_{t}\) is integrable on \([0,1]\). If \(\kappa>1\), \(t_{0}>1\). Then \(\theta_{t}\) is integrable on \([0,1]\) as well. The only case is \(\kappa=0\) which results in \(t_{0}=0\). However, in this case, we cannot have \(\kappa D^{2}\geq 1\) as \(\kappa=0\). Then by simple calculation, \[\int_{0}^{1}\theta_{t}\mathrm{d}t=(1-\kappa)\int_{0}^{1}\frac{t\mathrm{d}t}{(1 -\kappa)t^{2}+\kappa}=-\frac{1}{2}\log\kappa,\quad\exp\left(\int_{0}^{1} \theta_{t}\mathrm{d}t\right)\leq\frac{1}{\sqrt{\kappa}}.\] 2. 
\(\kappa D^{2}<1\): The condition \((1+D^{2}-\kappa D^{2})t^{2}\leq 1-\kappa D^{2}\) is equivalent to \[t\leq\sqrt{\frac{1-\kappa D^{2}}{1+(1-\kappa)D^{2}}}\] since the denominator is nonnegative as \(\kappa D^{2}<1\). Hence, we define \[\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}:=\begin{cases}\frac{t(t^{2}+D^{2} -1)}{(1-t^{2})^{2}},&0\leq t\leq t_{0},\\ \frac{t(1-\kappa)}{t^{2}(1-\kappa)+\kappa},&t_{0}\leq t\leq 1,\end{cases}\] where \(t_{0}:=\sqrt{\frac{1-\kappa D^{2}}{(1-\kappa)D^{2}+1}}\). In order to compute integral \(\int_{0}^{1}\theta_{\mathrm{r}}\mathrm{d}t\), we note that, following the discussion in the case \(\kappa D^{2}\geq 1\), the denominators \(1-t^{2}\) and \((1-\kappa)t^{2}+\kappa\) do not vanish in the intervals \([0,t_{0}]\) and \([t_{0},1]\), respectively. For \(t\in[0,t_{0}]\), using integral by parts, we have \[\int_{0}^{t_{0}}\frac{t(t^{2}+D^{2}-1)}{(1-t^{2})^{2}}\mathrm{d}t =\frac{1}{2}\int_{0}^{t_{0}}(t^{2}+D^{2}-1)\mathrm{d}\left(\frac{1}{1-t^{2}}\right)\] \[=\left.\frac{t^{2}+D^{2}-1}{2(1-t^{2})}\right|_{t=0}^{t=t_{0}}- \frac{1}{2}\int_{0}^{t_{0}}\frac{2t}{1-t^{2}}\mathrm{d}t=\frac{t_{0}^{2}}{2(1- t_{0}^{2})}D^{2}+\frac{1}{2}\log(1-t_{0}^{2})\] \[=\frac{1-\kappa D^{2}}{2}+\frac{1}{2}\log\left(\frac{D^{2}}{1+(1 -\kappa)D^{2}}\right).\] For \(t\in[t_{0},1]\), we have \[\int_{t_{0}}^{1}\frac{t(1-\kappa)}{\kappa+(1-\kappa)t^{2}}\mathrm{d}t=-\frac{ 1}{2}\log\left(t_{0}^{2}+(1-t_{0}^{2})\kappa\right)=-\frac{1}{2}\log\left( \frac{1}{1+(1-\kappa)D^{2}}\right).\] Hence, we obtain \[\int_{0}^{1}\theta_{\mathrm{r}}\mathrm{d}t=\int_{0}^{t_{0}}\theta_{\mathrm{r} }\mathrm{d}t+\int_{t_{0}}^{1}\theta_{\mathrm{r}}\mathrm{d}t=\frac{1-\kappa D^ {2}}{2}+\log D.\] Then \[\exp\left(\int_{0}^{1}\theta_{\mathrm{r}}\mathrm{d}t\right)=\exp\left(\frac{1 -\kappa D^{2}}{2}+\log D\right)=D\exp\left(\frac{1-\kappa D^{2}}{2}\right).\] This completes the proof of Corollary B.4. **Corollary B.5**.: _Let \(p\) be a probability measure on \(\mathbb{R}^{d}\) with \(D:=(1/\sqrt{2})\mathrm{diam}(\mathrm{supp}(p))<\infty\) and suppose that \(p\) is \(\kappa\)-semi-log-concave with \(\kappa\in(-\infty,0)\). We have_ \[\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}:=\begin{cases}\frac{t(t^{2}+D^{2} -1)}{(1-t^{2})^{2}},&t\in[0,t_{0}]\\ \frac{t(1-\kappa)}{t^{2}(1-\kappa)+\kappa},&t\in[t_{0},1]\end{cases} \tag{34}\] _where \(t_{0}=\sqrt{\frac{1-\kappa D^{2}}{(1-\kappa)D^{2}+1}}\) and_ \[\exp\left(\int_{0}^{1}\theta_{\mathrm{s}}\mathrm{d}s\right)=\exp\left(\frac{1 -\kappa D^{2}}{2}\right)D. \tag{35}\] Proof.: By Theorem B.3, we obtain \[\lambda_{\max}(\nabla V(t,x))\leq\frac{tD^{2}}{(1-t^{2})^{2}}-\frac{t}{1-t^{ 2}},\quad\forall t\in[0,1),\quad\lambda_{\max}(\nabla V(t,x))\leq\frac{t(1- \kappa)}{\kappa(1-t^{2})+t^{2}},\quad\forall t\in\left[\sqrt{\frac{\kappa}{ \kappa-1}},1\right].\] Then it yields \[\lambda_{\max}(\nabla V(t,x))\leq\frac{t(t^{2}+D^{2}-1)}{(1-t^{2})^{2}},\quad \forall t\in\left[0,\sqrt{\frac{\kappa}{\kappa-1}}\right).\] Next, since \(0<\sqrt{\frac{\kappa}{\kappa-1}}<\sqrt{\frac{1-\kappa D^{2}}{(1-\kappa)D^{2}+1 }}\leq 1\) and \(\kappa(1-t^{2})+t^{2}\geq 0\) for all \(t\geq\sqrt{\frac{\kappa}{\kappa-1}}\), then one obtains \[\frac{t(t^{2}+D^{2}-1)}{(1-t^{2})^{2}}\leq\frac{t(1-\kappa)}{\kappa(1-t^{2})+ t^{2}}\] for all \(t\in\left[\sqrt{\frac{\kappa}{\kappa-1}},\sqrt{\frac{-\kappa D^{2}}{(1-\kappa)D^{2}+1}}\right]\). 
We define \[\lambda_{\max}(\nabla V(t,x))\leq\theta_{t}:=\begin{cases}\frac{t(t^{2}+D^{2} -1)}{(1-t^{2})^{2}},&t\in[0,t_{0}]\\ \frac{t(1-\kappa)}{t^{2}(1-\kappa)+\kappa},&t\in[t_{0},1]\end{cases}\] where \(t_{0}:=\sqrt{\frac{1-\kappa D^{2}}{(1-\kappa)D^{2}+1}}\). As in the proof of Corollary B.4, it holds that \[\int_{0}^{t_{0}}\theta_{\mathrm{r}}\mathrm{d}t=\frac{1-\kappa D^{2}}{2}+\frac {1}{2}\log\left(\frac{D^{2}}{1+(1-\kappa)D^{2}}\right),\quad\int_{t_{0}}^{1} \theta_{\mathrm{r}}\mathrm{d}t=-\frac{1}{2}\log\left(\frac{1}{1+(1-\kappa)D^{2 }}\right).\] Then we have \[\int_{0}^{1}\theta_{t}\mathrm{d}t=\frac{1-\kappa D^{2}}{2}+\log D,\quad\exp\left( \int_{0}^{1}\theta_{t}\mathrm{d}t\right)=D\exp\left(\frac{1-\kappa D^{2}}{2} \right).\] This completes the proof of Corollary B.5. ## Appendix C Proof of Theorems 4.2, 4.3 and 4.4 We start with a differential Lipschitz mapping \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) associated with constant \(C\). The following result describes the Lipschitz properties of the derivatives of composite mappings. **Lemma C.1**.: _Let \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a differential Lipschitz mapping with constant \(C\) and let \(\zeta:\mathbb{R}^{d}\to\mathbb{R}\) be a continuously differentiable function. Then_ \[\nabla(\zeta\circ T)=[(\nabla\zeta)\circ T]\nabla T,\] _where \((\nabla T)(x):\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\) be a Jacobian matrix for any \(x\in\mathbb{R}^{d}\). Furthermore, we obtain_ \[|\nabla\zeta(T(x))|\leq\|\nabla T(x)\|_{\mathrm{op}}\cdot|(\nabla\zeta)\circ( T(x))|\leq C|(\nabla\zeta)\circ(T(x))|\] _for all \(x\in\mathbb{R}^{d}\)._ Since the proof of this result is almost trivial by using the chain rule and the Lipschitz mapping \(T\), we omit it here. Through Lemma C.1, we can start the proofs of the functional inequalities which follow from Theorems 2.5 and 2.6. We first begin with the \(\Psi\)-Sobolev inequalities defined in [20]. ### Proof of Theorem 4.2 Proof.: 1. It can be seen from [20, Corollary 2.1] that for standard Gaussian measure \(\gamma_{d}\) on \(\mathbb{R}^{d}\), we have the following \(\Psi\)-Sobolev inequalities: \[\mathrm{Ent}^{\Psi}_{\gamma_{d}}(F)\leq\frac{1}{2}\int_{\mathbb{R}^{d}}\Psi^{ \prime\prime}(F)|\nabla F|^{2}\mathrm{d}\gamma_{d}\] (36) for any smooth function \(F:\mathbb{R}^{d}\to\mathcal{I}\). Let \((X_{t})_{t\in[0,1]}\) be the solution of IVP (1) so that \(X_{1}\sim p\) if \(X_{0}\sim N(0,\mathbf{I}_{d})\). Suppose that \(X_{1}(x):\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a Lipschitz mapping with constant \(C\) and let \(F:=\zeta\circ X_{1}:\mathbb{R}^{d}\to\mathcal{I}\) with \(\zeta:\mathbb{R}^{d}\to\mathcal{I}\). Then combining Lemma C.1, (36) and \(p=\gamma_{d}\circ(X_{1})^{-1}\) we have \[\mathrm{Ent}^{\Psi}_{p}(\zeta)=\mathrm{Ent}^{\Psi}_{\gamma_{d}}( F)\leq\frac{1}{2}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(F)|\nabla F|^{2} \mathrm{d}\gamma_{d} \leq\frac{C^{2}}{2}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta \circ X_{1})|\nabla\zeta\circ X_{1}|^{2}\mathrm{d}\gamma_{d}\] \[=\frac{C^{2}}{2}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta(x ))|\nabla\zeta(x)|^{2}p(x)\mathrm{d}x.\] The proof is complete by Theorems 2.5 and 2.6. 2. Let random vector \(Y\sim\rho\), let \(\tilde{\rho}\) be the law of \(\Sigma^{-1/2}Y\), and define \(\tilde{p}:=\gamma_{d}*\tilde{\rho}\). Set \(\lambda_{\min}:=\lambda_{\min}(\Sigma)\) and \(\lambda_{\max}:=\lambda_{\max}(\Sigma)\). 
Then combining Lemma C.1, (36) and \(\tilde{p}=\gamma_{d}\circ(X_{1})^{-1}\), we have \[\mathrm{Ent}^{\Psi}_{\tilde{p}}(\zeta)\leq\frac{\exp(\lambda_{\min}^{-1}R^{2} )}{2}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta(x))|\nabla\zeta(x)|^{2} \tilde{p}(x)\mathrm{d}x.\] Let \(p=N(a,\Sigma)*\rho\) and let \(\tilde{X}\sim\tilde{p}\) such that \[\Sigma^{1/2}\tilde{X}+a=\Sigma^{1/2}\left(X+\Sigma^{-1/2}Y\right)+a=\left( \Sigma^{1/2}X+a\right)+Y\sim p=N(a,\Sigma)*\rho,\] where \(X\sim N(0,\mathbf{I}_{d})\). Given \(\zeta:\mathbb{R}^{d}\to\mathcal{I}\) and let \(\tilde{\zeta}(x):=\zeta(\Sigma^{1/2}x+a)\) so that \[\mathrm{Ent}^{\Psi}_{p}(\zeta)=\mathrm{Ent}^{\Psi}_{\tilde{p}}(\tilde{\zeta}) \leq\frac{\exp(\lambda_{\min}^{-1}R^{2})}{2}\int_{\mathbb{R}^{d}}\Psi^{\prime \prime}(\tilde{\zeta}(x))|\nabla\tilde{\zeta}(x)|^{2}\tilde{p}(x)\mathrm{d}x.\] Since \((\nabla\tilde{\zeta})(x)=\Sigma^{1/2}\left(\nabla\zeta(\Sigma^{1/2}x+a)\right)\), we get \[|(\nabla\tilde{\zeta})(x)|^{2}\leq\lambda_{\max}\left|\left(\nabla\zeta( \Sigma^{1/2}x+a)\right)\right|^{2}.\] Furthermore, it yields that \[\mathrm{Ent}_{p}^{\Psi}(\zeta)\leq\frac{\lambda_{\max}\exp(\lambda_{\min}^{-1}R^{ 2})}{2}\int_{\mathbb{R}^{d}}\Psi^{\prime\prime}(\zeta(x))|\nabla\ \zeta(x)|^{2}p(x)\mathrm{d}x.\] This completes the proof. ### Proof of Theorem 4.3 Proof.: 1. By using [54, Theorem 4.3], then the Gaussian measure \(\gamma_{d}\) on \(\mathbb{R}^{d}\) satisfies the following Gaussian isoperimetric inequality: \[\gamma_{d}(K_{t})\geq\Phi\left(\gamma_{d}(K)+t\right),\quad t\geq 0\] for any Borel measurable set \(K_{t}:=K+tB_{2}^{d}\) and \(K\subseteq\mathbb{R}^{d}\). Therefore, suppose \((X_{t})_{t\in[0,1]}\) be the solution of IVP (1) so that \(X_{1}\sim p\) if \(X_{0}\sim N(0,\mathbf{I}_{d})\). Moreover, suppose that \(X_{1}(x):\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a Lipschitz mapping with constant \(C\), then for any fixed \(x\in\mathbb{R}^{d}\), \[|X_{1}(x+y)-X_{1}(x)|\leq C|y|,\quad\forall y\in\mathbb{R}^{d}.\] We first show the following result: \[X_{1}^{-1}(E)+\frac{t}{C}B_{2}^{d}\subseteq X_{1}^{-1}(E_{t}),\quad E_{t}:=E+ tB_{2}^{d}\] (37) for any Borel measurable set \(E\subseteq\mathbb{R}^{d}\) and \(t\geq 0\). To obtain (37), we only need to prove that \[X_{1}\left(X_{1}^{-1}(E)+\frac{r}{C}B_{2}^{d}\right)\subseteq E_{t},\quad t\geq 0\] or, in other words, if \(x\in X_{1}^{-1}(E)+\frac{t}{C}B_{2}^{d}\), then \(X_{1}(x)\in E_{t}\) for any Borel measurable set \(K\). Furthermore, if we assume \[x\in X_{1}^{-1}(E)+\frac{t}{C}B_{2}^{d}\quad\text{so that}\quad x=\theta+\frac{t}{C}h\] for some \(\theta\in X_{1}^{-1}(E)\) and \(h\in B_{2}^{d}\), we have \(X_{1}\left(x-\frac{t}{C}h\right)\in E\). Then it yields that \[\left|X_{1}\left(x-\frac{t}{C}h\right)-X_{1}(x)\right|\leq t,\quad t\geq 0\] where \(x-\frac{t}{C}h\in X_{1}^{-1}(E)\). Therefore, \(X_{1}(x)\in E_{t}\) as desired. Finally, combining the Gaussian isoperimetric inequality and (37), it yields \[p(E_{t})=\gamma_{d}\left(X_{1}^{-1}(E_{t})\right)\geq\gamma_{d}\left(X_{1}^{-1 }(E)+\frac{t}{C}B_{2}^{d}\right)\geq\Phi\left(\gamma_{d}\left[X_{1}^{-1}(E)+ \frac{t}{C}\right]\right)=\Phi\left(p(E)+\frac{t}{C}\right).\] This proof is completed by Theorems 2.5 and 2.6. 2. Let random vector \(Y\sim\rho\), let \(\tilde{\rho}\) be the law of \(\Sigma^{-1/2}Y\), and define measure \(\tilde{p}:=\gamma_{d}*\tilde{\rho}\). Set \(\lambda_{\min}:=\lambda_{\min}(\Sigma)\) and \(\lambda_{\max}:=\lambda_{\max}(\Sigma)\). 
Similar to the argument of part (1), for any Borel set \(E\subset\mathbb{R}^{d}\) and \(t\geq 0\), we obtain \[\tilde{p}\left(E_{t}\right)\geq\Phi\left(\tilde{p}(E)+\frac{t}{C}\right),C:= \left(\lambda_{\min}\right)^{1/2}\exp\left(\frac{R^{2}}{2\lambda_{\min}} \right).\] Let \(p=N(a,\Sigma)*\rho\) and let \(\tilde{X}\sim\tilde{p}\) so that \[\Sigma^{1/2}\tilde{X}+a=\left(\Sigma^{1/2}X+a\right)+Y\sim p=N(a,\Sigma)*\rho\] and \(X\sim N(0,\mathbf{I}_{d})\). Then for any Borel measurable set \(E\subset\mathbb{R}^{d}\) and \(t\geq 0\), it yields that \[p(E_{t})=\tilde{p}\left(\Sigma^{-1/2}(E-a)+\Sigma^{-1/2}tB_{2}^{d}\right)\geq \tilde{p}\left(\Sigma^{-1/2}(E-a)+t\lambda_{\max}^{-1/2}B_{2}^{d}\right).\] Hence, we obtain \[p(E_{t})\geq\Phi\left(\tilde{p}\left[\Sigma^{-1/2}(E-a)\right]+\frac{t\lambda_ {\max}^{-1/2}}{C}\right).\] We obtain the desired result by applying \(\tilde{p}\left[\Sigma^{-1/2}(E-a)\right]=p(E)\). This completes the proof. ### Proof of Theorem 4.4 Proof.: 1. We will use the fact that [68, Proposition 3.1] the \(q\)-Poincare inequality holds for the standard Gaussian measure \(\gamma_{d}\) on \(\mathbb{R}^{d}\): \[\mathbb{E}_{\gamma_{d}}\left[F^{q}\right]\leq(q-1)^{q/2}\mathbb{E}_{\gamma_{d}} \left[|\nabla F|^{q}\right],\] (38) for any smooth function \(F\in L^{q}(\gamma_{d})\) with \(\mathbb{E}_{\gamma_{d}}[F]=0\). Let \((X_{t})_{t\in[0,1]}\) be the solution of IVP (1) so that \(X_{1}\sim p\) if \(X_{0}\sim N(0,\mathbf{I}_{d})\). Suppose that \(X_{1}(x):\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a Lipschitz mapping with constant \(C\) and let \(F:=\eta\circ X_{1}\). Then combining Lemma C.1, (38) and \(p=\gamma_{d}\circ(X_{1})^{-1}\) we have \[\mathbb{E}_{p}[\eta^{q}]=\mathbb{E}_{\gamma_{d}}[F^{q}]\leq(q-1)^{q/2} \mathbb{E}_{\gamma_{d}}[|\nabla F|^{q}]\leq C^{q}(q-1)^{q/2}\mathbb{E}_{p}[| \nabla\eta|^{q}].\] The proof is complete by Theorems 2.5 and 2.6. 2. Let \(Y\sim\rho\), let \(\tilde{\rho}\) be the law of \(\Sigma^{-1/2}Y\), and define \(\tilde{p}:=N(0,\mathbf{I}_{d})*\tilde{\rho}\). Set \(\lambda_{\min}:=\lambda_{\min}(\Sigma)\) and \(\lambda_{\max}:=\lambda_{\max}(\Sigma)\). The argument of part (1) gives, \[\mathbb{E}_{\tilde{p}}[\eta^{q}]\leq\exp\left(\frac{qR^{2}}{2\lambda_{\min}} \right)\lambda_{\min}^{q/2}(q-1)^{q/2}\mathbb{E}_{\tilde{p}}[|\nabla\eta|^{q}].\] Let \(p=N(a,\Sigma)*\rho\) and let \(\tilde{X}\sim\tilde{p}\) such that \[\Sigma^{1/2}\tilde{X}+a=\left(\Sigma^{1/2}X+a\right)+Y\sim p=N(a,\Sigma)*\rho\] and \(X\sim N(0,\mathbf{I}_{d})\). Let \(\tilde{\eta}(x):=\eta\left(\Sigma^{1/2}x+a\right)\) so that \[\mathbb{E}_{p}[\eta^{q}]=\mathbb{E}_{\tilde{p}}\left[(\tilde{\eta})^{q}\right] \leq\exp\left(\frac{qR^{2}}{2\lambda_{\min}}\right)\lambda_{\min}^{q/2}(q-1)^ {q/2}\mathbb{E}_{\tilde{p}}[|\nabla\tilde{\eta}|^{q}].\] Since \((\nabla\tilde{\eta})(x)=\Sigma^{1/2}\left(\nabla\eta\left(\Sigma^{1/2}x+a \right)\right)\) we have \[|(\nabla\tilde{\eta})(x)|^{q}\leq(\lambda_{\max})^{q/2}\left|\nabla\eta\left( \Sigma^{1/2}x+a\right)\right|^{q}.\] Further, we obtain \[\mathbb{E}_{p}[\eta^{q}]=\mathbb{E}_{\tilde{p}}\left[(\tilde{\eta})^{q}\right] \leq(\lambda_{\min}\lambda_{\max})^{q/2}\exp\left(\frac{qR^{2}}{2\lambda_{ \min}}\right)(q-1)^{q/2}\mathbb{E}_{p}[|\nabla\eta|^{q}].\] This completes the proof. 
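As a quick numerical sanity check of the Gaussian inequality (38) from which the argument above starts (an illustration of ours, not part of the proof), one can compare both sides for a simple mean-zero test function, here \(F(x)=x_{1}^{3}-3x_{1}\) with \(q=4\); the test function, sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, q, n = 3, 4, 200_000
X = rng.standard_normal((n, d))

# Test function F(x) = x_1^3 - 3*x_1, which has mean zero under gamma_d.
F = X[:, 0] ** 3 - 3.0 * X[:, 0]
grad_norm = np.abs(3.0 * X[:, 0] ** 2 - 3.0)      # |grad F|, nonzero in x_1 only

lhs = np.mean(F ** q)
rhs = (q - 1) ** (q / 2) * np.mean(grad_norm ** q)
print(f"E[F^q] ~ {lhs:.0f}  <=  (q-1)^(q/2) E[|grad F|^q] ~ {rhs:.0f}")
```

For this choice the exact Gaussian moments give \(3348\) on the left against \(43740\) on the right, so (38) holds with a comfortable margin.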
## Appendix D Time changes **Lemma D.1**.: _Let \((\overline{X}_{t})_{t\in[0,1)}\) be a diffusion process defined by (7) with \(\varepsilon\to 0\) and let \((\overline{Y}_{s})_{s\geq 0}\) be an Ornstein-Uhlenbeck process \((\overline{Y}_{s})_{s\geq 0}\) defined by_ \[\mathrm{d}\overline{Y}_{s}=-\overline{Y}_{s}\mathrm{d}s+\sqrt{2}\mathrm{d} \overline{W}_{s},\quad\overline{Y}_{0}\sim\nu,\quad s\geq 0. \tag{39}\] _Then \((\overline{X}_{t})_{t\in[0,1)}\) is equivalent to \((\overline{Y}_{s})_{s\geq 0}\) through the change of time formula \(t=1-e^{-s}\)._ Proof.: Let \(s=-\log(1-t)\) for any \(t\in[0,1)\). By applying (7), it yields \[\mathrm{d}\overline{X}_{1-e^{-s}}=-\overline{X}_{1-e^{-s}}\mathrm{d}s+\sqrt{2 }\mathrm{d}\overline{W}_{s},\quad\overline{X}_{0}\sim\nu,\quad s\geq 0.\] On the one hand, since (39) has a unique strong solution, it indicates \(\overline{Y}_{s}=\overline{X}_{1-e^{-s}}\) for all \(s\geq 0\). On the other hand, the infinitesimal generator of Markov process \((\overline{Y}_{s})_{s\geq 0}\) is given by \[L^{\overline{Y}}=\Delta-x\cdot\nabla. \tag{40}\] By using (7), the infinitesimal generator of \((\overline{X}_{t})_{t\in[0,1)}\) is given by \[L^{\overline{Y}}_{t}=\frac{1}{1-t}(\Delta-x\cdot\nabla). \tag{41}\] Furthermore, combining the chain rule and straightforward calculation, we obtain that processes \(\overline{X}_{t}\) and \(\overline{Y}_{s}\) have the same infinitesimal generator, which implies \(\overline{X}_{t}=\overline{Y}_{s}\) for any \(t\in[0,1),s=-\log(1-t)\) **Lemma D.2**.: _Let \((X_{t}^{*})_{t\in[0,1)}\) be the time reversal of a Follmer flow associated to probability measure \(\nu\) defined by (10) with \(\varepsilon\to 0\) and let \((Y_{s}^{*})_{s\geq 0}\) be a heat flow from probability measure \(\nu\) to the standard Gaussian measure \(\gamma_{d}\) defined by_ \[\mathrm{d}Y_{s}^{*}(x)=-\nabla\log\left\{\int_{\mathbb{R}^{d}}r\left(e^{-s}Y_{s }^{*}(x)+\sqrt{1-e^{-2s}}z\right)\mathrm{d}\gamma_{d}(z)\right\}\mathrm{d}s \tag{42}\] _where \(r(x):=(\mathrm{d}\nu/\mathrm{d}\gamma_{d})(x),Y_{0}^{*}\sim\nu\) for all \(s\geq 0\). Then \((X_{t}^{*})_{t\in[0,1)}\) is equivalent to \((Y_{s}^{*})_{s\geq 0}\) through the change of time formula \(t=1-e^{-s}\)._ Proof.: Let \(s=-\log(1-t)\) for every \(t\in[0,1)\). By (10), it yields \[\mathrm{d}X_{1-e^{-s}}^{*}(x)=-\nabla\log\left\{\int_{\mathbb{R}^{d}}r\left(e ^{-s}X_{1-e^{-s}}^{*}(x)+\sqrt{1-e^{-2s}}z\right)\mathrm{d}\gamma_{d}(z) \right\}\mathrm{d}s\] where \(X_{0}^{*}\sim\nu\) for all \(s\geq 0\). The expression above indicates that \(Y_{s}^{*}:=X_{1-e^{-s}}^{*}\) satisfies (42).
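To make the time change concrete, the following minimal sketch (ours; the bimodal choice of \(\nu\), the step size and the horizon are arbitrary illustrations) simulates the Ornstein-Uhlenbeck dynamics (39) with an Euler-Maruyama scheme in one dimension. By Lemma D.1 the law at time \(s\) coincides with that of the time-changed process at \(t=1-e^{-s}\), so for large \(s\) the sample mean and variance should approach those of the standard Gaussian, consistent with the flow reaching \(\gamma_{d}\) at \(t=1\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, h, steps = 20_000, 1e-3, 5000          # horizon s = 5, i.e. t = 1 - exp(-5) ~ 0.993

# Illustrative choice of nu: a bimodal Gaussian mixture on the real line.
Y = np.where(rng.random(n) < 0.5, -2.0, 2.0) + 0.3 * rng.standard_normal(n)

for _ in range(steps):                     # Euler-Maruyama for dY = -Y ds + sqrt(2) dW
    Y += -Y * h + np.sqrt(2.0 * h) * rng.standard_normal(n)

# Marginal law at s = 5 equals that of the time-changed process at t = 1 - exp(-5);
# it is already close to the standard Gaussian (mean ~ 0, variance ~ 1).
print(Y.mean(), Y.var())
```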
2309.14852
Influence of the optical Fe II quasi-continuum on measuring the spectral parameters of active galactic nuclei
We explore the influence of optical Fe II quasi-continuum on the measured spectral parameters in the 4150-5500 A range for the spectra of Type 1 active galactic nuclei (AGNs). We assume that the broad line region is composed of two sub-regions: the very broad line region (VBLR) and the intermediate line region (ILR). We constructed a large set of synthetic AGN spectra by taking different portions of the VBLR and ILR contributions, where initially the VBLR and ILR model spectra were constructed on the basis of prototypes of two observed spectra with dominant VBLR (i.e. ILR) emission. To investigate the influence of the optical Fe II quasi-continuum on the AGN measured spectral parameters, we fit the power-law continuum and emission lines in a set of model spectra, as commonly done for observed AGN spectra. We then compared the spectral parameters obtained after the fitting procedure with those of the model. We find that the optical Fe II quasi-continuum can be very strong in the case of spectra with strong and very broad Fe II lines and it is difficult to fully separate it from the power-law continuum. This gives the effect of a slightly underestimated H$\beta$ width and underestimated fluxes of the H$\beta$ and Fe II lines, while the continuum flux is then slightly overestimated. The most affected spectral parameters are the line equivalent widths (EWs), especially EW Fe II, which may be strongly underestimated. We discuss the possible underlying physics in the quasar main sequence, as implied by the results of our spectral modelling. We find that the set of AGN model spectra assuming different ILR and VBLR contributions can aptly reproduce the quasar main sequence, that is, the full width at half maximum (FWHM) H$\beta$ versus Fe II/H$\beta$ anti-correlation, where both parameters in this anti-correlation are strongly dependent on the ILR and VBLR contribution rate.
Luka Č. Popović, Jelena Kovačević-Dojčinović, Ivan Dojčinović, Maša Lakićević
2023-09-26T11:32:12Z
http://arxiv.org/abs/2309.14852v1
Influence of the optical Fe II quasi-continuum on measuring the spectral parameters of active galactic nuclei ###### Abstract Context: Aims:We explore the influence of optical Fe II quasi-continuum on the measured spectral parameters in the \(\lambda\lambda\)4150-5500 A range for the spectra of Type 1 active galactic nuclei (AGNs). Methods:We assume that the broad line region is composed of two sub-regions: the very broad line region (VBLR) and the intermediate line region (ILR). We constructed a large set of synthetic AGN spectra by taking different portions of the VBLR and ILR contributions, where initially the VBLR and ILR model spectra were constructed on the basis of prototypes of two observed spectra with dominant VBLR (i.e. ILR) emission. To investigate the influence of the optical Fe II quasi-continuum on the AGN measured spectral parameters, we fit the power-law continuum and emission lines in a set of model spectra, as commonly done for observed AGN spectra. We then compared the spectral parameters obtained after the fitting procedure with those of the model. Results:We find that the optical Fe II quasi-continuum can be very strong in the case of spectra with strong and very broad Fe II lines and it is difficult to fully separate it from the power-law continuum. This gives the effect of a slightly underestimated H\(\beta\) width and underestimated fluxes of the H\(\beta\) and Fe II lines, while the continuum flux is then slightly overestimated. The most affected spectral parameters are the line equivalent widths (EWs), especially EW Fe II, which may be strongly underestimated. We discuss the possible underlying physics in the quasar main sequence, as implied by the results of our spectral modelling. We find that the set of AGN model spectra assuming different ILR and VBLR contributions can aptly reproduce the quasar main sequence, that is, the full width at half maximum (FWHM) H\(\beta\) versus Fe II/H\(\beta\) anti-correlation, where both parameters in this anti-correlation are strongly dependent on the ILR and VBLR contribution rate. Conclusions: ## 1 Introduction Active galactic nuclei (AGNs) represent the most powerful sources in the universe, emitting energy across a broad wavelength band. They are divided into several groups that exhibit a range of different spectral characteristics (see e.g. Peterson, 1997; Osterbrock & Ferland, 2006; Netzer, 2013, etc.). Type 1 AGNs show different broad lines in the UV/optical spectral range, which originate very close to the central supermassive black hole (SMBH) and can offer important information on the physics in the SMBH vicinity. Additionally, broad emission lines have commonly been used for making supermassive black hole (SMBH) mass estimates (for review, see Popovic, 2020, and reference therein). One of the important parts of the spectral region is the optical one around the H\(\beta\) line. In particular, the H\(\beta\) line and continuum at \(\lambda\) 5100A are commonly used for single epoch mass estimates of SMBHs. Using the relation between mass and optical parameters, some other relations for SMBH mass estimations can be derived using the other lines apart from H\(\beta\) (especially in the UV) and the corresponding continuum. Therefore, the spectral properties of the H\(\beta\) and surrounding (continuum and line) spectra are very important for the SMBH mass determination, but also for the investigation of the other characteristics, such as AGN orientation, accretion rate, dust distribution around AGN, and so on (see e.g. 
Collin et al., 2006; Jarvis & McLure, 2006; Lakicevic et al., 2018, 2022; Sriram et al., 2022, etc.). Additionally, in the H\(\beta\) wavelength region, the emission of the Fe II lines is present (see e.g. Collin-Souffrin et al., 1980; Joly, 1981; Kovacevic et al., 2010; Shields et al., 2010; Marinello et al., 2016; Park et al., 2022; Gaskell et al., 2022, etc.), which is very important for the investigation of AGN physics. The origin of these lines and their characteristics have been investigated in a number of papers (for review see Gaskell et al., 2022, and reference therein). However, there are many open questions concerning the optical Fe II lines origin and their connection with other AGN properties (see e.g. Kovacevic-Dojcinovic & Popovic, 2015; Le & Woo, 2019; Marziani et al., 2021). In principle, there are two distinct cases of AGN Type 1 spectra, one with generally narrower emission lines and stronger Fe II observed in the case of Narrow Line Seyfert 1 (NLS1) galaxies, while and the other one shows the broad emission lines observed in the case of Broad Line Seyfert 1 (BLS1). The difference between these two types (with the exception of the optical Fe II strength), could be seen in the other spectral properties (Sulentic et al., 2000). Boroson & Green (1992) found that as the optical to X-ray slope and equivalent width (EW) of the optical Fe II emission increase, the EW of the [OIII] lines (near H\(\beta\)) decreases, and the full width at half maximum intensity (FWHM) of H\(\beta\) also decreases. These relationships represent the so-called eigenvector 1 (EV1) correlations, obtained via a principal component analysis of an AGN sample, which may result from a range of different effects. It seems that the combination of the AGN orientation and accretion rate can cause different spectral properties of AGNs (Shen & Ho, 2014), indicating different physical AGN characteristics. The intensity of optical Fe II emission compared with the intensity of H\(\beta\) can be a good indicator of physical processes in AGNs. In the plane of the FWHM H\(\beta\) versus the intensity ratio of Fe II/H\(\beta\), there is a trend showing that AGNs with a broader H\(\beta\) have weaker Fe II optical lines that make so called 'quasar main sequence', which may be an indicator of the AGN accretion and orientation (see Sulentic et al., 2000; Shen & Ho, 2014; Marziani et al., 2018). Using the optical Fe II strength relative to the H\(\beta\) intensity and the FWHM H\(\beta\), it is possible to select different types of AGNs on the quasar main sequence. One of the divisions of these objects is based on population A and B AGNs (or quasars), taking into consideration that population A has FWHM H\(\beta\)\(<\) 4000 km s\({}^{-1}\) and strong Fe II emission (see Marziani et al., 2018, and reference therein). Population B represents a group of AGNs with broader lines and weaker optical Fe II emission. However, there are some other spectral characteristics that can indicate different physical properties and optical Fe II origin (as c.g. contribution of starbursts; see Popovic & Kovacevic, 2011). For example, Du & Wang (2019) found a prominent correlation between the fluxes of Fe II and H\(\beta\) emission lines with H\(\beta\) lags, confirming an important role of accretion rate in driving the shortened lags between variability of the continuum at \(\lambda\)5100A and H\(\beta\). 
Using this, they established the scaling relation between the radius of the broad line region (BLR) and the continuum luminosity using the relative strength of the Fe II emission. This relation has been used for estimates of the SMBH mass and accretion rate from single-epoch spectra of AGNs. We may thus conclude that the optical Fe II lines are often used as an indicator of physical processes. Therefore, it is important to consider different aspects of the Fe II origin and its influence on the spectral characteristics in the H\(\beta\) wavelength region. One of the conclusions in Kovacevic et al. (2010) was that the Fe II lines probably originate in the outer part of the BLR, but that a very broad Fe II component that originates closer to the SMBH may also be present. Due to the superposition of a number of the optical Fe II lines, it is possible that very broad Fe II components form a quasi-continuum, which is difficult to distinguish from the real continuum emission. This motivates us to explore the influence of the optical Fe II quasi-continuum on AGN-measured spectral parameters, assuming that the BLR is complex and that Fe II and H\(\beta\) emission can originate from two sub-regions of the BLR, one closer to the SMBH, emitting very broad lines, and one farther from the central SMBH, emitting narrower lines, which are typical for NLS1 AGNs. Therefore, for the purpose of this research, we constructed a set of synthetic AGN spectra and investigated the influence of the Fe II pseudo-continuum on the measurements of some spectral parameters. The paper is organised as follows. In Sect. 2, we describe the theoretical basis of our BLR model, the construction of the set of synthetic model spectra, and the method of extraction of the spectral properties. In Sect. 3, we give our results, and in Sect. 4 we outline the conclusions.

## 2 Method and theoretical base of modelling

In this section, we describe the two-component BLR model and the procedure for the construction of synthetic spectra in the 4150-5500 A spectral range, following this BLR model. Afterwards, we describe the fitting procedure of the model spectra, in order to compare the spectral parameters obtained from the fit with those included in the model.

### Two-component BLR model and optical Fe II emission

One of the open questions considering the Fe II emission is the place of origin of the Fe II optical lines (see e.g. Kovacevic et al., 2010; Barth et al., 2013; Kovacevic-Dojcinovic & Popovic, 2015). As concluded in Kovacevic et al. (2010), the Fe II emission mostly originates from the outer part of the BLR, but it seems that a very broad component, which originates from the inner part of the BLR, may be present too. The contribution of the very broad component of Fe II is difficult to estimate since it contributes to the quasi-continuum in the spectra. Therefore, here we consider the case where the optical Fe II lines originate in a complex emission region. We assume that the Fe II lines have a very broad component coming from the inner part of the broad line region, close to the central SMBH (i.e. the so-called very broad line region, VBLR), and a component that originates in an intermediate line region (ILR), which is the outer part of the broad line region. This model is the so-called two-component BLR model, similar to the one accepted and discussed in a number of papers (see e.g. Popovic et al., 2004; Bon et al., 2006, 2009; Hu et al., 2012, 2020, etc.).
This assumption is supported by several careful empirical analyses of Fe II lines, where it has been observed that Fe II lines probably have two components: one narrower and one broader (Veron-Cetty et al., 2004; Dong et al., 2008; Park et al., 2022).
Figure 1: Construction of the ILR model spectrum. Observed spectrum of SDSS J020039.16-084554.9 (SDSS plate-mjd-fiber: 0666-52149-496) after reddening and redshift correction (_top_). ILR model spectrum made by using the estimated parameters from the upper spectrum (_bottom_).
As discussed in previous investigations based on a two-component BLR model, the contributions of the VBLR and ILR can be different in different spectra. To simplify the analysis, here we assume that the portion of the VBLR contribution in the Fe II lines is similar to that in the Balmer lines in the same spectrum. Analysing the properties of the Fe II lines in a large sample of Type 1 spectra where these lines have been carefully investigated (see the sample in Kovacevic et al., 2010; Kovacevic-Dojcinovic & Popovic, 2015), we may notice two borderline cases: 1) the spectra with narrow Balmer lines and strong narrow Fe II lines (typical for NLS1s) and 2) the spectra with very broad Balmer lines and mostly weaker, broad Fe II lines. We assumed that the first ones are dominantly emitted from the ILR, while the second ones are likely to primarily be emitted from the VBLR. Therefore, first we constructed two basic model spectra, which are expected to represent the emission from the ILR and the VBLR, respectively, based on the properties of the two real prototype spectra, for which we assumed a dominant contribution from one of these two sub-regions. In order to build the models for the two mentioned borderline cases, we used the sample previously analysed by Kovacevic-Dojcinovic & Popovic (2015), where the Fe II lines were fitted with the Fe II template described in Kovacevic et al. (2010). This template enables the estimation of the width of the Fe II lines as well as the intensities of different Fe II line groups. Following the measurements of the line properties given in Kovacevic-Dojcinovic & Popovic (2015), we found two of the most extreme spectra, with the narrowest and broadest Fe II and H\(\beta\) emission lines. Although the majority of the spectra with extremely broad Balmer lines have weak, broad Fe II lines, here we chose the broadest spectrum with a significant amount of Fe II emission to investigate the Fe II pseudo-continuum. The narrowest Fe II and H\(\beta\) lines are seen in spectrum SDSS 0666-52149-496 (plate-mjd-fiber), where the FWHM of the broad H\(\beta\) is equal to 1970 km s\({}^{-1}\) and the FWHM of the Fe II lines is equal to 1500 km s\({}^{-1}\), as measured in Kovacevic-Dojcinovic & Popovic (2015). The relative intensity of H\(\beta\) to the Fe II 4549 A line is H\(\beta\)/Fe II 4549 A = 4.4. The spectrum with the broadest Fe II and H\(\beta\) from that sample is SDSS 2089-53498-385 (plate-mjd-fiber), where the FWHM of the broad H\(\beta\) is equal to 9030 km s\({}^{-1}\) and the FWHM of the Fe II lines is equal to 8950 km s\({}^{-1}\). The relative intensity of H\(\beta\) to the Fe II 4549 A line is H\(\beta\)/Fe II 4549 A = 1.8. These two spectra are shown in the upper panels of Figs. 1-2. We assumed that the emission from the ILR and the VBLR, respectively, is dominant in these two spectra, so we used them as prototypes for the construction of the ILR and VBLR model spectra.
The models are constructed by adding the flux of the continuum, the flux of the optical Fe II lines, and the broad components of the Balmer lines (H\(\beta\), H\(\gamma\) and H\(\delta\)). The Fe II lines are reproduced using the Fe II template given in Kovacevic et al. (2010), extended with the Fe II lines given in Shapovalova et al. (2012), where the Fe II widths and relative intensities between the Fe II line groups are taken as measured in the prototype spectra. The Balmer lines are represented with a single Gaussian for each line, in order to simplify the model. The H\(\beta\) lines have the same FWHM and relative intensity to Fe II as measured in the prototype spectra, while the H\(\gamma\) and H\(\delta\) lines are taken to have the same width as the H\(\beta\) line, and their intensities relative to H\(\beta\) are taken to be H\(\gamma\)/H\(\beta\) = 0.5 and H\(\delta\)/H\(\beta\) = 0.3, following Case B (Osterbrock & Ferland, 2006). We set all considered lines in the models to have no shift relative to the reference wavelength. To simplify the analysis, the narrow lines (narrow Balmer lines and [O III] lines) and the broad He II \(\lambda\)4687.01 A and He I \(\lambda\)4027.32 A lines are not included in the models. The continuum emission included in the ILR and VBLR model spectra is the same as the estimated power-law continuum level in the prototype ILR and VBLR spectra. For the prototype ILR spectrum (SDSS 0666-52149-496), the estimated continuum is \(I_{\rm cont\,ILR}(\lambda)=10.9\,(\lambda/5693.7)^{-1.5}\), while in the prototype VBLR spectrum (SDSS 2089-53498-385) it is \(I_{\rm cont\,VBLR}(\lambda)=9.8\,(\lambda/5697.4)^{-1.7}\). Finally, the intensities of all considered lines and the continuum levels are normalised to the H\(\beta\) intensity, so in both models the H\(\beta\) intensity is equal to 1. In the model construction, we used the intensities measured in the two prototype spectra for the relative intensities of the lines included in the model, instead of arbitrary relative intensities, to make the models as realistic as possible, since these ratios probably vary for spectra with different line widths. The models for the ILR and VBLR spectra are shown in the lower panels of Figs. 1-2.
Figure 2: Construction of the VBLR model spectrum. Details are the same as in Fig. 1, but for SDSS J120407.57+341916.3 (SDSS plate-mjd-fiber: 2089-53498-385) (_top_), which is used for construction of the VBLR model spectrum (_bottom_).
After obtaining the two initial ILR and VBLR spectral models, we created the initial set of synthetic spectra, obtained as a linear combination of the ILR and VBLR spectral models. In this way, we reproduced multiple spectra with different contributions of the ILR and VBLR components in the Fe II and Balmer lines. Composite model spectra (\(I_{\rm comp}(\lambda)\)) are defined as: \[I_{\rm comp}(\lambda)=p_{1}\cdot I_{ILR}(\lambda)+p_{2}\cdot I_{VBLR}(\lambda),\] where \(I_{ILR}(\lambda)\) and \(I_{VBLR}(\lambda)\) are the spectra corresponding to the ILR and VBLR spectral models, respectively (shown in the lower panels of Figs. 1-2), and \(p_{1}\), \(p_{2}\) are the contribution rates of the ILR and VBLR, respectively, assuming that \(p_{1}+p_{2}=1\). Since the ILR and VBLR model spectra are normalised to the H\(\beta\) maximum (the H\(\beta\) intensity is equal to 1), when multiplying these two spectra with different coefficients, \(p_{i}\), we obtain a summed spectrum where the maximum of H\(\beta\) is equal to 1. In this way, we can explore the line shapes and their ratios relative to the H\(\beta\) intensity.
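For illustration, a minimal Python sketch of this composite-spectrum construction is given below. It is not the code used in this work: the wavelength grid, the flat pseudo-continuum level, and the restriction to a single H\(\beta\) Gaussian (omitting H\(\gamma\), H\(\delta\) and the Fe II template) are simplifying assumptions made for the example, while the FWHM values are taken from the two prototype spectra quoted above.

```python
import numpy as np

# Illustrative wavelength grid covering the analysed 4150-5500 A range.
wavelength = np.arange(4150.0, 5500.0, 1.0)

def gaussian_line(wl, center, peak, fwhm_kms):
    """Single-Gaussian emission line with its FWHM given in km/s."""
    sigma = center * (fwhm_kms / 299792.458) / 2.3548  # convert FWHM to sigma in Angstrom
    return peak * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# Hypothetical stand-ins for the two basic model spectra: a flat pseudo-continuum plus
# a single H-beta Gaussian with unit peak above the continuum (a crude proxy for the
# full continuum + Fe II template + Balmer-line models described in the text).
i_ilr = 0.5 + gaussian_line(wavelength, 4861.3, 1.0, 1970.0)   # narrow (ILR) prototype width
i_vblr = 0.5 + gaussian_line(wavelength, 4861.3, 1.0, 9030.0)  # broad (VBLR) prototype width

def composite_spectrum(p1):
    """I_comp(lambda) = p1 * I_ILR(lambda) + p2 * I_VBLR(lambda), with p1 + p2 = 1."""
    return p1 * i_ilr + (1.0 - p1) * i_vblr

# The nine composites of the initial set: p1 = 0.1, 0.2, ..., 0.9.
initial_set = {round(p1, 1): composite_spectrum(p1) for p1 in np.arange(0.1, 1.0, 0.1)}
```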
The pairs of coefficients (\(p_{1},p_{2}\)) take values from 0.1 to 0.9, with step 0.1 (the ratio of \(p_{1}\):\(p_{2}\) takes the following values: 0.1:0.9, 0.2:0.8, 0.3:0.7,..., 0.8:0.2, 0.9:0.1). In this way, we constructed a set of nine model spectra with different contributions from the ILR and VBLR components. The He I line may significantly contribute to the pseudo-continuum level at \(\sim\)4000 A; therefore, in the further analysis, we use the model spectra in the range of 4150-5500 A (see the vertical dashed line in Figs. 1-2). In that range, there is no contribution from the broad He I line. However, in the case of spectra with very broad lines, the broad H\(\delta\) contributes to the pseudo-continuum level at \(\sim\)4150 A (see Fig. 2), and therefore the flux of H\(\delta\) is included in all models for wavelengths larger than \(\sim\)4150 A. The initial set of model spectra (continuum and emission lines included), obtained as a linear combination of the ILR and VBLR spectral models, is shown in Fig. 3. In Fig. 4, we compared only the emission lines (Balmer + Fe II lines) from the initial set of model spectra, without including the continuum emission. It can be seen that in the case of the broadest model (ILR contribution 0.1, VBLR contribution 0.9), the Fe II and Balmer lines are blended, creating a significant pseudo-continuum.
Figure 3: Initial set of synthetic spectra made as a linear combination of the ILR and VBLR model spectra.
Figure 4: Initial set of models (nine modelled spectra): the comparison of the emission lines (Balmer lines + Fe II), with no continuum included. The broadest model is VBLR:ILR equal to 0.9:0.1 (blue) and the narrowest is VBLR:ILR equal to 0.1:0.9 (red).
### Construction of the large set of model spectra
Additionally, we created a large set of model spectra, giving more possibilities for width variation. In order to do that, we created several ILR and VBLR model spectra with different line widths, which are combined in order to create the larger sample of synthetic spectra. The width ranges for which we created the ILR and VBLR models are determined following the empirical results given in Kovacevic et al. (2010), obtained by fitting a low-redshift (z \(<\) 0.7) sample of \(\sim\)300 Type 1 AGNs from SDSS. In that work, the H\(\beta\) line is fitted with a two-component model, assuming that the core component arises in the ILR and the wing component in the VBLR. The measured Doppler widths of the H\(\beta\) ILR and VBLR components, obtained from the best fit, are given in Table 12 of Kovacevic et al. (2010). Following the ranges obtained for the ILR and VBLR widths from this research, we constructed six ILR model spectra and 11 VBLR model spectra. The six ILR model spectra have the same continuum level and relative intensities between the Fe II and Balmer lines as measured in SDSS 0666-52149-496, just with different FWHMs of all emission lines (Balmer lines and Fe II): 1500 km s\({}^{-1}\), 2000 km s\({}^{-1}\), 2500 km s\({}^{-1}\), 3000 km s\({}^{-1}\), 3500 km s\({}^{-1}\), and 4000 km s\({}^{-1}\).
Similarly, we obtained the 11 VBLR model spectra with the same continuum level and relative intensities between the Fe II and Balmer lines as measured in the initial VBLR model (SDSS 2089-53498-385), but including different FWHMs of all emission lines: 4500 km s\({}^{-1}\), 5000 km s\({}^{-1}\), 5500 km s\({}^{-1}\), 6000 km s\({}^{-1}\), 6500 km s\({}^{-1}\), 7000 km s\({}^{-1}\), 7500 km s\({}^{-1}\), 8000 km s\({}^{-1}\), 8500 km s\({}^{-1}\), 9000 km s\({}^{-1}\), and 9500 km s\({}^{-1}\). In this set of ILR and VBLR models, we assume that the ILR components have the same widths for the Balmer lines and Fe II. The same is assumed for the VBLR components of these lines. This assumption is supported by the results given in Kovacevic et al. (2010), where significant correlations were found in a large sample between the width of the Fe II lines and the widths of the H\(\beta\) ILR and H\(\beta\) VBLR components. The large set of model spectra is constructed similarly to the initial set of models, as described above. We performed the linear combination of each pair of ILR and VBLR models, with coefficients p1 and p2, where the coefficients may have values in the range of [0.1,0.9], with step 0.1, and p1+p2=1. Finally, this gives 594 synthetic spectra, with the FWHM of the total H\(\beta\) (ILR+VBLR) in the range of 1500 - 8700 km s\({}^{-1}\). Since we assume that the portion of the VBLR over the ILR component is the same for the H\(\beta\) and Fe II lines, and that the ILR and VBLR components of both lines have the same widths, the FWHM of the Fe II lines is the same as for H\(\beta\) in each spectrum and is in the range of 1500 - 8700 km s\({}^{-1}\) for the large set of model spectra. In the further text, the nine synthetic spectra constructed as a linear combination of the ILR and VBLR models obtained directly from the prototype observed spectra will be called the "initial set of model spectra" (shown in Figs. 3 and 4), while the set of 594 synthetic spectra created by varying the widths of the ILR and VBLR components will be called the "large set of model spectra."
### Fitting the synthetic spectra
In order to investigate the influence of the underlying Fe II and Balmer line pseudo-continuum on the measured parameters in AGN spectra, we performed the following test. We calculated several spectral parameters directly from the model: the flux of the continuum at 5100 A, the EWs and fluxes of the H\(\beta\) and Fe II lines, and the FWHM H\(\beta\). Afterwards, we determined the continuum level in these modelled spectra, as commonly done, by fitting a power law to the continuum windows given in the literature, at 4210-4230 A, 5080-5100 A, and 5600-5630 A (see Kuraszkiewicz et al. 2002). The difference between the continuum level estimated using the continuum windows and the real continuum level included in the model is shown in Fig. 5. As can be seen in Fig. 5, the difference between the measured continuum and the modelled one is not large in the case of synthetic spectra with narrow lines, but in the case of spectra with very broad lines, where the contribution of the VBLR dominates over the ILR, it can be significant.
Figure 5: Comparison between the continuum level estimated by the fitting procedure and the initial continuum included in the models. The initial model with 0.9 ILR and 0.1 VBLR contribution is given in panels (a1) and (b1), and the initial model with 0.1 ILR and 0.9 VBLR contribution is given in panels (a2) and (b2). In panels (a1) and (a2), the continuum level estimated by fitting is marked with the red line, while the continuum included in the model is marked with the blue line. In panels (b1) and (b2), the red line represents the line flux after subtraction of the continuum determined from the fit, while the blue line is the sum of the H\(\beta\)+Fe II from the model.
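As a rough illustration of this continuum-estimation step, the sketch below fits a power law to the continuum windows quoted above by a least-squares fit in log-log space. The window list follows the text, but the input arrays and the use of `numpy.polyfit` are assumptions made for the example rather than the actual fitting code of this work.

```python
import numpy as np

# Continuum windows (in Angstrom) used for the power-law fit, as quoted in the text.
CONTINUUM_WINDOWS = [(4210.0, 4230.0), (5080.0, 5100.0), (5600.0, 5630.0)]

def fit_power_law_continuum(wavelength, flux):
    """Fit F(wl) = a * wl**k to the points inside the continuum windows.

    A straight line is fitted in log-log space, so the returned continuum is a
    pure power law; this is a simplified stand-in for the fitting actually used.
    """
    mask = np.zeros_like(wavelength, dtype=bool)
    for lo, hi in CONTINUUM_WINDOWS:
        mask |= (wavelength >= lo) & (wavelength <= hi)
    k, log_a = np.polyfit(np.log(wavelength[mask]), np.log(flux[mask]), deg=1)
    return np.exp(log_a) * wavelength ** k  # continuum evaluated on the full grid

# Hypothetical usage: subtract the fitted continuum before fitting the lines.
# continuum = fit_power_law_continuum(wavelength, model_flux)
# line_spectrum = model_flux - continuum
```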
After we obtained the best fit of the continuum, it was subtracted from the set of model spectra, and the Fe II and Balmer lines were fitted with the same procedure as described in Kovacevic-Dojcinovic & Popovic (2015) for the optical range. The Fe II lines were fitted with the Fe II template given in Kovacevic et al. (2010) and Shapovalova et al. (2012), and were described with nine free parameters: the width, the shift, the intensities of six line groups, and the temperature required for the calculation of the relative intensities of the Fe II lines within one group. The Balmer lines were fitted with a double-Gaussian model: one Gaussian fits the core and the other fits the broad wings of the line. The intensities of all components in H\(\beta\), H\(\gamma\), and H\(\delta\) are free parameters, while the widths and shifts of the core components of H\(\beta\), H\(\gamma\), and H\(\delta\) are the same, as are the widths and shifts of the wing components in these three lines. As a result of the fit, we obtained the new values for the flux of the continuum at 5100 A, the EWs and fluxes of the H\(\beta\) and Fe II lines, and the FWHM H\(\beta\). This procedure was done together for the initial set of 9 model spectra and for the large set of 594 model spectra.
## 3 Results
First, we analysed how representative our set of model spectra is in the parameter space of a large sample of observed spectra. Then, we analysed the difference between the spectral parameters obtained directly from the model and those measured after continuum subtraction and the fitting procedure. Afterwards, we investigated the influence of the Fe II quasi-continuum, which is mostly coming from the VBLR, on the measured spectral properties that have been used in the investigation of different correlations. In most plots, the large set of 594 model spectra and the initial set of 9 model spectra are marked in different ways.
### Quasar main sequence and set of model spectra
The quasar main sequence is represented by the distribution of objects in the plane of the FWHM H\(\beta\) versus the Fe II/H\(\beta\) flux ratio, which, in principle, shows the Eigenvector 1 correlations (Boroson & Green, 1992). This is an important tool in understanding AGN properties (see Shen & Ho, 2014). To check whether our synthetic sample of spectra follows the quasar main sequence, we plotted the FWHM of H\(\beta\) versus the ratio of the Fe II flux (in the 4434-4684 A range) and the H\(\beta\) flux, using the parameters as included in the models. It is interesting to note that the FWHM H\(\beta\) versus Fe II/H\(\beta\) anticorrelation is well reproduced with the constructed set of spectral models, with different rates of the ILR/VBLR contribution (see Fig. 6). In Fig. 6, the different colours of the dots represent different VBLR contributions in the model spectra (from 0.1 to 0.9). It can be seen that the Fe II/H\(\beta\) flux ratio is strongly anticorrelated with the VBLR contribution (see the colour palette), while the FWHM H\(\beta\) correlates with it. In our set of model spectra, the majority of objects with FWHM H\(\beta\)\(>\) 4000 km s\({}^{-1}\) (Pop B) have a VBLR contribution in the range of 0.5-0.9, while objects with FWHM H\(\beta\)\(<\) 4000 km s\({}^{-1}\) (Pop A) have a VBLR contribution in the range of 0.1-0.5.
The Fe II/H\(\beta\) ratio of our set of models is in the range of 0.6-1.4, while the FWHM H\(\beta\) is in the range of 1500-9000 km s\({}^{-1}\). These values are within the range of the Fe II/H\(\beta\) and FWHM H\(\beta\) parameters measured in a large sample of about 20000 QSO spectra from SDSS in Shen & Ho (2014) (see the contour in Fig. 7). The FWHM H\(\beta\) range of the large QSO sample in Shen & Ho (2014) is 1500-11000 km s\({}^{-1}\), while the range of the Fe II/H\(\beta\) ratio is 0-2.3. The range of Fe II/H\(\beta\) in our sample is narrower than the one in the large QSO sample from Shen & Ho (2014), since we adopted the relative intensities of H\(\beta\) and Fe II from only two prototype spectra (from SDSS 0666-52149-496 for all ILR spectral models, and from SDSS 2089-53498-385 for all VBLR spectral models). The linear combinations of the ILR and VBLR spectral models result in different relative intensities for these lines in different model spectra, but they still do not cover all possibilities for the Fe II/H\(\beta\) ratio which could be seen in real AGN spectra. Also, both our prototype spectra have strong Fe II lines relative to H\(\beta\), so our models do not include the spectra with weak Fe II emission (as can be seen in Fig. 7).
Figure 6: Correlation between FWHM H\(\beta\) vs. Fe II/H\(\beta\) (EV1 correlation) for the large set of 594 model spectra. The colour palette represents the contribution of the VBLR included in the model.
Figure 7: EV1 parameter space for the large set of 594 model spectra (grey dots) and the initial set of model spectra (black dots) compared with the contour of the total SDSS quasar sample from Shen & Ho (2014). The two observed spectra used as prototypes for the initial ILR and VBLR models (SDSS plate-mjd-fiber: 0666-52149-496 and 2089-53498-385, respectively) are designated with red dots.
### Influence of the continuum subtraction on the measured spectral parameters
Here, we compare the spectral parameters obtained after the continuum subtraction and fitting procedure of the synthetic spectra with the same parameters as input in the models. In Fig. 8, it can be seen that the H\(\beta\) widths are not much affected by the procedure of continuum subtraction. The FWHM H\(\beta\) is slightly underestimated (by up to 10%) compared to the model values in the case of the model spectra with very broad lines. Additionally, we measured the full width at 10% of maximal intensity (FW10%M) of H\(\beta\) obtained from the best fit and compared it with the FW10%M H\(\beta\) of the model. The FW10%M of the H\(\beta\) line is more affected by the process of continuum subtraction than the FWHM H\(\beta\), since it can be underestimated by up to 20% compared to the model. As expected, the width of the H\(\beta\) lines correlates with the VBLR contribution in the model spectra, which is shown with different sizes of the black points in Fig. 8. The continuum flux at 5100 A obtained after the power-law fit is compared with the continuum flux of the model in Fig. 9 (top). It can be seen that the continuum flux obtained from the fit is slightly overestimated (by up to 4%) compared to the model, and the disagreement is stronger for spectra with broader lines. Some studies follow the approximation that the spectrum at 5100 A is a continuum window, with no line contribution at that wavelength, so the flux of the spectrum at 5100 A represents the flux of the continuum. In order to test the accuracy of that assumption, we directly measured the flux of the spectra at 5100 A and compared it with the flux of the continuum model (shown in Fig. 9, bottom).
We found that the disagreement is larger than in the previous case, namely, the measured fluxes are overestimated by up to 10% compared to the model values. The disagreement becomes more significant for spectra with broader lines. The difference between the EW H\(\beta\) measured after the continuum subtraction and fitting procedure (EW H\(\beta_{FIT}\)) and the EW H\(\beta\) measured directly from the model (EW H\(\beta_{MODEL}\)) is shown in Fig. 10. The EWs of H\(\beta\) obtained from the fit are underestimated by up to 20% compared to the values from the model. It can be seen that the discrepancy is larger for broader H\(\beta\) lines and a larger contribution of the VBLR (see the sizes of the black points). Also, it has a stronger correlation with FW10%M H\(\beta\) than with FWHM H\(\beta\). Similarly to the case of EW H\(\beta\), the difference between the EW Fe II (in the range of 4434-4684 A) measured after the continuum subtraction (EW Fe II\({}_{FIT}\)) and the EW Fe II measured directly from the model (EW Fe II\({}_{MODEL}\)) is shown in Fig. 11. The EWs of Fe II obtained from the fit are more strongly underestimated, compared to the values from the model, than the EWs of H\(\beta\). The underestimation of the measured EW Fe II after the fitting procedure goes up to 45%, and it is greatest for broader lines and for the spectra with the largest VBLR contribution. Similarly, as in the case of EW H\(\beta\), the discrepancy correlates more strongly with FW10%M H\(\beta\) than with FWHM H\(\beta\). The larger influence of continuum subtraction on the underestimation of EW Fe II compared to EW H\(\beta\) can also be seen in Fig. 12, where we compare the EWs of these lines with the flux of the continuum at 5100 A. In the case of the initial set of models, the EW Fe II versus Flux\({}_{5100}\) correlation even becomes negative for the EWs measured after the fit. The reason for this is probably that part of the Fe II flux is joined to the continuum level during the continuum fitting, so the overestimation of the continuum and the underestimation of the Fe II flux produce this trend.
Figure 8: Comparison between the spectral parameters obtained from the best fit after continuum subtraction using continuum windows, with the same parameters as they are included in the model, for FWHM H\(\beta\)_(top)_ and FW10%M H\(\beta\)_(bottom)_. The black dots are values obtained from the initial set of model spectra, and their size increases as the VBLR contribution increases in the model spectra. The grey dots are values obtained from the large sample of model spectra. The solid line displays a one-to-one relationship.
Figure 9: Comparison between the continuum flux at 5100 Å from the model and that obtained from the best fit of the power law _(top)_, along with the same, but for the continuum flux measured at 5100 Å _(bottom)_. The black dots are values obtained from the initial set of model spectra, while the grey dots are values obtained from the large sample of model spectra.
We investigated the influence of the continuum subtraction on the fluxes of the Fe II (in the 4434-4684 A range) and H\(\beta\) lines in Fig. 13 (left and middle). Similarly, as for the EWs, the flux of the Fe II lines is significantly underestimated after continuum subtraction compared to the model values, and the difference becomes even larger for objects with broader lines. The flux of the H\(\beta\) lines remains similar to the model values for FWHM H\(\beta\)\(<\) 4000 km s\({}^{-1}\), while for spectra with broader lines it starts to be slightly underestimated.
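As a simple estimate, for a continuum that is approximately constant across the line, EW \(\approx F_{line}/F_{cont}\), so that \[\frac{EW_{FIT}}{EW_{MODEL}}\approx\frac{F_{line\,FIT}}{F_{line\,MODEL}}\cdot\frac{F_{cont\,MODEL}}{F_{cont\,FIT}}.\] An underestimated line flux and an overestimated continuum therefore suppress the measured EW multiplicatively, which is why the EWs, and the EW Fe II in particular, are affected more strongly than the line fluxes themselves.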
We analysed the influence of the pseudo-continuum on the EV1 anti-correlation, FWHM H\(\beta\) versus Fe II/H\(\beta\), as shown in Fig. 13 (right). The ratio of Fe II/H\(\beta\) is underestimated for spectra with larger widths (by up to 30%), since the flux of Fe II is more affected by continuum subtraction than the H\(\beta\) flux, i.e., it contributes more to the quasi-continuum than H\(\beta\).
### Modelled and measured Eddington ratio and SMBH mass estimates
The H\(\beta\) line and the continuum at \(\lambda\)5100A are often used for SMBH mass estimates. For the SMBH mass determination, using a virial approximation, we have to measure the FWHM H\(\beta\) and the luminosity at \(\lambda\)5100A, which indicates the dimension of the BLR (see e.g. the review by Popovic, 2020, and references therein). As can be seen in Fig. 9, the Fe II quasi-continuum affects the determination of the continuum level at \(\lambda\)5100 A, causing the overestimation of the continuum level and a slight underestimation of the FWHM H\(\beta\) (see Fig. 8). Here, we investigate how this reflects on the estimates of the SMBH mass (M\({}_{BH}\)) and Eddington ratio (R\({}_{Edd}\)). To explore the influence of the Fe II quasi-continuum on SMBH mass estimates, we used a common relationship for the mass determination in the case of virialisation: \[M_{BH}\sim FWHM\ H\beta^{2}\cdot R_{BLR},\] where we assume that \[R_{BLR}\sim L_{5100}^{0.5}.\] Here, \(R_{BLR}\) is the broad line region radius and \(L_{5100}\) is the continuum luminosity at \(\lambda\)5100 A. For our sample of synthetic spectra, \(L_{5100}\) is proportional to \(F_{5100}\). The ratio of the M\({}_{BH}\) obtained with the parameters measured after the fitting procedure (M\({}_{BH~{}FIT}\)) to the M\({}_{BH}\) obtained with the input parameters of the model (M\({}_{BH~{}MODEL}\)) is calculated as: \[\frac{M_{BH~{}FIT}}{M_{BH~{}MODEL}}=\frac{FWHM\ H\beta_{FIT}^{2}}{FWHM\ H\beta_{MODEL}^{2}}\frac{F_{5100~{}FIT}^{0.5}}{F_{5100~{}MODEL}^{0.5}},\] where the index \(MODEL\) denotes parameters included in the model and \(FIT\) denotes parameters measured after the fitting procedure. We assume that \(R_{Edd}\sim L_{5100}/M_{BH}\) (Wu & Liu, 2004). The R\({}_{Edd~{}FIT}\)/R\({}_{Edd~{}MODEL}\) ratio is calculated as: \[\frac{R_{Edd~{}FIT}}{R_{Edd~{}MODEL}}=\frac{F_{5100~{}FIT}\cdot M_{BH~{}MODEL}}{M_{BH~{}FIT}\cdot F_{5100~{}MODEL}}.\]
Figure 10: Correlations between different spectral parameters. _Left_: Comparison between the EW H\(\beta\) and FWHM H\(\beta\) obtained from the best fit (red dots) and from the model (black dots) for the initial set of model spectra. The same is given for the large sample of model spectra, where parameters measured after fitting are marked with small, light-red dots, and parameters from the model with grey dots. _Middle_ and _Right_: Black dots are values obtained from the initial set of model spectra and their size increases with a larger VBLR contribution in the model spectra. The grey dots are values obtained from the large sample of model spectra.
Figure 11: Same as in Fig. 10, but for EW Fe II measured in the range of 4434-4684 Å.
The plots of the \(M_{BH~{}FIT}/M_{BH~{}MODEL}\) ratio versus the H\(\beta\) width (Fig. 14) show that the M\({}_{BH}\) calculated using the parameters obtained from the fit is underestimated for spectra with broader lines, by up to 15%. On the other hand, the R\({}_{Edd}\) calculated with the parameters from the fit is overestimated by up to 25% for spectra with broad lines (see Fig. 15).
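A minimal numerical sketch of these two ratios is given below. The function simply evaluates the expressions above (only the FIT/MODEL ratios of the inputs matter), and the example input values are purely illustrative, not measurements from this work.

```python
def mass_and_eddington_ratio(fwhm_fit, fwhm_model, f5100_fit, f5100_model):
    """Evaluate M_BH(FIT)/M_BH(MODEL) and R_Edd(FIT)/R_Edd(MODEL).

    Uses M_BH ~ FWHM(Hbeta)**2 * R_BLR with R_BLR ~ L5100**0.5, and
    R_Edd ~ L5100 / M_BH, with L5100 taken proportional to F5100.
    """
    mass_ratio = (fwhm_fit / fwhm_model) ** 2 * (f5100_fit / f5100_model) ** 0.5
    eddington_ratio = (f5100_fit / f5100_model) / mass_ratio
    return mass_ratio, eddington_ratio

# Purely illustrative input: FWHM underestimated by 8%, continuum overestimated by 4%.
m_ratio, r_ratio = mass_and_eddington_ratio(0.92, 1.00, 1.04, 1.00)
print(f"M_BH ratio ~ {m_ratio:.2f}, R_Edd ratio ~ {r_ratio:.2f}")
# M_BH comes out underestimated and R_Edd overestimated, as in Figs. 14 and 15.
```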
In both cases, the correlations with the H\(\beta\) width become stronger for FW10%M H\(\beta\) (Fig. 14 and Fig. 15, bottom).
## 4 Discussion
Here, we discuss the possible underlying physics of the quasar main sequence implied by the results of our spectral modelling. Then, we draw attention to the possible influence of the Fe II quasi-continuum on the measured spectral parameters and its implications for drawing conclusions about AGN physics.
### Quasar main sequence: The nature
The so-called quasar main sequence, which represents the anti-correlation of the FWHM H\(\beta\) versus the optical Fe II line strength relative to H\(\beta\), indicates physical differences between various AGNs (see e.g. Boroson & Green 1992; Sulentic et al. 2000; Shen & Ho 2014; Marziani et al. 2018). This correlation divides AGNs into two subgroups, Pop A and Pop B (see Marziani et al. 2018), where Pop A are objects with FWHM H\(\beta\)\(<\) 4000 km s\({}^{-1}\) and a high accretion rate, while Pop B are objects with FWHM H\(\beta\)\(>\) 4000 km s\({}^{-1}\) and a lower accretion rate. The complex shape of the broad H\(\beta\) lines observed in various AGN spectra implies that these lines arise in two subregions of the BLR (VBLR and ILR), closer and further away from the SMBH (Bon et al. 2009). On the other hand, it is commonly considered that the Fe II emission arises in the same region that produces the broad H\(\beta\) line (Phillips 1978; Boroson & Green 1992). In this research, we assume that both lines, Fe II and H\(\beta\), have two components: one very broad (VBLR) and one intermediate (ILR), both coming from the BLR, but from different layers. Using this assumption, we constructed a set of model spectra with different contributions of the ILR and VBLR emission in the H\(\beta\) and Fe II optical lines, where the relative intensities between the Fe II and H\(\beta\) lines are adopted from real, observed spectra identified as having dominantly ILR and dominantly VBLR emission (see Figs. 1 and 2). The adopted Fe II/H\(\beta\) ratio is larger in the ILR-dominant prototype spectrum than in the VBLR-dominant spectrum. The obtained set of model spectra has the parameters FWHM H\(\beta\) and Fe II/H\(\beta\) in the empirically expected range (see Fig. 7), within the contour obtained using the large SDSS AGN sample (Shen & Ho 2014). It is interesting to note that this set of model spectra reproduced the expected FWHM H\(\beta\) versus Fe II/H\(\beta\) anticorrelation. Moreover, we found that this anticorrelation is strongly dependent on the rate of the VBLR and ILR contributions in the model spectra (Fig. 6). In our set of model spectra, the models with a VBLR contribution in the range of 0.1-0.5 mostly have FWHM H\(\beta\)\(<\) 4000 km s\({}^{-1}\), that is, they belong to Pop A, while the models with a VBLR contribution in the range of 0.5-0.9 mostly have FWHM H\(\beta\)\(>\) 4000 km s\({}^{-1}\), that is, they belong to Pop B. This result opens up two questions: first, what causes a different Fe II/H\(\beta\) ratio in the ILR and VBLR regions; and second, what controls the different rates of the ILR and VBLR contributions in various AGN spectra. The model of locally optimally emitting clouds (LOC), given in Baldwin et al. (1995), predicts that the lines predominantly arise in regions in which the physical conditions for their emission are optimal. Therefore, it is possible that the H\(\beta\) and Fe II lines dominantly arise from two such regions, the ILR and the VBLR.
Since the VBLR is closer to the SMBH than the ILR, it is possible that the different physical conditions in these two regions affect the atomic processes that might produce the Fe II lines, such as radiative excitation, collisional excitation, fluorescent excitation by Ly\(\alpha\), and so on, leading to their different efficiencies in the ILR and VBLR. On the other hand, it is commonly accepted that the Fe II and H\(\beta\) emission lines are dominantly produced by photoionisation (see Gaskell et al. 2022, and references therein). Namely, on the front side of the gas clouds, the hydrogen would be ionised, and H\(\beta\) would be produced in the process of recombination. The Fe II emission would arise dominantly from the ionisation front at the farther side of the clouds, where hydrogen is mostly neutral, in the process of collisional excitation (Gaskell 2009; Gaskell et al. 2022). In this scenario, the Fe II emission could only be produced in gas clouds thick enough for an ionisation front to form but also with a large enough column density. In clouds of lower column density, only H\(\beta\) would be produced, but no Fe II emission. The VBLR could be made of a range of clouds of different thicknesses, all producing H\(\beta\), but only the thicker ones (i.e. those with high column density) would produce Fe II (Joly 1987). This results in a smaller Fe II/H\(\beta\) intensity ratio emitted from the VBLR. On the other hand, in the farther-away ILR, the ionisation parameter would be smaller and the Fe II emission would be higher (Wills et al. 1985). Thus, the different physical conditions in the VBLR and ILR regions might lead to different Fe II/H\(\beta\) intensity ratios. The observed spectrum is then the sum of the ILR and VBLR contributions in the emission lines, which gives the quasar main sequence, namely the EV1 parameter space.
Figure 12: Correlations between line EWs and continuum flux. Correlation between EW H\(\beta\) and continuum flux for the parameters obtained from the fit (red dots) and for the parameters included in the model (black dots) for the initial set of model spectra _(top)_. The same, just for EW Fe II 4434-4684 Å and the continuum flux _(bottom)_. The size of the dots denotes the contribution of the VBLR in the model spectra, where the largest dots represent the largest VBLR contribution.
The increase in the VBLR contribution in the Fe II lines consequently increases the width of the lines, but decreases the Fe II/H\(\beta\) ratio. This is in accordance with the finding that the ionisation parameters increase from Pop A to Pop B (Marziani et al., 2001). However, it is an interesting question what controls the rate of the VBLR/ILR contribution in the AGN spectra, especially in the Fe II and Balmer lines. We propose that the VBLR/ILR contribution is controlled by both inclination and accretion rate, similarly to what was noticed by Shen & Ho (2014), who found that the main sequence is controlled by these parameters. The dominant ILR emission in the observed spectra can be caused by a large inclination, that is, the VBLR component could be obscured. On the other hand, a high level of accretion might produce the physical conditions of large radiation pressure, which prevent the formation of the Fe II emission (and also H\(\beta\)) close to the central SMBH, so the line-emitting region is formed farther away from the central SMBH. In that case, we can observe narrower Fe II emission lines.
In the case where the VBLR is dominant in the spectra, the accretion rate is not very high, resulting in lower radiation pressure, since the emission is predominantly coming from the region that is close to the central SMBH. Also, the inclination should be such that we can see deeper into the centre of an AGN.
Figure 13: Correlations between line fluxes versus FWHM H\(\beta\), and FWHM H\(\beta\) versus Fe II/H\(\beta\) (EV1 correlation). The symbol denotation is the same as in Fig. 10 (left).
Figure 14: Comparison between M\({}_{BH}\) obtained using the parameters measured after the fitting procedure and M\({}_{BH}\) calculated with the parameters from the model, for spectra with different FWHM H\(\beta\)_(top)_ and FW10%M H\(\beta\)_(bottom)_. The symbol denotation is the same as in Fig. 9.
Figure 15: Same as in Fig. 14, but for the Eddington ratio, R\({}_{Edd}\). The symbol notation is the same as in Fig. 9.
### Optical Fe II quasi-continuum in AGN spectra
In the two-component BLR model, the VBLR component contributes to the broad line wings, while the ILR component dominates the line core. In the case of the spectra with a dominant VBLR component, a nearly complete broad H\(\beta\) profile could still be well observed, but the broad wings of the Fe II lines overlap and form the Fe II quasi-continuum, making it difficult to separate it from the real continuum. In the case of the ILR-dominant spectra, the VBLR components in Fe II are too weak to have a significant contribution to the Fe II quasi-continuum. In analysing the set of model spectra, our results imply that the Fe II quasi-continuum affects the process of line and continuum fitting and the estimation of the spectral parameters. In the spectra with a dominant VBLR contribution, the width of the H\(\beta\) is slightly underestimated, especially when measured at the level of the H\(\beta\) wings (FW10%M). On the other hand, the estimated continuum flux is slightly overestimated since part of the line flux is included. Generally, the measured properties of the H\(\beta\) lines, such as the line flux and EW, are less affected by the process of continuum subtraction than the flux and EW of Fe II, which implies that the Fe II lines contribute much more to the quasi-continuum than the broad H\(\beta\) line. The fluxes of the emission lines are underestimated, but the line EWs are the most sensitive to the influence of the quasi-continuum, which is a consequence of the flux underestimation and continuum overestimation. In general, it seems that the spectral parameter most sensitive to the quasi-continuum presence is the EW Fe II. Therefore, we should be very careful with the measured spectral parameters in the case of spectra with strong and broad Fe II lines, especially with respect to the EW of Fe II. Some important AGN properties, such as M\({}_{BH}\) and R\({}_{Edd}\), which are calculated using the measured spectral parameters of the FWHM H\(\beta\) and continuum luminosity, are consequently affected by the Fe II quasi-continuum contribution. For spectra with strong and broad Fe II lines, the M\({}_{BH}\) could be slightly underestimated and R\({}_{Edd}\) could be overestimated, which should be taken into account. The influence of the Fe II quasi-continuum on the measured parameters may affect some conclusions obtained from AGN variability, such as the measured dimension of the Fe II emission region by reverberation.
Namely, if the Fe II quasi-continuum is incorporated in the measured continuum flux used in the reverberation mapping technique, it would cause uncertainty in the measured time lags for the Fe II emission lines, and could even prevent their measurement. We expect that this effect would be especially present in spectra with broad and strong Fe II lines, in which the Fe II quasi-continuum significantly contributes to the real continuum. It is interesting to note that Fe II time lags are measured in only about 10% of the AGNs for which reverberation mapping was applied, whereas in the other studies the Fe II time lags could not be measured properly. For example, Kuehn et al. (2008) carried out a reverberation mapping for Ark 120, which is an object with broad and strong Fe II lines, and they were not able to measure a clear time lag for the Fe II lines. Additionally, Hu et al. (2015) found a tentative correlation between the Fe II time lag and the intensity ratio of Fe II and H\(\beta\) that might be affected by the Fe II quasi-continuum. Du & Wang (2019) found that the R\({}_{BLR}\) - L\({}_{5100}\) relation (see Bentz et al. 2013) should be corrected by a factor that includes the Fe II/H\(\beta\) intensity ratio (see their Fig. 5 and Eq. 5), which may also be influenced by the Fe II quasi-continuum.
## 5 Conclusions
We modelled the optical spectral region around the H\(\beta\) and Fe II optical emission, which is frequently used to point out some spectral characteristics of AGNs. We considered the fact that the Fe II and H\(\beta\) emission regions have two components: one very broad (VBLR) and one intermediate (ILR), both coming from the BLR. Using this assumption, we constructed the set of synthetic spectra with different contributions of the ILR and VBLR emission in the H\(\beta\) and Fe II optical lines, using a linear combination of the ILR and VBLR model spectra. The VBLR model spectrum has a significantly larger width and a smaller Fe II/H\(\beta\) ratio compared to the ILR model spectrum, as taken from the two real, observed spectra with dominant VBLR and ILR emission, respectively. We analysed the obtained set of model spectra in the context of the quasar main sequence and the influence of the ILR/VBLR contribution on the EV1 parameter space. In order to investigate the influence of the Fe II pseudo-continuum on the measured parameters in AGN spectra, we subtracted the continuum emission from the model spectra by fitting a power law. Then we fitted the Fe II and Balmer lines. In this way, we were able to compare the parameters obtained after the continuum subtraction and fitting procedures with those included in the model. Summarising the main results, we can outline the following conclusions. 1. The observed broad-line AGN spectra in the optical could be aptly modelled as the sum of the emission from two regions, the ILR and VBLR, where each of these two regions emits spectra with different physical properties, reflecting the specific physical conditions in these regions. 2. The set of model spectra constructed with different rates of the ILR/VBLR contribution could reproduce the quasar main sequence, namely, the FWHM H\(\beta\) versus Fe II/H\(\beta\) anticorrelation. Both parameters in this anticorrelation are strongly dependent on the ILR/VBLR rate. 3.
The quasar main sequence is probably controlled by several effects: a) the physical conditions in the ILR and VBLR regions, which could cause different efficiencies of some of the atomic processes that produce the H\(\beta\) and Fe II lines, as well as different ionisation parameters in thin and thick clouds, making the Fe II/H\(\beta\) ratio larger in the ILR than in the VBLR; b) the inclination and Eddington ratio, which control the contribution of the ILR or VBLR emission in each spectrum. 4. In the case of a strong VBLR contribution, the VBLR components of the Balmer lines and Fe II lines make up the quasi-continuum in AGN spectra, where the Fe II quasi-continuum strongly dominates over the H\(\beta\) quasi-continuum. 5. The Fe II quasi-continuum is strong in the case of broad and strong Fe II lines, and it is difficult to separate it from the real continuum. It may affect the process of continuum and line fitting, causing an underestimation of the width of H\(\beta\) and especially of the Fe II and H\(\beta\) fluxes, as well as an overestimation of the measured continuum level. The most affected spectral parameters are the line EWs, especially the EW Fe II, which may be strongly underestimated. 6. Some estimated AGN properties, such as M\({}_{BH}\), which may be underestimated, and R\({}_{Edd}\), which may be overestimated, could be affected by the Fe II quasi-continuum in extreme cases. Also, it is important to be careful with conclusions obtained with the reverberation mapping technique in the case of strong and broad Fe II lines, which may partly contribute to the measured continuum and may affect the results in that way.
###### Acknowledgements.
This work is supported by the Ministry of Education, Science and Technological Development of Serbia (451-03-47/2023-01/200002 and 451-03-47/2023-01/200162). L. C. Popovic thanks the support by the Chinese Academy of Sciences President's International Fellowship Initiative (PIFI) for visiting scientists.
2309.15594
Rich Magnetic Phase Diagram of Putative Helimagnet Sr$_3$Fe$_2$O$_7$
The cubic perovskite SrFeO$_3$ was recently reported to host hedgehog- and skyrmion-lattice phases in a highly symmetric crystal structure which does not support the Dzyaloshinskii-Moriya interactions commonly invoked to explain such magnetic order. Hints of a complex magnetic phase diagram have also recently been found in powder samples of the single-layer Ruddlesden-Popper analog Sr$_2$FeO$_4$, so a reinvestigation of the bilayer material Sr$_3$Fe$_2$O$_7$, believed to be a simple helimagnet, is called for. Our magnetization and dilatometry studies reveal a rich magnetic phase diagram with at least 6 distinct magnetically ordered phases and strong similarities to that of SrFeO$_3$. In particular, at least one phase is apparently multiple-$\mathbf{q}$, and the $\mathbf{q}$s are not observed to vary among the phases. Since Sr$_3$Fe$_2$O$_7$ has only two possible orientations for its propagation vector, some of the phases are likely exotic multiple-$\mathbf{q}$ order, and it is possible to fully detwin all phases and more readily access their exotic physics.
Nikita D. Andriushin, Justus Grumbach, Jung-Hwa Kim, Manfred Reehuis, Yuliia V. Tymoshenko, Yevhen A. Onykiienko, Anil Jain, W. Andrew MacFarlane, Andrey Maljuk, Sergey Granovsky, Andreas Hoser, Vladimir Pomjakushin, Jacques Ollivier, Mathias Doerr, Bernhard Keimer, Dmytro S. Inosov, Darren C. Peets
2023-09-27T11:47:54Z
http://arxiv.org/abs/2309.15594v2
# Rich Magnetic Phase Diagram of Putative Helimagnet Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\)
###### Abstract
The cubic perovskite SrFeO\({}_{3}\) was recently reported to host hedgehog- and skyrmion-lattice phases in a highly symmetric crystal structure which does not support the Dzyaloshinskii-Moriya interactions commonly invoked to explain such magnetic order. Hints of a complex magnetic phase diagram have also recently been found in powder samples of the single-layer Ruddlesden-Popper analogue Sr\({}_{2}\)FeO\({}_{4}\), so a reinvestigation of the bilayer material Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), believed to be a simple helimagnet, is called for. Our magnetization and dilatometry studies reveal a rich magnetic phase diagram with at least six distinct magnetically ordered phases and strong similarities to that of SrFeO\({}_{3}\). In particular, at least one phase is apparently multiple-**q**, and the **q**s are not observed to vary among the phases. Since Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) has only two possible orientations for its propagation vector, some of the phases are likely exotic multiple-**q** order, and it is possible to fully detwin all phases and more readily access their exotic physics.
+ Footnote †: Corresponding author
the former is also proposed to host a three-dimensional hedgehog-lattice phase. In contrast to the small multiple-\(\mathbf{q}\) bubbles typically seen in noncentrosymmetric materials, multiple-\(\mathbf{q}\) phases in centrosymmetric materials may occupy much of the magnetic phase diagram. The centrosymmetric skyrmion materials may also host exotic multiple-\(\mathbf{q}\) orders beyond skyrmion- and hedgehog-lattice phases, for instance the vortex state with stripes of topological charge recently reported in GdRu\({}_{2}\)Si\({}_{2}\)[27]. The cubic perovskite SrFeO\({}_{3}\) has a particularly intriguing \(H\)-\(T\) phase diagram with at least five distinct magnetic phases for \(\mathbf{H}\parallel[111]\) alone [28]. Two of these five phases have been identified [24]: A double-\(\mathbf{q}\) skyrmion-lattice phase and a quadruple-\(\mathbf{q}\) phase producing a three-dimensional lattice of hedgehogs and anti-hedgehogs.
SrFeO\({}_{3}\) is the three-dimensional (\(n=\infty\)) member of a Ruddlesden-Popper family of layered materials, including the single-layer analogue Sr\({}_{2}\)FeO\({}_{4}\), bilayer compound Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), and triple-layer material Sr\({}_{4}\)Fe\({}_{3}\)O\({}_{10}\)[29], of which only Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) has been grown as single crystals. This latter material has been reported to be a helimagnet with a slightly elliptical helix [30; 31] whose spins lie perpendicular to the tetragonal [110] direction; its (\(\xi\,\xi\,1\)) propagation vector (with \(\xi=0.142\) and antiferromagnetic stacking of bilayers) is the quasi-two-dimensional analogue of the (\(\xi\,\xi\,\xi\)) in cubic SrFeO\({}_{3}\) with \(\xi=0.128\)[32]. This close similarity is particularly remarkable given that Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) is an insulator below 330 K while SrFeO\({}_{3}\) is a metal. The insulating behavior arises from freezing of a checkerboard charge modulation which breaks the symmetry between adjacent Fe ions [33]. The associated lowering of the lattice symmetry could in principle allow DM interactions, but the small changes in atomic positions and highly similar propagation vectors suggest that DM interactions play no significant role. Sr\({}_{2}\)FeO\({}_{4}\) was very recently reported to exhibit elliptical _cycloidal_ order with a similar \(\mathbf{q}\) vector (\(\xi\,\xi\,0\)), \(\xi=0.137\)[34], while the magnetism in Sr\({}_{4}\)Fe\({}_{3}\)O\({}_{10}\) has not been reported. The work on Sr\({}_{2}\)FeO\({}_{4}\) identified a transition within the ordered phase at 10 K, a shoulder in the magnetization at 30 K under a 3.5 T field, a spin-flop transition near 5 T, and a transition to ferromagnetic order between 5 and 8 GPa, indicating a complex magnetic phase diagram. The complexity found in SrFeO\({}_{3}\) and Sr\({}_{2}\)FeO\({}_{4}\) suggests that the \(H\)-\(T\) phase diagram of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) should be investigated in detail. In this work, we explore the magnetic phase diagram of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) using magnetization and dilatometry measurements, finding a similarly rich phase diagram. The parallels with SrFeO\({}_{3}\) suggest exotic multiple-\(\mathbf{q}\) order, and we are able to constrain the possibilities for several phases. ## II Experimental Large single crystals of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) were prepared by floating-zone growth as described previously [30; 35]. The oxygen content was maximized by annealing under 5 kbar of oxygen while gradually cooling from 450 \({}^{\circ}\)C [33], or for some powder samples by annealing at 6 kbar at 550 \({}^{\circ}\)C, and was verified to be O\({}_{>6.99}\) by thermogravimetric analysis and structure refinements. High sample quality was confirmed by diffraction -- previous synchrotron powder diffraction found these samples to be phase pure [33], resonant x-ray diffraction found rocking curves typically 0.05-0.10\({}^{\circ}\) wide on smaller crystals, while neutron rocking curves on larger crystals were 1-2\({}^{\circ}\) wide. The sample for neutron powder diffraction was prepared by standard solid-state synthesis, and contained SrFeO\({}_{3}\) as an impurity phase. Magnetization measurements were performed by vibrating sample magnetometry (VSM) in a Quantum Design Magnetic Property Measurement System (MPMS-VSM) or in a Cryogenic Ltd. 
Cryogen-Free Measurement System (CFMS) using the VSM module, in zero-field-warming, field-cooled-cooling, and field-cooled-warming conditions. The ac susceptometry option was used for frequency-dependent measurements in a 0.5 Oe ac field. Four- or five-quadrant \(M\)-\(H\) loops were measured at several temperatures in the CFMS. The single crystals were mounted to either a plastic (CFMS) or quartz rod (MPMS) sample holder using GE varnish. Specific heat measurements were performed in a Quantum Design Physical Property Measurement System (PPMS) with the sample secured using Apiezon N grease. Dilatometry measurements were performed using a tilted-plate capacitive dilatometer with a sensitivity to relative length changes of \(\sim 10^{-7}\)[36], which was mounted on an Oxford Instruments \({}^{4}\)He flow cryostat equipped with a superconducting magnet capable of fields up to 10 T. The sweep rate of the magnetic field was chosen to be between 0.05 T/min and 0.25 T/min. For accurate monitoring and control of the dilatometer and sample temperature we used a Cernox thermometer attached to the dilatometer cell close to the sample. Measurements of magnetostriction and thermal expansion were made on single crystals for length changes parallel or perpendicular to the crystallographic [110] or [001] directions for magnetic fields oriented in both of these directions. The longitudinal and transverse components of the striction tensor found in this way allow the distortions and volume effects of the crystal lattice to be calculated. This allows one to identify all magnetic transitions accompanied by lattice effects through dilatometry, which can hint at modifications of the magnetic structure. Single-crystal neutron diffraction was performed on the E5 diffractometer at the BER-II reactor at the Helmholtz-Zentrum Berlin (HZB), Germany. The wavelength 2.38 A was selected using the (002) reflection from a pyrolytic graphite (PG) monochromator, and higher-order contamination (\(\lambda/2\)) was prevented through the use of a PG filter. A position-sensitive \({}^{3}\)He detector of dimension 90\(\times\)90 mm\({}^{2}\) was used. Samples were mounted in four-circle geometry on a closed-cycle refrigerator, and collimators and slits were set such that each sample was fully illuminated. Data were integrated using the racer program [37], which uses the parameters describing the shape of strong peaks to improve the precision in the description of weaker ones, minimizing the relative standard deviation. Further measurements in fields applied along [110] and [001] were performed at beamline E4 at the BER-II reactor at the HZB, using a 2D detector and neutrons of wavelength 2.437 A. Powder neutron diffraction was measured with 1.8857-A neutrons in applied magnetic fields up to 6 T at the HRPT beamline at the Paul Scherrer Institute (PSI), Villigen, Switzerland, and up to 6.5 T at the E6 diffractometer at the BER-II reactor at the HZB using 2.42-A neutrons selected by a PG monochromator. The effectiveness of detwinning the magnetic order (i.e. selecting a single-domain magnetic state) in a field \(\mathbf{H}\parallel\)[110] was checked using the IN5 time-of-flight beamline at the Institute Laue-Langevin (ILL), Grenoble, France, using a neutron wavelength of 4.8 A. The sample was cooled to 1.8 K in the maximum 2.5-T field possible at this beamline, then measured in zero field at this temperature. 
Data were integrated from \(-0.05\) to 0.05 meV to capture elastic scattering, while out-of-plane momentum integration was set to \(\pm 0.04\) reciprocal lattice units (r.l.u.). Throughout this paper, crystal orientations refer to the high-temperature tetragonal \(I4/mmm\) cell, rather than the doubled charge-ordered \(Bmmb\) cell. The helical propagation vector is (\(\xi\ \pm\xi\ 1\)) in the tetragonal cell, or (\(\sqrt{2}\xi\ 0\ 1\))/(0 \(\sqrt{2}\xi\ 1\)) in the \(Bmmb\) cell. The charge order has a correlation length along [001] on the order of a unit cell [33], but the magnetic order produces sharp peaks in neutron powder diffraction [31], so a magnetic domain must include a considerable number of structural domains and feel an effectively tetragonal lattice. To further characterize the magnetic order, muon spin rotation/relaxation measurements (\(\mu\)SR) using positive muons (\(\mu^{+}\)) were performed on a single crystal mounted on the low-background insert of a helium-flow cryostat in the LAMPF spectrometer installed at the M15 beamline at TRIUMF, in Vancouver, Canada. In this setup, muons that do not stop in the sample are vetoed with a high efficiency. The crystalline \(c\) axis was parallel to both the incident muon beam and its spin. Decay positrons were detected in a pair of scintillation detectors up- (B) and downstream (F) of the sample. The muon spin polarization is then monitored by the experimental asymmetry of the count rates, \(A=(B-F)/(B+F)\). For more details, see our earlier report [38]. ## III Magnetization Magnetization measurements were performed on Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) as a function of field \(\mathbf{H}\) and temperature \(T\) for applied magnetic fields along [100], [110], and [001]; field-cooled data for all three directions are shown in Fig. 1 together with their derivatives. The first transition encountered on cooling, which we refer to as \(T_{\mathrm{N}}\), is at roughly 111 K, consistent with previous reports. However, it is immediately clear that there is an additional transition within the magnetically ordered phase in field for all field orientations, starting around 70 K at low field and moving to lower temperature as field is increased. There is also some evidence, most clearly seen in the derivatives, that the first transition encountered may be split. It is Figure 1: Temperature dependent field-cooled magnetic susceptibility \(\mathbf{M}/\mathbf{H}\). Plotted for applied magnetic fields along (a) [100], (b) [110], and (c) [001] directions. The respective derivatives are plotted in panels (d-f), in which the datasets have been offset vertically for clarity. In (a) and (d), gray vertical lines show where the peaks in the derivative occur at low field. also striking that the magnetization at low temperatures changes drastically in field. Zero-field-cooled magnetization data are presented in Fig. 2(a) for \(\mathbf{H}\parallel[110]\) and in Fig. 2(b) for \(\mathbf{H}\parallel[001]\). ZFC data were not collected for \(\mathbf{H}\parallel[100]\). The features seen in the FC data are also visible here, but several new features appear. The ZFC data diverge significantly from the FC data below \(\sim\)30 K for intermediate [110] fields, as shown in Fig. 2(a), indicating a freezing of spin components or domains that would otherwise be field-trained by a sufficiently strong field. Lower fields are not strong enough to field-train the magnetic order, and higher fields suppress the ZFC-FC splitting to lower temperature and reduce the splitting. 
In Fig. 3, two circuits are shown which take opposite paths through the \(H\)-\(T\) phase diagram, starting from 5 K and 0.5 T under zero-field-cooled conditions, and neither returns to its initial magnetization value. In both circuits a single-domain state is obtained. In both cases, a large step is seen on increasing the field to 5 T, but our \(M(H)\) data (shown below in Fig. 5) indicate that if we stayed at 5 K, a decreasing field would follow the same curve, since 14 T is insufficient to detwin the magnetic order for temperatures up to at least 10 K. The circuits in Fig. 3 exceed this temperature at high field, detwinning the magnetic order. A smaller difference between ZFC and FC data is also observed for fields \(\mathbf{H}\parallel[001]\). We do not see evidence of field training into a single-domain state for this orientation, so it is not clear what is being frozen or trained. Above \(\sim\)6 T, a peak appears around 60 K in the ZFC data for \(\mathbf{H}\parallel[110]\), which disperses to slightly higher temperatures as the field is increased. This enhanced response to the applied field suggests a phase transition, likely out of a frozen-in low-field state. Differences between ZFC and FC data also appear at some phase transitions, where they most likely arise from hysteresis between cooling (FC) and warming (ZFC) data. Similar hysteresis has been seen previously in the Figure 3: Demonstration of field training. Circuits through the \(H\)–\(T\) phase diagram at low temperature and low field, showing the effect of field training. Insets show the paths taken through the phase diagram. Figure 2: Comparison of field-cooled and zero-field-cooled magnetization data. Plotted for selected fields parallel to (a) [110] and (b) [001]; shading indicates the difference. The datasets in (a) have been offset vertically for clarity. 60 and 110 K transitions in SrFeO\({}_{3}\)[28]. An additional dip visible around 14 K in all data taken on warming is associated with a change in the cryostat's cooling mode and does not arise from the sample. Taken together with the FC data, these ZFC data make it clear that Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) has a rather complex \(H\)-\(T\) phase diagram. ## IV Dilatometry Dilatometry experiments assess thermal expansion -- the change in the unit cell as a function of temperature -- as well as expansions due to other parameters such as magnetic field. Magnetoelastic coupling induces forced magnetostriction upon the application of an external magnetic field, while below the ordering temperature, spontaneous magnetostriction can manifest. We applied magnetic fields in two configurations: parallel to the length measurement direction (longitudinal field) and orthogonal to it (transversal field). In contrast to forced magnetostriction in the paramagnetic phase, magnetostriction below the Neel temperature exhibits anisotropy in helimagnets, usually leading to a divergence of the transversal and longitudinal datasets in the ordered phase. This effect is distinct from magnetic detwinning and stems from the inherent anisotropy of magnetoelastic coupling. Changes of the sample length along the [110] direction caused by strong magnetoelastic coupling were studied in fields along the [110] direction (longitudinal) and the [001] direction (transversal). Thermal expansion data were recorded upon increasing temperature after zero-field cooling (1, 6, and 8 T) and after field training (0 T) and are shown in Fig. 4(a). 
Their derivatives, representing the coefficient of linear expansion \(\alpha\), are presented in Fig. 4(b). The measurement curves are nearly parallel for both directions but with a pronounced difference in absolute value, and they only converge at the magnetic ordering temperature \(T_{\mathrm{N}}=110\) K. From the absolute values in Fig. 4(a), lattice parameter changes on the order of \(5\times 10^{-5}\) can be estimated. Assuming a constant unit-cell volume, the sample expansion in the [110] direction upon entering the ordered phase would correspond to a contraction of the \(c\) lattice parameter. In the coefficient of linear expansion \(\alpha\), phase transitions are indicated by kinks; the ordering temperature also manifests clearly in this way. At lower temperatures, a first kink in \(\alpha(T)\) for 6 and 8 T at 25 K correlates with the recovery seen in the zero-field-cooled magnetization for fields above 4 T in Fig. 2. This transition is identified with arrows in Fig. 4(b). In addition, others are visible for the longitudinal \(\mathbf{H}\parallel\)[110] case for all measured external fields. Anomalies at 63-70 K correspond to the transitions seen in this temperature range in the magnetization. Field-dependent magnetostriction measurements in the longitudinal setup (\(\mathbf{H}\parallel\)[110]) are shown in Fig. 4(c). The first field sweep in each case is associated with a clear irreversible length reduction of about \(2\times 10^{-6}\) at 2.9-4.5 T, while in all following field sweeps at the same temperature the increasing-field curves nearly exactly track the decreasing-field curves with no irreversibility. This can be explained with a possible field training of the magnetic structure, for instance a domain-selection process. A strong kink is observed around 4 T, consistent with the transition in the magnetization results and the detwinning field observed here. The discussed anomalies and phase transitions agree quite well with the magnetization data. Figure 4: Dilatometry results on Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\). (a) Zero-field-cooled and zero-field (ZF) field-trained [110] thermal expansion data taken on warming for transverse (\(\mathbf{H}\parallel\)[001], bold lines) and longitudinal (\(\mathbf{H}\parallel\)[110], pale lines) field conditions at selected fields. (b) Linear expansion coefficient \(\alpha\), _i.e._ derivative of the data in panel (a). Offset is applied for visual clarity. (c), (d) Longitudinal and transverse magnetostriction data measured after ZFC. Measurements of the magnetostriction in the transverse setup (\(\mathbf{H}\parallel[001]\)) in Fig. 4(d) show a clear transition around 0.7 T which does not have obvious signatures in the magnetization. These transitions can thus be concluded to be lattice-driven or -influenced. ## V Field training In Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) there are two equivalent directions for the helical propagation vector, (\(\xi~{}\pm\xi~{}1\)), and the sample is expected to form a multidomain state with roughly equal contributions of both if cooled in zero field. As was discussed for instance in connection with ZnCr\({}_{2}\)Se\({}_{4}\)[39], it is often possible to detwin helical magnetism by applying a magnetic field perpendicular to the plane of the spins corresponding to one of the equivalent propagation vectors. 
The helix associated with that propagation vector can readily add a third component along the field to become conical with minimal impact on its ordered components, but other orientations of the helix are destabilized. The single-domain state thus prepared usually remains stable when the field is removed, due to the energy cost of nucleating domain walls. In Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) detwinning requires a field along [110]. To test for detwinning behavior and determine the required field strength, we measured magnetization as a function of field (4-quadrant \(M\)-\(H\) loops) in this field orientation. Selected data are plotted in Fig. 5(a), and more temperatures are plotted as \(M/H\) in Fig. 5(b). Derivatives are plotted in Fig. 5(c). As can be seen, there is a clear transition around 3 T, for both positive and negative field sweep directions, closely resembling the spin-flop transition reported recently in powder samples of the single-layer analogue Sr\({}_{2}\)FeO\({}_{4}\)[34]. At most temperatures this transition is accompanied by an irreversible detwinning transition -- the \(M/H\) values found before first reaching this field cannot be obtained again by field sweeps alone. This magnetic detwinning was verified by neutron scattering, as shown in Fig. 7. This sample was cooled in a field of 2.5 T applied along the [1\(\bar{1}\)0] direction, then measured in zero field. The magnetic reflections along the field were \(\sim\)3 times more intense than those perpendicular to the field, consistent with the partial detwinning expected for a field somewhat below the 3-4 T transition. The ability to detwin the magnetic order means that besides field-cooled and zero-field-cooled conditions, it is possible to measure the sample in its _single-domain state_ obtained by field training. Figure 5: Field-dependent magnetization data at selected temperatures. Plotted as (a) \(M(H)\) and (b) \(M(H)/H\) for \(\mathbf{H}\parallel[110]\) and as (d) \(M(H)\) and (e) \(M(H)/H\) for \(\mathbf{H}\parallel[001]\). Derivatives are plotted for (c) \(\mathbf{H}\parallel[110]\) and (f) \(\mathbf{H}\parallel[001]\). Vertical offsets have been added to the data in panels (b), (c), (e), and (f) for clarity. Knowing that 3-4 T is sufficient to detwin the magnetism at most temperatures, we took additional data with a third field history. For these field-trained data, shown in Fig. 6(a), the sample was cooled from well above \(T_{\mathrm{N}}\) in a field of 5 T, typically to a temperature of \(\sim\)50 K, before cooling to base temperature in zero field, upon which the sample was measured on warming in an applied field. A comparison of ZFC, FC, and field-trained data at 1 T is shown in Fig. 6(b), and the derivatives in Fig. 6(c). The field-trained data are vastly different from the other datasets over most of the temperature range, indicating detwinning of the magnetism. The field-trained curves rejoin the other field histories in a sharp transition roughly 7 K below \(T_{\mathrm{N}}\). In tests of the detwinning, we found that detwinning was preserved if we warmed to temperatures below this transition and cooled again, but detwinning was lost if we warmed into this transition. Such a transition would be explainable as either relaxation through fluctuations, or through the system entering a small bubble of a multiple-\(\mathbf{q}\) phase just below \(T_{\mathrm{N}}\). ac susceptometry curves (Fig. 
8) closely follow the dc magnetization curves, do not shift with frequency, and do not have clear features in the imaginary component, excluding fluctuations, and the field dependence in the \(M\)-\(H\) loops does not suggest improved detwinning at higher fields, so this is most likely a multiple-\(\mathbf{q}\) phase. Since there are only four possible \(\mathbf{q}\) orientations in this system -- \(\pm(110)\) and \(\pm(\overline{1}10)\) -- assuming the \(\mathbf{q}\) itself does not change and no component parallel to \(\mathbf{q}\) develops, this phase can only be double-\(\mathbf{q}\). Figure 8: ac susceptibility of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\). Real (upper) and imaginary (lower) components of the temperature-dependent ac susceptibility of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) at zero applied field, for several frequencies. There is no clear feature in the loss, and no evidence of frequency dependence. Figure 6: Effect of field training on magnetization. Magnetization data collected under field-trained conditions with \(\mathbf{H}\parallel[110]\), to prepare a single-domain state. (a) Field-trained data measured on warming at several applied fields. At 2 and 2.5 T, the sample was cooled in 5 T to base temperature; for the other datasets, the 5 T field was reduced to 0 T at 50 K before continuing to cool to base temperature. (b) A comparison of FC, ZFC, and field-trained data measured in \(\mu_{0}H=1\) T. (c) Derivatives of curves in (b). Figure 7: Effect of domain selection on magnetic Bragg peaks. Elastic neutron scattering intensity in the magnetic satellites around the structurally forbidden (005) reflection in zero field at 1.5 K after cooling in a field \(\mu_{0}\mathbf{H}\parallel[1\overline{1}0]\) of 2.5 T. We also measured \(M(H)\) loops for \(\mathbf{H}\parallel[001]\), as shown in Figs. 5(d-f). This field orientation shows a similar phase transition at very similar fields, but it is sharper and more pronounced. No detwinning is observed, but none would be expected since in this case the field is at equal angles to the planes in which the spins lie in the two domains. The surprising apparent anisotropy of this transition resembles that found previously for the spin-flop transition in Sr\({}_{2}\)FeO\({}_{4}\)[34], and identifying this transition may shed additional light on the single-layer material. That detwinning occurs near this transition suggests that the higher-field phases may not twin, that strong fluctuations of the order are found near this transition, or that very different magnetic structures are obtained at higher fields. ## VI Specific heat Since clear transitions are seen in the magnetization and dilatometry data below \(T_{\mathrm{N}}\), the specific heat was measured to determine the entropy associated with these transitions. As can be seen in Fig. 9, there is no clear signature of additional thermodynamic phase transitions below \(T_{\mathrm{N}}\). This indicates that the additional transitions are either broad crossovers or are associated with very small changes in entropy. In particular, there is clearly no spin species or spin component that orders or disorders at these transitions. The \(c_{P}/T\) suggests a buildup of entropy below 20 K, presumably magnetic, perhaps associated with the freezing transition seen in the difference between ZFC and FC magnetization. This did not respond to a field of 7 T along [001]. 
## VII Muon spin rotation The implanted muon is a very sensitive local magnetic probe, which in particular can demonstrate clearly whether the helical order becomes commensurate in any of the magnetic phases. In zero applied field in a magnetically ordered solid, the muon experiences a spontaneous field from the magnetic order. The muon spin precesses about any transverse component of this field, and, in the simplest case, the Fourier spectrum has a single resonance at the corresponding Larmor frequency. In a helimagnet with a long pitch or incommensurate wavevector, muons stopping at different positions along the helix experience different local fields, and in the continuous limit, the spectrum approaches a broad sinusoidal distribution[41]. The local field distribution is not very sensitive to the precise ordering wavevector or details of the ordered structure. It is, however, a volume average probe that can reveal phase separation phenomena[42] that may be difficult to detect by other means. In our data in zero applied field, the muon spin relaxes slowly in the paramagnetic state -- see Fig. 10(a) -- however, the relaxation appears exponential rather than the Gaussian expected from nuclear dipoles, which are static on the timescale of \(\mu\)SR. In fact, there are few nuclear moments in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), the most important being the \(\sim 7\%\)-abundant \({}^{87}\)Sr. The exponential relaxation should thus be due primarily to the fluctuating fields of the Fe moments. This is confirmed by the temperature dependence of the relaxation rate \(\lambda\) obtained from single-exponential fits [Fig. 10(a) inset], which shows a clear increase as the Fe spins slow on the approach to the Neel transition. The temperature dependence \(\lambda(T)\) is stronger than the bulk static uniform magnetic susceptibility (green curve: \(\mathbf{H}\parallel[001]\), 0.2 T). This is unsurprising, since \(\lambda\) is a local property and determined by an integral over all \(q\), including the ordering wavevector, while the \(q=0\) response will be suppressed by the occurrence of strong antiferromagnetic correlations in the paramagnetic state. Below \(T_{\mathrm{N}}\) the magnetic order gives rise to a static internal field at the muon site, changing the relaxation dramatically as seen in Fig. 10(b). Deep in the ordered state at 2 K, a large internal field causes rapid precession of a large fraction of the spin polarization. However, this precession is nearly invisible due to extremely rapid relaxation of the spontaneous oscillations. At such a low temperature, the relaxation is probably also _static_ in nature, reflecting a broad distribution of internal fields. This is consistent with helimagnetic order for all temperatures below \(T_{\mathrm{N}}\). Measurements in an applied transverse field (not shown) confirm that the full volume is magnetically ordered. Fitting the rapidly damped oscillations, which are confined to the first \(\sim\)100 ns of data, reveals a frequency (field) of roughly 50 MHz (0.37 T) at low temperature. Although this field is quite large, it is much smaller than the fields seen by Mossbauer spectroscopy at the \({}^{57}\)Fe nucleus [40], but this is expected due to the much stronger hyperfine coupling in the latter. The temperature dependence of the fitted frequency is shown in Fig. 10(c) together with a curve scaled from the Mossbauer data. 
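As a quick consistency check on the numbers quoted above, the zero-field precession frequency can be converted into the local field at the muon stopping site using the standard muon gyromagnetic ratio, \(\gamma_{\mu}/2\pi\approx 135.5\) MHz/T. The short Python sketch below is illustrative only and is not part of the original analysis.

```python
# Illustrative check (not from the paper's analysis): convert a zero-field muon
# precession frequency into the local field at the muon stopping site.
GAMMA_MU_OVER_2PI = 135.53  # muon gyromagnetic ratio, MHz per tesla

def local_field_tesla(freq_mhz: float) -> float:
    """Local field (T) implied by a muon spin precession frequency (MHz)."""
    return freq_mhz / GAMMA_MU_OVER_2PI

print(f"B_local = {local_field_tesla(50.0):.2f} T")  # ~0.37 T for the ~50 MHz quoted above
```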
The muon frequencies are roughly consistent with the Mossbauer temperature dependence, confirming that the Figure 9: Specific-heat data on Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\). (a) Specific heat and (b) specific heat divided by temperature for Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), measured in zero applied field and in a field of 7 T along [001]. The transitions found at low field in the magnetization are marked. The inset offers an expanded view around 70 K, where the transition suppressed by 7 T is also marked. internal field (proportional to the ordered moment) rises rapidly below \(T_{\rm N}\). The rapid damping of the precession reflected in the large error bars and scatter in Fig. 10(c) precludes detection of more subtle distinguishing features of the ordered phases. ## VIII Phase diagrams It is possible to extract phase transitions from the magnetization and dilatometry data, most readily from extrema in their derivatives, to generate \(H\)-\(T\) phase diagrams for various field orientations. Our data allow us to present such phase diagrams for \(\mathbf{H}\parallel[100]\) in Fig. 11(a), \(\mathbf{H}\parallel[110]\) in Fig. 11(b), and \(\mathbf{H}\parallel[001]\) in Fig. 11(c). Several features only appear under field-training, which was only possible for [110] fields, or under zero-field-cooling conditions, which were not measured for \(\mathbf{H}\parallel[100]\), and field sweeps were also not measured for [100], so the [100] and [001] phase diagrams should be viewed as incomplete. However, there are some surprising similarities. In particular, the transition at \(\sim\)3 T is nearly isotropic, and the transition that starts at 70 K is suppressed by field in a nearly identical manner, independent of field orientation. Isotropic phase transitions are not expected in a highly anisotropic layered crystal lattice, or in light of the previously reported elliptical helix propagating along (110) [31]. The decrease in magnetization at the 70 K transition is comparable to that at \(T_{\rm N}\), and the change in slope in \(M(H)\) around 3 T is a factor of 2 at many temperatures and is clearly seen in dilatometry, indicating that these are unambiguously intrinsic, bulk transitions. That the former is not clearly seen in the specific heat indicates that it is either a broad crossover or not associated with a large change in entropy. Perhaps these transitions correspond to energy scales in the magnetic interactions or spin reorientations, but detailed diffraction studies in field, and for different field orientations, are required to clarify this issue. A weak suppression of \(T_{\rm N}\) for any field direction is less surprising, since an applied field will eventually destabilize helical order. Dilatometry points are shown in the phase diagrams as open symbols, with triangles pointing in the direction of the field or temperature sweep and diamonds used for magnetostriction transitions which were consistent for both sweep directions. These points largely agree with those from magnetization, as already discussed, but there are a few inconsistencies. In particular, the boundary between phases I and II evolves into the VI-V boundary in the dilatometry measurements, rather than the IV-V boundary. There are also points around 90 K for \(\mathbf{H}\parallel[110]\) and around 0.75 T for \(\mathbf{H}\parallel[001]\) which do not correspond to features in the magnetization. 
The latter is evidently not related to the magnetic order since it also appears at 0.8 T at 140 K in the paramagnetic phase, while the former could conceivably be structural in origin. Shading in Fig. 11(b) indicates the approximate maximum extent of field-training, based on a judgement of the highest field at which the last vestiges of this effect can still be observed in \(M(H)\) and its derivative (gray triangles). This onset of field training corresponds roughly to the onset of a difference between FC and ZFC data at low temperature, and to the 3-4 T transition at intermediate temperatures. No field training is observed in phase III, and it is unclear whether phase V supports twinning, but the inability of a 14 T field to detwin the magnetism up to 10 K implies that phase IV and presumably phase VI can twin. These phases can be detwinned at higher temperatures or by cooling into them in field. With only two directions for the propagation vector, it is difficult to produce three distinct combinations to explain the phases at zero field. Possibilities include a subtle structural change due to magnetoelastic coupling, order of orbitals or charge multipoles, temperature-dependent changes to the propagation vector, ordering of an overlooked component of the spin, or some form of exotic multiple-\(\mathbf{q}\) order such as those proposed theoretically in other contexts in Refs. [43, 44, 45, 46] but not yet demonstrated experimentally. A subtle orthorhombic distortion associated with charge order in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) is observed below \(\sim\)330 K, but with an extremely short correlation length along the \(c\) direction [33]. Figure 10: \(\mu\)SR results on Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\). \(\mu\)SR asymmetry in zero applied field in (a) the paramagnetic state, and (b) the magnetically ordered state. Data on high-purity nonmagnetic Ag is included as a baseline. (c) Temperature dependence of the oscillation frequency, compared against scaled Mössbauer data [40]. The sharp magnetic Bragg reflections in both single-crystal and powder diffraction imply a long magnetic correlation length in all directions. This means that every magnetic domain will average over many structural domains, and the material will be effectively tetragonal from the point of view of the magnetism. This is particularly true once the magnetism is detwinned and the entire sample is a single magnetic domain. The iron is close to Fe\({}^{3+}\) (high spin), which has no orbital polarization and is spherically symmetric. Its excess positive charge is predominantly delocalized on the oxygen cages, which should preclude any orbital or charge multipole ordering. Any significant magnetoelastic coupling should make the transitions within the ordered state visible in the specific heat data, particularly in field, which they are not, and a structural component to the transition would have been seen in Ref. [33]. These transitions are presumably magnetic. We thus investigated the temperature and field dependence of the propagation vector. ## IX Magnetic order Looking first at the low-field phases, we note that the magnetic state of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) has been previously reported as an elliptical helix based on neutron diffraction [31]. 
Since we have identified an unexpectedly complex phase diagram for all field orientations, one immediate question is whether there is any obvious change in the propagation vectors, ellipticity, or intensities of magnetic reflections, which would give a hint as to the nature of the magnetic phase transitions. We have seen above that the \(\mu\)SR results are consistent with helical magnetism at all temperatures below \(T_{\rm N}\), so we now turn to diffraction. The diffracted intensity in zero applied magnetic field was tracked versus temperature for a single-crystal sample at E5 and for powder samples at E6 and HRPT [the latter is shown in Fig. 13(a)], and the magnetic Bragg peaks remain at their incommensurate positions. As Figure 11: Phase Diagrams. \(H\)–\(T\) phase diagrams for Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) extracted from the magnetization (closed symbols) and dilatometry data (open symbols) for fields along (a) [100], (b) [110], and (c) [001]. Triangles point in the direction of the field or temperature sweep, other symbols represent transitions that do not depend on sweep direction. Field sweeps were not performed for \({\bf H}\parallel[100]\), so these data were not sensitive to the 3–4 T transition. Shading in (b) indicates the approximate region in which field training can be discerned. Figure 12: Temperature- and field-dependent incommensurability from neutron diffraction. Diffracted magnetic intensity in zero field at E5, with the transitions from the low-field \({\bf H}\parallel[110]\) magnetization marked. The integrated intensity in the \((\xi\,\overline{\xi}\,\overline{1})\) reflection shows no signature of a transition within the magnetically ordered state. Inset: the incommensurability is reduced slightly on warming toward \(T_{\rm N}\), and appears insensitive to the magnetization transitions. shown in Fig. 12, there are no sharp changes in the intensity of the magnetic reflections with temperature, and in particular there is no signature of the transitions found in the magnetization. The temperature and field dependence of the incommensurability measured on powder samples at E6 and HRPT and a single crystal measured at E4, shown in the inset to Fig. 12, are smooth and on the scale of variations among samples or beamlines. That the incommensurability appears to be insensitive to all magnetization transitions indicates that there are no significant changes to the underlying \(\mathbf{q}\) vectors with temperature or with magnetic field up to at least 6.5 T. This suggests that, as in SrFeO\({}_{3}\), these phases are distinguished by different combinations of \(\mathbf{q}\) vector; however, as mentioned above, Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) has three magnetic phases at low field and at least two more above \(\sim\)3 T, but a maximum of only two independent \(\mathbf{q}\) vectors. It remains unclear what distinguishes phases I and II. To investigate how the higher-field phases differ from the low-field phases, diffraction was performed in magnetic fields \(\mathbf{H}\parallel[001]\) and \([110]\) at E4 and on powder at HRPT -- the latter is shown in Fig. 13(b). The volume of reciprocal space blocked by the magnet (E4 and E6), the random field orientation on the powder sample (HRPT), and possible field-induced preferred orientation effects all limit what can be said about the high-field phases, but the changes in position of the magnetic reflections were again minimal, as seen in Fig. 13(b) and summarized in the inset to Fig. 12. 
Intensities in the magnetic peaks changed across the 3-4 T transition in a manner suggestive of a reduction in the in-plane component of the ordered moment. Based on the previously reported elliptical helix [31], this would indicate a higher ellipticity. However, the change across this transition appears to be relatively abrupt and step-like, and it remains unclear why the ellipticity should be quantized. Clarifying the nature of the higher-field phases and their relationship to the low-field phases, as well as fully identifying the low-field phases, will require a detailed single-crystal diffraction study in a magnet capable of applying at least 4 T along [110], to detwin the low-field phases and access the high-field phases. We note, however, that a similar-looking transition at 3-4 T was found in magnetization data on powders of the single-layer analogue Sr\({}_{2}\)FeO\({}_{4}\), where it was reported as most likely a spin-flop transition [34]. This transition is presumably also relatively isotropic, despite the strong structural anisotropy. Sr\({}_{2}\)FeO\({}_{4}\) is only available in powder form, so clarifying the nature of the 3-4 T transition in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) will likely also provide strong hints as to the magnetic phase diagram of Sr\({}_{2}\)FeO\({}_{4}\). Our inability to detwin phase III makes it a prime candidate for double-\(\mathbf{q}\) order analogous to the skyrmion-lattice phase I\({}_{\mathrm{c}}\)[47] in SrFeO\({}_{3}\). In contrast, both phases I and II in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) could be detwinned, indicating that they both break the four-fold rotational symmetry of the lattice. Any multiple-\(\mathbf{q}\) order in either of these phases would need to be extremely exotic, but it also remains unclear how to realize two independent single-\(\mathbf{q}\) phases with helical order alone. The ordering of an overlooked spin component would be possible, particularly in phase I, since the loss of this order would be expected to enhance \(M/H\) on warming. However, previous refinements of the magnetic order were performed on single crystals at low temperature, and should have detected this. The helical order at low temperature has been reported to be elliptical [33], so the ellipticity could change, as suggested across the 3-4 T transition, but no clear change is seen with temperature. Above the 3-4 T transition, while it is possible to freeze the magnetic order at low temperatures and prevent detwinning (distinguishing phase IV-A from IV-B), the higher-field phases otherwise seem to be largely detwinned. The peak in the magnetization separating phases V and VI could perhaps arise from fluctuations as the magnetic order reorients itself in some way. However, we have not observed a clear change across the IV-V boundary with diffraction, and our magnetic fields were not high enough to access phase VI, so differences among the higher-field phases remain unclear. Figure 13: Effect of temperature and field on magnetic Bragg reflections. Evolution of magnetic neutron intensity (HRPT) with (a) temperature and (b) magnetic field. Insets highlight the strongest magnetic peaks for an impurity phase of SrFeO\({}_{3}\) (6.1\({}^{\circ}\)) and Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) (7.7\({}^{\circ}\)). Datasets in the paramagnetic phase at 150 K and 0 T (a) or 6 T (b) are included for reference. Identifying these phases will require detailed high-field diffraction measurements on single crystals. 
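The phase boundaries collected in Fig. 11 were located from extrema in the derivatives of the magnetization and dilatometry curves (Sec. VIII). The sketch below illustrates that workflow on synthetic \(M(T)\) data; it assumes numpy and scipy are available and is not the analysis code used for this work.

```python
import numpy as np
from scipy.signal import argrelextrema

# Sec. VIII workflow on synthetic data: candidate transitions appear as extrema
# of the numerical derivative dM/dT of a magnetization curve at fixed field.
T = np.linspace(2.0, 150.0, 600)                        # temperature grid (K)
M = 1.0 / (1 + np.exp((T - 111) / 2)) \
    + 0.3 / (1 + np.exp((T - 70) / 3))                  # two smeared steps as stand-in transitions

dMdT = np.gradient(M, T)
candidates = argrelextrema(dMdT, np.less, order=10)[0]  # local minima of dM/dT

print("candidate transition temperatures (K):", np.round(T[candidates], 1))  # ~70 and ~111
```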
## X Summary and Outlook The magnetic phase diagram of Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) is surprisingly complex, and highly reminiscent of that of SrFeO\({}_{3}\). This is despite SrFeO\({}_{3}\) having four distinct directions for its propagation vector pointing along {111}, while there are only two such directions possible in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\). The high-temperature phase III cannot be detwinned by field, making it evidently a double-**q** phase, possibly analogous to the low-temperature skyrmion-lattice phase I\({}_{\rm c}\) in SrFeO\({}_{3}\). However, it remains unclear what distinguishes phases I and II. The transition at 3-4 T, likely analogous to the "spin-flop" transition in Sr\({}_{2}\)FeO\({}_{4}\)[34], may be related to the ellipticity of the helical order. The other transitions and the identities of the remaining phases remain unclear. The phase diagram of SrFeO\({}_{3}\), despite some similarities, provides limited insight here -- its quadruple-**q** phase II\({}_{\rm c}\) is impossible in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), and its phases III\({}_{\rm c}\), IV\({}_{\rm c}\), and V\({}_{\rm c}\) have not been identified. At higher fields, there is very little diffraction data on either material to provide insight. Since Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) has only two possible propagation directions for its helical order, with spin orientations in orthogonal planes, perfect detwinning of the magnetic order is possible, and we have shown that this is readily achieved at accessible temperatures and fields. This is in contrast to SrFeO\({}_{3}\), in which it is not possible to fully detwin all magnetic phases with a magnetic field. Fully determining the magnetic phases in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) will be more straightforward and is likely to provide insight for SrFeO\({}_{3}\), allowing better targeting of future measurements as that material's phase diagram is elucidated. The single-layer analogue Sr\({}_{2}\)FeO\({}_{4}\) is possibly more relevant to the current work, but less is known of its magnetic structures. This is largely because it decomposes far below the liquidus [29], making crystal growth impossible thus far. A spin-flop transition reported in that material in field [34], which must be relatively isotropic since this was measured on powder, closely resembles the 3-4 T transition seen here. In Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) this transition appears to be connected with a relatively sharp change in the ellipticity of the helical order, but such a relatively abrupt change in a parameter which ought to be continuous is surprising, suggesting that our understanding of the low-field phases is incomplete. Diffraction on single crystals should be performed to nail down the phases in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), which will in turn allow inferences as to the magnetic phase diagram of Sr\({}_{2}\)FeO\({}_{4}\). It is worth commenting here that while SrFeO\({}_{3}\) is too symmetric to support DM interactions, the charge disproportionation in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) should lead to a lattice distortion which would allow them. Yet, the strong similarities in the magnetic order and phase diagrams among the three better-studied members of this family indicate that DM interactions play no significant role. We would thus anticipate a similar phase diagram and similar magnetic order in the triple-layer analogue Sr\({}_{4}\)Fe\({}_{3}\)O\({}_{10}\), which to our knowledge has not been investigated. 
The helical and multiple-**q** order found in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) and Sr\({}_{2}\)FeO\({}_{4}\), and likely also present in Sr\({}_{4}\)Fe\({}_{3}\)O\({}_{10}\), must arise from the same competition among exchange interactions, without DM, even if DM interactions are allowed. In light of its surprisingly complex magnetic phase diagram, Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\) calls for more detailed investigation to identify its magnetic phases and phase transitions. The diffraction, in particular, should be revisited at high fields and under field-trained conditions, and transport properties may reveal signatures of topological protection that would help clarify which phases are multiple-**q**. It would also be worth revisiting the [100] and [001] phase diagrams in a vector magnet, which would allow field-training into a single-domain state before measuring. While it is not yet possible to identify most of the magnetic phases found in Sr\({}_{3}\)Fe\({}_{2}\)O\({}_{7}\), its magnetic phase diagram is clearly much richer than previously imagined, and it will likely yield several exotic magnetically ordered phases. ###### Acknowledgements. The authors are grateful for experimental assistance from the groups of M. Jansen and R. Kremer. This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through individual grants PE 3318/3-1 (Project No. 455319354), IN 209/7-1 (Project No. 401179363), and IN 209/9-1 (Project No. 434257385); through projects C01 and C03 of the Collaborative Research Center SFB 1143 (Project No. 247310070); through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Materials -- _ct.qmat_ (EXC 2147, Project No. 390858490); and through the Collaborative Research Center TRR 80 (Project No. 107745057). Neutron measurements were carried out at the E4, E5, and E6 instruments at the BER-II research reactor, operated by the Helmholtz-Zentrum Berlin fur Materialien und Energie. This work is based in part on experiments performed at the Swiss spallation neutron source SINQ, Paul Scherrer Institute, Villigen, Switzerland. The authors also acknowledge the Institut Laue-Langevin, Grenoble (France) for providing neutron beam time [48].
2307.16531
The Ontology of Compositeness Within Quantum Field Theory
In this work, we attempt to define a notion of compositeness compatible with Quantum Field Theory. Considering the analytic properties of the S-matrix, we conclude that there is no satisfactory definition of compositeness compatible with Quantum Field Theory. Without this notion, one must claim that all bound states are equally fundamental, that is, one cannot rigorously claim that everyday objects are made of atoms or that atoms are made of protons and neutrons. I then show how an approximate notion of compositeness may be recovered in the regime where the mass of a bound state is close to a multi-particle threshold. Finally, we see that rejecting compositeness solves several of the "problems of everyday objects" encountered in an undergraduate metaphysics course.
Toby Peterken
2023-07-31T09:50:54Z
http://arxiv.org/abs/2307.16531v2
# The Ontology of Compositeness Within Quantum Field Theory ###### Abstract In this work, we attempt to define a notion of compositeness compatible with Quantum Field Theory. Considering the analytic properties of the S-matrix, we conclude that there is no satisfactory definition of compositeness compatible with Quantum Field Theory. Without this notion, one must claim that all bound states are equally fundamental, that is, one cannot rigorously claim that everyday objects are made of atoms or that atoms are made of protons and neutrons. I then show how an approximate notion of compositeness may be recovered in the regime where the mass of a bound state is close to a multi-particle threshold. Finally, we see that rejecting compositeness solves several of the "problems of everyday objects" encountered in an undergraduate metaphysics course. ## 1 Introduction * 2 Why should we take the unobservable seriously? * 3 What is a satisfactory definition of compositeness? * 4 The incompatibility of compositeness and QFT * 4.1 A warm-up from perturbation theory * 4.2 The full, non-perturbative argument * 4.3 Consequences of rejecting compositeness * 5 Possible objections * 5.1 Objection 1: The usefulness of phenomenological models * 5.2 Objection 2: Quantum numbers as fundamental building blocks * 5.3 Objection 3: Bethe-Salpeter wavefunctions and the Weinberg Compositeness Criterion * 6 An Approximate Notion of Compositeness * 6.1 Can the Dennett Criterion save compositeness? * 7 The problem of ordinary objects * 7.1 Problem Of The Many * 7.2 Trogs * 7.3 Material Constitution * 8 Conclusion * A Expanding a bound state in terms of Bethe-Salpeter wavefunctions * B Introduction I began questioning the nature of compositeness within Quantum Field Theory (QFT) when writing my first literature review. When talking about different particles, papers classified them as conventional hadrons, exotic hadrons, hadronic molecules, and so on. I could not find a convincing explanation of the difference between these categories of particles. As bound states manifest themselves as poles in a scattering amplitude [1], their properties (such as mass, width, or even just their existence) cannot be calculated using a perturbative framework. An alternative approach would be to use lattice field theory for these calculations (these techniques are covered in many books such as [2] or in relevant review articles such as [3]). As an example (chosen only because it is related to what I was reading at the time): in lattice field theory calculations the existence and mass of a stable bound state can be found from an appropriate time-dependent (Euclidean) correlator. A bound state of mass \(M\) causes the correlator to exhibit the following time dependence, up to discretization and volume effects: \[C(t,\mathbf{P})=\langle\sigma(t,\mathbf{P})\sigma^{\dagger}\rangle\propto e^{-t\sqrt{M^{2}+\mathbf{P}^{2}}}+\text{ scattering states} \tag{1}\] where \(\sigma\) is an interpolation operator with a specified set of quantum numbers of the single-particle state. The details are unimportant, except that all stable bound states show the same behaviour, independent of whether they would be considered conventional hadrons or not. At the time it felt as if these methods obscured the difference between the different categories of particles - in hindsight, I would say it emphasized the similarities. 
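To make the use of Eq. (1) concrete: at zero momentum the ground-state contribution decays as \(e^{-Mt}\), so the effective mass \(m_{\text{eff}}(t)=\ln[C(t)/C(t+1)]\) plateaus at \(M\) once excited and scattering states have died away. The sketch below uses synthetic correlator data in lattice units and is purely illustrative, not a real lattice computation.

```python
import numpy as np

# Illustration of how Eq. (1) is used in practice: at zero momentum the ground
# state decays as exp(-M*t), so m_eff(t) = log[C(t)/C(t+1)] plateaus at M once
# excited and scattering states have died away. The correlator is synthetic
# (ground state M = 0.5 plus one excited state, lattice units).
t = np.arange(32)
C = 1.0 * np.exp(-0.5 * t) + 0.4 * np.exp(-0.9 * t)

m_eff = np.log(C[:-1] / C[1:])
for ti, m in zip(t[10:20], m_eff[10:20]):
    print(f"t = {ti:2d}   m_eff = {m:.4f}")   # approaches 0.5 from above
```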
Eventually, I came to the conclusion that the different types of bound states are actually just a form of cataloguing: conventional hadrons are those whose quantum numbers can be produced from a simple combination of valence quarks whereas hadron-hadron molecules have the quantum numbers of a two-hadron channel, and have a mass that is just below this two-particle threshold [4, 5]. However, cataloguing based on phenomenological characteristics does not always give a good picture of the underlying reality. Biology is full of different categories, but there are always exceptions and ambiguous cases (as exemplified by the name of the podcast "No Such Thing as a Fish" [6] or that at first people didn't believe in the existence of a duck-billed platypus due to it not fitting into the pre-established categories of animals [7] - despite the overwhelming evidence of 129 Phineas and Ferb episodes [8]). Categorization is useful for sorting our observations, but not so useful for providing a fundamental understanding. In this work, I explore the notion of compositeness and its relation to QFT, and come to the potentially unsavoury conclusion that _no satisfactory, exact, and rigorous definition of compositeness exists that is compatible with QFT._ I start by justifying why it is reasonable to take the unobservable aspects of a physical theory seriously. I then go on to elucidate what I require from a satisfactory definition of compositeness. I then present two arguments as to why this satisfactory definition is not compatible with QFT and explain a few corollaries of this result, the first argument is best considered as a warm-up using perturbation theory, and the second argument as the main result of this work. I then go through a range of possible objections and show that they don't save the notion of compositeness. I then show how it is possible to define an approximate notion of compositeness and how it can be included in the ontology of higher-level theories such as molecular or solid-state physics. Finally, I relate this to the problem of ordinary objects [9] as encountered in introductory metaphysics courses and show that many of these problems dissolve if compositeness is rejected. I assume the audience of this work is familiar with QFT, at the level of a graduate course, with some knowledge of its applications to particle physics and some of the formal aspects of scattering theory such as the LSZ procedure and the analytic structure of the S-matrix (chapter 7 of ref. [10] should suffice). ## 2 Why should we take the unobservable seriously? Questions about interpreting scientific theories, or about the ontological status of certain aspects of a scientific theory (that is questions about if and how the features of a scientific theory exist) often seem to be ignored by working physicists - at least in their professional work. In fact, personally, I have found many people to dismiss such questions as 'too philosophical' and not really worth thinking about. In this section, I explain why I feel this dismissal is often too premature. QFT is hopefully not a theory that merely relates the center of mass energy of a hadron collider to some numbers on the screen; hopefully it is not just a theory relating free particles from infinitely in the past to free particles infinitely in the future - even if this is what is directly observable and well defined via the LSZ procedure. 
Something happens, in the real physical world, between starting an experiment at the Large Hadron Collider (LHC) and seeing numbers on a screen (or rather, something is going on between the asymptotic past and future states). We may never be able to directly observe what is going on, we may never be certain about what is going on, we may never have a unique theory1 to tell us what is going on. But _something_**is** going on. It would be remarkable if a theory that gave such accurate observational predictions was also completely wrong about everything else. A general introduction to the questions of scientific realism can be found in ref. [12; 13]. Footnote 1: Here I use the word ‘unique’ differently to physicists. In physics, two theories are considered equivalent if they always give the same observational outcomes but here I take a much stronger definition, two theories are unique if all content (observable and unobservable) is the same. In this sense, a single mathematical formalism can give rise to several distinct theories depending on how it is interpreted. More details on this are given in chapter 2 of ref. [11]. There is a nice analogy (taken directly from the introduction of ref. [14]) that highlights that this dismissal can often happen inconsistently, with people much less willing to take the realism of quantum mechanics as seriously as other theories. When we look at distant galaxies - so distant we will probably never get to them and the only thing we can do is look from afar - all we can see is a sort of hazy glow. Using our understanding of galactic structure we can infer that these galaxies are made from hundreds of billions of stars, that these stars will have planets around them, and that some of these planets will have atmospheres. Even though we will never see these planets, I believe they exist and I believe they do have atmospheres and I have not yet met a physicist who would claim to doubt this either. We believe in these atmospheres, on the ground that they are inferred from taking our best scientific theory seriously. Why are they given a privilege when quantum fields are not? Even though the unobservability of extra-galactic atmospheres is due to practical limitations, no one will observe them in my lifetime and so when making the individual assessment of their existence, I can still only infer from the current best theory. I now do the same with QFT. What is a satisfactory definition of compositeness? I take the following to be necessary requirements for a satisfactory definition of compositeness. These requirements are asserted in a loose manner as these are taken to be minimal requirements and having an overly precise definition would lead to this work rejecting a definition of compositeness that is too specific. * **For \(X\) to be a composite object made of \(A\) and \(B\) we need to be able to refer to \(A\) and \(B\), in a well-defined way, whilst \(X\) is in existence.** For example, a research group is made out of a collection of people, I can refer to both the research group and the members of the research group in a completely unambiguous way. The members do not define the group, specific individuals can join and leave the group; however the group, at any given time, contains a set of individuals which can unambiguously be referred to. In non-relativistic quantum mechanics, the hydrogen atom is a composite object containing an electron and a proton. We can unambiguously refer to the coordinates of the electron and proton separately to the whole atom. 
* that is, compositeness is not reflexive2**. If an atom is made of a proton and an electron, then it would be absurd to say that the proton contained an atom as then an atom would contain a proton which would contain an atom ad infinitum. If you have a desire to save the notion of compositeness due to the historically successful explanatory power of reductionism, then this axiom is needed to save reductionist explanations from circularity. Footnote 2: Within this I also exclude the possibility of a circle of compositeness. We cannot have \(A\) containing \(B\), \(B\) containing \(C\) and then \(C\) containing A. * **Reality cannot depend on arbitrary choices.** In QFT, there are a huge range of choices I could make: gauge, renormalization scheme, I can rewrite the Lagrangian in a range of different ways, I could split up the Hamiltonian into a 'free' and 'interacting' part in a range of different ways and so on. If I want to make a meaningful statement about the external world, then that statement can't depend on any of these arbitrary choices made. ## 4 The incompatibility of compositeness and QFT In this section we justify the following statement: _No satisfactory, exact (in the sense of being non-perturbative) and rigorous definition of compositeness exists compatible with QFT._ ### A warm-up from perturbation theory To explore compositeness exactly, any core argument cannot rest upon perturbation theory. That being said, it lays the groundwork for most working physicists and we will see that hidden in perturbative calculations was a prophecy of the argument in the next section. Take the standard example of the self-energy of the electron (a complete calculation can be found in 18.2 of [15]), the 2-point function for an electron travelling with fixed 3-momentum \(\mathbf{p}\) is: \[C(t,\mathbf{p})=\langle\psi_{e}(p^{0}=\sqrt{\mathbf{p}^{2}+m_{e}^{2}},\mathbf{p})\bar{\psi }_{e}(x=0)\rangle. \tag{11}\] At leading order, this is given by the propagator of the free electron field as shown in figure 1. At higher orders, the case is not so simple, the full electron propagator gets contributions from other fields in the theory, as shown in figure 2. In fact, if the Weak Force is included then there are graphs in which the electron field is replaced by the neutrino! The problem here is _not_ that the physical electron contains a superposition of a range of different fields, none of the criteria given in section 3 rule a superposition out of a well-behaved definition of compositeness. The problem lies in the fact that the relative contribution from the different diagrams depends on a range of different choices. For the most striking example of this, compare the on-shell renormalization scheme, where all higher-order diagrams get removed, to the MS scheme, where higher-order loop diagrams do contribute. If we took these perturbative diagrams seriously when defining compositeness we would find that the physical electron contains a W-boson - until we switch to the on-shell scheme and find that the contribution from this diagram cancels (when the electron is on-shell). A final, but tangential, point I want to emphasize is that the physical electron (associated with a single particle state) is not a priori the same as either the electron field or the electron degree of freedom in the Lagrangian nor is the interpolating operator the same as either physical electron or the electron field. 
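For concreteness, the scheme dependence invoked above can be summarised by the standard textbook relations for the resummed electron two-point function (these are generic statements about renormalization, with \(m_{0}\) the bare mass, \(m\) the physical mass and \(\Sigma\) the renormalized self-energy, not a calculation specific to this paper): \[S(p)=\frac{i}{\not{p}-m_{0}-\Sigma(p)+i\epsilon},\qquad\text{on-shell scheme: }\ \Sigma\big|_{\not{p}=m}=0,\quad\frac{\partial\Sigma}{\partial\not{p}}\bigg{|}_{\not{p}=m}=0,\] whereas in the \(\overline{\text{MS}}\) scheme only the \(1/\epsilon\) poles (and universal constants) are subtracted, so finite loop contributions survive at \(\not{p}=m\). The 'particle content' one would read off from the higher-order diagrams therefore shifts with this choice, which is precisely the kind of arbitrary dependence the warm-up argument points to.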
Figure 1: At leading order, the fully dressed electron propagator just consists of the propagator for the bare electron field. Figure 2: At higher order the physical electron gets contributions from a range of different free particles. ### The full, non-perturbative argument In this section, I present what I consider to be the main result of this work: the full non-perturbative argument showing the incompatibility of QFT and compositeness. An outline of the argument is given below and will be further justified throughout the rest of this section. P1) All S-matrix poles have the same ontological status. That is, all S-matrix poles exist in the same way and don't carry enough information to create a hierarchy of fundamentality. P2) Two objects, \(X\) and \(Y\), are each individually associated with an S-matrix pole. C) The objects \(X\) and \(Y\) therefore have the same ontological status. Neither is privileged over the other, the existence of one cannot depend upon or be contained within the other, and neither can be composed of the other. \(\therefore\). To justify the first premise, note that a pole only carries 3 pieces of information [5]. None of these provide the relevant information needed to create any form of ontological hierarchy. The location of the pole gives the mass and width of the bound state or resonance, the residue of the pole determines the coupling strength to a given channel, and all S-matrix poles are believed to be first order and hence the order gives no extra information (no full proof has been found, but real-axis poles are associated with physical states and are necessarily simple; these can, under certain choices of the coupling, become resonances, and all poles should have the same structure). The second premise is merely a matter of computation. Hadron spectroscopy calculations have confirmed the existence of S-matrix poles associated with a range of low-mass states [3]. There is, however, no reason to believe that there is (in principle) an upper mass limit to the methods of hadron spectroscopy. With a big enough computer, one could calculate the mass or other interesting quantities of everyday objects directly from the underlying field theory. Not everything that could be colloquially called an object is associated with an S-matrix pole (I go into more detail about what could constitute an object in section 6.1). However, I do claim that any collection of matter that is somehow stuck together is associated with a pole in the S-matrix: specific atoms, molecules, chairs and people are all associated with a pole. ### Consequences of rejecting compositeness By rejecting compositeness we have to accept that all bound states (or rather all poles of the S-matrix) have the same ontological status in QFT. Electrons, photons, pions, atoms, molecules, chairs and people are all as fundamental as each other3. Footnote 3: I have purposefully excluded quarks from this list due to confinement - even in pure QCD they are not stable. Without a well-defined notion of 'made of', we cannot rigorously say 'atoms are made of a proton and an electron' or that 'chairs are made of atoms' or even that 'an Ikea chair is made of a selection of screws and bits of wood!'4; all of these objects are equally fundamental in QFT. 
Footnote 4: although it is indeed made _from_ these things in the sense that by combing screws and wood results in a chair Possible objections In this section, I present a range of objections to the argument above and show how these objections ultimately fail to save compositeness as a well-defined notion. ### Objection 1: The usefulness of phenomenological models Phenomenological models based on the assumption of compositeness are widespread and it seems that every time a particle is discovered it gets categorized as a hadronic molecule, penta-quark, exotic hadron, or so on. At the time of writing, the most recent announcement of new particles from the LHCb experiment immediately announced them as penta and tetra quarks [16; 17; 18]. The usefulness of compositeness in categorizing particles (not to mention the usefulness of compositeness in atomic physics, chemistry and so on) suggests that there must indeed be some underlying truth to the idea. In response to this, firstly, none of the models based on compositeness are exact or rigorous. The quark model can explain the quantum numbers of a particular bound state but can't be used to explain other quantities such as mass or form factor. These models also haven't been derived directly from the underlying field theory [19] and are just (well-motivated) constructions. Without the rigor that would necessarily accompany an _ab inito_ derivation, these models can't be taken seriously to provide a deep understanding into the nature of compositeness. I discuss the effectiveness of approximate composite models later in section 6. Secondly, I would also claim that such models beg the question. There is more than sufficient freedom in constructing composite models that they cover the phenomenological landscape5. This lack of falsifiability makes it hard to view agreement with observation as evidence for compositeness. These composite models are useful for categorization, but not for understanding the underlying physics. Footnote 5: especially since they only explain the quantum numbers ### Objection 2: Quantum numbers as fundamental building blocks Along similar lines to composite models, we could take the quantum numbers to elucidate the constituents of a given bound state. For example, a helium nucleus has baryon number 4 and so is made from 4 baryons, it has charge +2 so it must contain 2 positive and 2 neutral baryons - i.e. 2 protons and 2 neutrons. It is not necessarily concerning that the constraints imposed by the set of quantum numbers don't uniquely specify the constituents, a bound state could always be a superposition of different constituents. This would, however, lead to a notion of compositeness that is reflexive. As an extreme example, a neutron star would be considered a composite state of many neutrons, but in reverse, a neutron would be a composite object made of a neutron star and a (slightly smaller) anti-neutron star. It might be possible to break this reflexivity by choosing a set of particles that span all conserved quantum numbers and conclude that all bound states are composite objects of this subset of particles. Although, I would argue that this choice is arbitrary - even if there are intuitive choices for this set of particles - defining the set would still be an external choice imposed upon the underlying field theory. Even if a consensus on which particles get chosen to be part of this fundamental set of building blocks could be reached, it wouldn't completely save our everyday notion of compositeness. 
Fully rejecting compositeness would lead to all bound states being ontologically equivalent; by instead choosing a fixed subset of fundamental particles - say we choose the particles stable in the full standard model: protons, neutrons and electrons in this set - then all bound states would be made out of these and only these. A chair could be considered a bound state of protons, neutrons and electrons, but there would still be no satisfactory way to conclude a chair is made of atoms or molecules (or even quarks) without running into all the problems above. ### Objection 3: Bethe-Salpeter wavefunctions and the Weinberg Compositeness Criterion This section contains two objections, even though they are separate, the mathematics behind them is very similar and hence my response to them will be related. I will illustrate these objections using Quantum Electrodynamics, however, none of the specifics of the theory are actually relevant. In the \(e^{+}e^{-}\) channel, the full interacting Hamiltonian \(H\) contains a near-threshold6 bound state, Positronium. We denote a Positronium state with fixed 3 momentum \(\mathbf{P}\)\(|B,\mathbf{P}\rangle\) (the B is for **B**ound state as \(\mathbf{P}\) is stolen by momentum), this obeys the eigenvalue equation: Footnote 6: The near-threshold doesn’t actually affect the overall logic, but only near-threshold bound states can be well described by perturbation theory. \[H|B,\mathbf{P}\rangle=\sqrt{M_{B}^{2}+\mathbf{P}^{2}}|B,\mathbf{P}\rangle. \tag{10}\] We can use this to define a set of Bethe-Salpeter (BS) wavefunctions for the constituent particles. If \(\phi_{e^{+}}(x)\) and \(\phi_{e^{-}}(x)\) are interpolating operators with the quantum numbers of a positron and electron respectively then the momentum-space BS wavefuntion is defined as: \[\Phi^{\mathbf{P}}_{e^{+},e^{-}}(\mathbf{q},\mathbf{q}^{\prime})\delta(\mathbf{P}-\mathbf{q}-\mathbf{q} ^{\prime})=\int d^{3}\mathbf{x}d^{3}\mathbf{y}\ e^{i\mathbf{q}\cdot\mathbf{x}}e^{i\mathbf{q}^{ \prime}\cdot\mathbf{y}}\langle 0|\phi_{e^{+}}(t=0,\mathbf{x})\phi_{e^{-}}(t=0,\mathbf{y})|B, \mathbf{P}\rangle. \tag{11}\] These can be constructed for any set of particles that has the combined total quantum numbers of the desired bound state. These BS wavefunctions can be interpreted as encoding the momentum of the individual constituents7. Footnote 7: They are not the standard wavefunctions from NRQM as they cannot be given a probabilistic interpretation. Decomposing the full Hamiltonian into a free and interacting part, \(H=H_{0}+V\), the eigenstates of \(H_{0}\) form a basis that is the Fock space constructed from free particles. We can expand the bound state above in terms of these free multi-particle states: \[\begin{split}|B,\mathbf{P}\rangle&=\int d^{3}\mathbf{\bar{ q}}\ \Phi^{\mathbf{P}}_{e^{+}e^{-}}(\mathbf{P}-\mathbf{\bar{q}},\mathbf{\bar{q}})\ |e^{+}(\mathbf{P}-\mathbf{\bar{q}})e^{-}(\mathbf{\bar{q}}) \rangle_{0}\\ &\quad+\int d^{3}\mathbf{\bar{q}}d^{3}\mathbf{\bar{q}}^{\prime}\ \Phi^{\mathbf{P}}_{e^{+}e^{-} \gamma}(\mathbf{P}-\mathbf{\bar{q}}-\mathbf{\bar{q}}^{\prime},\mathbf{\bar{q}},\mathbf{\bar{q}}^{ \prime})|e^{+}(\mathbf{P}-\mathbf{\bar{q}}-\mathbf{\bar{q}}^{\prime})e^{-}(\mathbf{\bar{q}}) \gamma(\mathbf{\bar{q}}^{\prime})\rangle_{0}+\ldots\end{split} \tag{12}\] Where we have included the subscript \({}_{0}\) as a reminder that the Fock states are eigenstates of the free Hamiltonian. 
The BS wavefunctions are therefore the probability amplitude for the bound state to contain a certain set of free particles. The proof of this result can be found in appendix A. In short, the BS wavefunction tells us the behaviour of the constituent parts. Further details of this idea can be found in refs. [1, 20, 21]. A similar objection is related to an idea from Weinberg which has come to be known as the Weinberg Compositeness Criterion - further details can be found in [22, 23]. In the above example, it is perfectly possible for the eigenstates of the free Hamiltonian to include a free positronium state \(|B,\mathbf{P}\rangle_{0}\) - even though this is not usual for peturbative QED calculations. He defines a quantity \(Z\) as the overlap between the interacting bound-state and the equivalent free state: \[Z:=|\langle B,\mathbf{P}|B,\mathbf{P}\rangle_{0}|^{2}\quad\implies\quad 1-Z=\int_{ \text{multi-particle states, }\alpha}d\alpha\ |\langle B|\alpha\rangle_{0}|^{2}. \tag{10}\] If \(Z\approx 0\) then the bound state couples mainly to the multi-particle states and so is composite and if \(Z\approx 1\) then it couples strongly to the single particle state and is elementary. For weakly bound states (such that the binding energy is small) he showed that this \(Z\) can be related to the S-wave scattering length: \[a_{0}\propto\frac{2(1-Z)}{2-Z} \tag{11}\] making compositeness directly observable. In summary, it seems like it is possible to expand a bound state in terms of free particle states, with this expansion containing information about the composite structure. Furthermore, the compositeness of a bound state can, for weakly bound states, be calculated from the scattering length. Both of these objections rely on a decomposing the Hamiltonian into free and interacting pieces. Despite the excellent successes of peturbative QED calculations, this decomposition is often not mathematically well defined. More rigorous treatments of scattering theory, such as that given in chapter 9 of [1], bypass this decomposition altogether and instead the "free-ness" of multi-particle states in the asymptotic past is defined in terms of transformation properties and the inner-product. There are two responses to the objections above, firstly there is freedom in choosing the free Hamiltonian8 which results in the constituents of a given bound state being dependant on this choice and secondly I will show that this result is reflexive and that we can just as easily expand the electron in term of a Fock space that includes positronium. Footnote 8: Although not mentioned in referenced papers, Weinberg textbook [24] does enforce that the free Hamiltonian does have the same spectrum as the interacting Hamiltonian To show how much freedom we have when decomposing the Hamiltonian we can choose to construct the free single-particle sector out of the single-particle states of the full Hamiltonian. Again, sticking with the positronium channel of QED, the free Hamiltonian can be constructed as9: Footnote 9: Many of the factors are a choice of normalisation, I use the choice \(\langle\mathbf{p}|\mathbf{p}^{\prime}\rangle=2\omega_{\mathbf{p}}(2\pi)^{3}\delta(\mathbf{p}- \mathbf{p}^{\prime})\) \[H_{0}^{single}=\int\frac{d^{3}\mathbf{p}}{2(2\pi)^{3}}\ \left(|e^{+},\mathbf{p} \rangle\langle e^{+},\mathbf{p}|+|e^{-},\mathbf{p}\rangle\langle e^{-},\mathbf{p}|+|B,\bm {p}\rangle\langle B,\mathbf{p}|+\dots\right) \tag{12}\] where the multi-particle Fock Space is constructed from tensor products of these states. 
Different Fock states are orthogonal and hence the BS wavefunction expansion given in equation 5.3 will take the form \[|B,\mathbf{P}\rangle=|B,\mathbf{P}\rangle+0\times\ldots \tag{5.7}\] This construction would automatically set Weinberg's compositeness factor in equation 5.4 to \(Z=1\). Using the expression for the scattering length in equation 5.5, \(Z=1\) would seem to imply that \(a_{0}=1\). The freedom we have in \(H_{0}\), which affects the value of \(Z\), has an effect on the observed scattering length. Although I don't go into the full derivation here, this oddity can be reconciled. Weinberg's derivation is perturbative and makes assumptions about the relative strength of the \(2\to 2\) vertex relative to the \(2\to 1\) vertex (equation 29 of [23]). The strength of these vertices is also dependent on the choice made when defining \(H_{0}\) and therefore equation 5.5 is only valid for particular decompositions of the Hamiltonian. The reflexivity in these constructions comes from noting that being able to expand (fully-dressed) positronium in terms of (bare) electrons and (bare) positrons doesn't rule out the possibility of expanding the (fully-dressed) electron in terms of (bare) positronium and other states. Alternatively, the expansion of the (fully-dressed) electron will contain (bare) positrons and then the expansion of (fully-dressed) positrons will contain (bare) electrons. The repeated use of brackets indicating which quantities are bare or fully-dressed, may seem overkill, but is needed to emphasise the difference between the two types of quantities. If we reject the association of bare states with physical particles the BS wavefunction expansion can't be used to provide information about constituents. In order to view the expansion as pertaining to compositeness, we must accept some association between bare and fully-dressed states. For the second response, the value of \(Z\) for positronium doesn't necessarily constrain the value of \(Z\) for the electron. In equation 19 of [23], Weinberg shows that \(Z\) obeys the relation: \[1-Z=\int d\alpha\frac{\mid_{0}\langle\alpha|V|B\rangle\mid^{2}}{(E_{\alpha}+B )^{2}} \tag{5.8}\] and therefore if the bound state is well below the relevant multi-particle threshold then \(Z\approx 1\) - that is composite objects are most prevalent just below threshold. Firstly, this doesn't enforce \(Z\equiv 1\). Considering the electron as a bound state that contains Positronium, the electron would be noticeably below the multi-particle threshold however would still partly consist of positronium (just less so than positronium would consist of an electron). Secondly, comparing the \(Z\) values for different states in this way, saying that \(1-Z\) scales as the inverse of binding energy, assumes the leading order 3-point vertex is constant in all cases. This assumption does not necessarily hold over the energy scales we are discussing - after all, a major result of renormalization is the scale dependence of the coupling. ## 6 An Approximate Notion of Compositeness The world looks composite of course: Ikea chairs are clearly made of planks of wood and screws and condensed matter physics has made amazing progress in explaining the properties of materials in terms of the atoms they are made of. Individual atoms in a crystal lattice have even been photographed [25]. In terms of measurable results, what matters is not association with a particular S-matrix pole, but interactions with the measuring device. 
Taking the electromagnetic force as an example - it is after all the force that most directly affects our experience - the interaction depends on quantities like charge density. If a slightly mischievous deity were to replace every electron in an Ikea chair leg with a muon - and constantly interfere with and control the motion of these muons such that they obey the same dynamics as the electrons - then the chair would look the same to any shopper that walks past. The muonic chair would have the same interaction with the electromagnetic field as a standard chair and hence to any instrument detecting electromagnetic radiation (like our eyes), the two chairs would be indistinguishable. Compositeness is most useful when the object is weakly bound, that is the binding energy \(B\) is much less than the total rest mass \(M\)10. I claimed earlier that compositeness is well-defined in non-relativistic quantum mechanics, which has no requirements on the interaction strength (and hence no requirements on the binding energy). Compositeness breaks down when QFT becomes necessary, that is, the relativistic regime. However, the expected speed of the constituents of a bound state scale with the binding energy. This can be seen classically by considering two oppositely charged objects placed infinitely far apart, as they come together and orbit each other, their speed will increase with the charge. Alternatively, in non-relativistic quantum mechanics, the expected energy of the electron in a coulomb potential scales as \(Z^{2}e^{4}\). Footnote 10: We saw when discussing the Weinberg composites criterion that \(Z\approx 0\) when the composite particle is near threshold. When \(B=0\) (or negative) then the object is not bound at all and is actually two separate objects, just below the multi-particle threshold there is a sliding scale of how composite something appears. Starting with an everyday example of screwing some Ikea chair legs together, the binding energy is on the order of maybe a couple of Joules but the rest mass is on the order of \(10^{18}\)J. As the binding gets stronger we enter the realm of condensed matter and chemistry, here the "atoms" can still be resolved from scattering experiments or electron microscopy [26]. Getting stronger, objects like hadronic molecules show some signs of compositeness but this must be otherwise inferred [5]. Finally, the strongly bound quarks "inside" a pion are almost entirely best understood as a metaphor for categorizing the quantum numbers of the hadron. An explicit example of this can be found in ref. [21], the author shows that the Schrodinger equation for the electron-positron constituents of a positronium bound state via an expansion in terms of the momenta of the individual electrons and positrons (which scales as \(p\sim\alpha m_{e}\) where \(\alpha\) is the QED coupling). ### Can the Dennett Criterion save compositeness? For observational purposes, the localized charge densities in a molecule or a sheet of metal are identifiable with atoms. The work of a condensed matter physicist will not be directly affected by this work - atoms would still be a physical part of their models. 
The Dennett Criterion nicely encapsulates when something can be considered a real part of a particular model of the world (the criterion was formalised by Wallace in [27] and was based off Dennett [28]): _A macro-object is a pattern, and the existence of a pattern as a real thing depends on the usefulness -- in particular, the explanatory power and predictive reliability -- of theories which admit that pattern in their ontology._ This doesn't quite save compositeness. Figure 3 shows a series of frames made of small black and white squares. In most of them you should be able to make out a larger checkerboard pattern, some more obvious than others. When developing a theory that predicts the location of the little black and white squares, it would seem a good idea to start with the larger checkerboard. According to the Dennett Criterion, the acceptance of this checkerboard pattern as real depends on the margin of error you want from the theory. A fundamental theory (which should have no margin for error if it describes everything) will have to go deeper and will not contain this checkerboard pattern as part of its ontology. ## 7 The problem of ordinary objects There are well-established philosophical problems that accompany our common-sense understanding of what defines the term "object" - a summary of these problems can be found in ref. [9]. I show that, at least for some of these problems, rejecting compositeness and defining an "object" as a pole in an S-matrix11 resolves them. Footnote 11: The word "object" has a much broader range of validity, but this definition creates an ontological difference between different types of objects ### Problem Of The Many P1) Call the chair you are sitting on "Chris". Now consider an object consisting of all of Chris except for one particular plastic molecule, called Molly. The new chair, with Molly removed, is called Chris Jr. P2) Chris, Chris Jr and Molly all exist. C) You are sitting on (at least) 3 different objects. \(\therefore\). Figure 3: Six different objects, in each case the ability to describe the pattern as a checkerboard varies. Whether or not we want to take the existence of the pattern as real depends on our desired margin of error [28]. The problem here is that if an object like a chair is just a collection of atoms, then you can choose to group the constituent atoms in any way you like - this leads to any ordinary object being spatially coincident with an extremely large number of different sets of objects. By rejecting compositeness as a well-defined concept, it no longer makes sense to abstractly isolate and name one of the specific molecules. ### Trogs You are walking through a forest and you see a frog hopping merrily along by a tree. P1) Both the frog and the tree are just an arrangement of atoms. P2) You define a new object, called a trog, which consists of the frog and the tree. A trog is also just an arrangement of atoms. C) Trogs exist in the same way as trees and frogs. \(\therefore\). My initial objection is, maybe predictably, with premise 1: frogs and trees are poles in an S-matrix and hence cannot be rigorously thought of as being an arrangement of atoms. A trog, however, is not a pole in an S-matrix, and although I am not necessarily against extending the meaning of the word "object" to include entities like trogs, the fact that a trog is not an S-matrix pole means it exists in a different way to trees and frogs. 
An alternative way of seeing this is to note that the argument above rests on the fact that trees, frogs and trogs are all just arrangements of atoms - and all arrangements of atoms exist in the same sort of way - and it is this fact that must be rejected without a rigorous definition of compositeness. ### Material Constitution P1) A piece of clay is made into a statue; both the statue and the piece of clay exist. If both of these exist, then the piece of clay is equivalent to the statue. P2) Statues and pieces of clay have different properties, and if they have different properties then they cannot be equivalent. C) There is a contradiction between these two points. \(\therefore\). In many ways this problem highlights the difficulty of relating particulars and universals - something I won't get into - however looking at this problem with the framework we have built up is insightful. As the piece of clay gets moulded there is not a continuous process from clay to statue; instead, the piece of clay jumps from pole to pole. Each of these jumps causes one object to stop existing and a new one to start existing. There is still the difficulty of categorizing some of the poles as pieces of clay and some as statues - but as we saw at the start of this paper, categorization based on phenomenology is not a good path to fundamental understanding. ## 8 Conclusion To sum up, compositeness is not a rigorous notion within the framework of Quantum Field Theory. Taking the mathematical structure of QFT seriously, we find that all bound states are ontologically equivalent and that different ways of trying to define the constituent parts of a bound state are either arbitrary or reflexive. An approximate notion of compositeness can be recovered: as the mass of a bound state approaches the multi-particle threshold, the different quantum number densities (flavour, charge etc.) approach the sum of the densities of the two constituents. As these densities determine the interaction, the bound state interacts almost as if it were a set of separate constituent parts. This may allow compositeness in higher-level disciplines such as chemistry or solid-state physics, but it doesn't recover compositeness in a rigorous or exact way. Finally, we saw the consequences of rejecting compositeness on the philosophical problems of ordinary objects and found that many of the problems get resolved. Although defining the term "object" to refer to a pole in an S-matrix is maybe too restrictive, it does highlight a difference between different uses of the term. ## Acknowledgments Firstly I want to thank my PhD supervisor, Dr Maxwell T Hansen, both for many conversations about the technical details and for his support in allowing me to pursue research outside of the focus of my doctorate. I also want to thank Dr David Wallace for the early conversations that really helped get this work off the ground and Dr Paul Hoyer for answering many questions about his work on compositeness in gauge theories. Finally, I want to give a special thank you to Joe Ingram for hating this idea so much that I eventually wrote a paper. ## Appendix A Expanding a bound state in terms of Bethe-Salpeter wavefunctions In this appendix, we prove equation 5.3, which shows that the fully-interacting bound state with momentum \(\mathbf{P}\), \(|B,\mathbf{P}\rangle\), can be expanded as a sum of free particle states with the BS wavefunction denoting the contribution of that specific state. We only calculate the details for the \(e^{+}e^{-}\) terms but the calculation generalizes nicely. 
All interpolation operators will be inserted at equal time, which we will set to be \(t=0\). The most general expansion of the bound state is of the form: \[|B,\mathbf{P}\rangle=\int d^{3}\vec{q}\ f(\mathbf{P}-\vec{\mathbf{q}},\vec{\mathbf{q}})|e^{+} (\mathbf{P}-\vec{\mathbf{q}}),e^{-}(\vec{\mathbf{q}})\rangle_{0}+\text{other particle content}\] (A.1) where \(f\) is some arbitrary function of the momenta of the two particles. Momentum conservation requires that the total momenta of the two-particle state is equal to the momenta of the bound state. Therefore the inner product with a free state of two particles with arbitrary momentum is: \[{}_{0}\langle e^{+}(\mathbf{q})e^{-}(\mathbf{q}^{\prime})|B,\mathbf{P}\rangle= \int d^{3}\vec{\mathbf{q}}\ f(\mathbf{P}-\vec{\mathbf{q}},\vec{\mathbf{q}})\ _{0}\langle\phi_{+}(\mathbf{q})\phi_{-}(\mathbf{q}^{\prime})|e^{+}(\mathbf{P}-\vec{\mathbf{q} }),e^{-}(\vec{\mathbf{q}})\rangle_{0} \tag{10}\] \[= \int d^{3}\vec{\mathbf{q}}\ f(\mathbf{P}-\mathbf{q},\mathbf{q})(2\pi)^{6}\delta( \mathbf{P}-\vec{\mathbf{q}}-\mathbf{q})\delta(\vec{\mathbf{q}}-\mathbf{q}^{\prime})\] (11) \[= f(\mathbf{q},\mathbf{q}^{\prime})(2\pi)^{6}\delta(\mathbf{P}-\mathbf{q}-\mathbf{q}^ {\prime}). \tag{12}\] We have chosen a non-relativistic normalization of states in order to simplify the presentation. We now need to relate \(f\) to the BS wavefunction. Starting with the definition of the momentum-space BS-wavefunction as given in equation 10: \[\Phi^{\mathbf{P}}_{e^{+},e^{-}}(\mathbf{q},\mathbf{q}^{\prime})\delta(\mathbf{P}-\mathbf{q}-\mathbf{q }^{\prime})=\int d^{3}\mathbf{x}d^{3}\mathbf{y}\ e^{i\mathbf{q}\cdot\mathbf{x}}e^{i\mathbf{q}^{ \prime}\cdot\mathbf{y}}\langle 0|\phi_{e^{+}}(\mathbf{x})\phi_{e^{-}}(\mathbf{y})|B,\mathbf{P}\rangle \tag{13}\] This is equivalent to the definition given in [1] except we have Fourier transformed to momentum space and included an extra delta to account for having two free coordinates12. The RHS has an implicit delta function coming from translation invariance and hence this is included on the left as well. Footnote 12: In the reference, they pick a coordinate system where the particles are at \(\pm x/2\), or equivalently they both have equal momenta The interpolation operators are inserted at equal time, which we set to \(t=0\), as the energy is determined by the particle content of the state. Insert a complete set of non-interacting states between the interpolation operators and the bound state. The only non-interacting states that couple to the interpolation operators include an \(e^{+},\ e^{-}\) pair \[\Phi^{\mathbf{P}}_{e^{+},e^{-}}(\mathbf{q},\mathbf{q}^{\prime})\delta(\mathbf{P}- \mathbf{q}-\mathbf{q}^{\prime})= \int\frac{d^{3}\vec{\mathbf{q}}}{(2\pi)^{3}}\frac{d^{3}\vec{\mathbf{q}}^ {\prime}}{(2\pi)^{3}}\int d^{3}\mathbf{x}d^{3}\mathbf{y}\ e^{i\mathbf{q}\cdot\mathbf{x}}e^{i \mathbf{q}^{\prime}\cdot\mathbf{y}} \tag{14}\] \[\times\langle 0|\phi_{e^{+}}(\mathbf{x})\phi_{e^{-}}(\mathbf{y})|e^{+}( \bar{q})e^{-}(\bar{q}^{\prime})\rangle_{0}\ _{0}\langle e^{+}(\bar{q})e^{-}(\bar{q}^{ \prime})|B,\mathbf{P}\rangle.\] As the states are free we have the following result \[\langle 0|\phi_{e^{+}}(\mathbf{x})\phi_{e^{-}}(\mathbf{y})|e^{+}(\bar{q})e^{-}(\bar{q}^{ \prime})\rangle_{0}=e^{-i\mathbf{x}\cdot\vec{\mathbf{q}}}e^{-i\mathbf{y}\cdot\vec{\mathbf{q}}^ {\prime}} \tag{15}\] where the interpolation operators are assumed to have unit normalization. 
This becomes \[\Phi^{\mathbf{P}}_{e^{+},e^{-}}(\mathbf{q},\mathbf{q}^{\prime})\delta(\mathbf{P}- \mathbf{q}-\mathbf{q}^{\prime})= \int\frac{d^{3}\vec{\mathbf{q}}}{(2\pi)^{3}}\frac{d^{3}\vec{\mathbf{q}}^ {\prime}}{(2\pi)^{3}}\int d^{3}\mathbf{x}d^{3}\mathbf{y}\ e^{i(\mathbf{q}-\vec{\mathbf{q}}) \cdot\mathbf{x}}e^{i(\mathbf{q}^{\prime}-\vec{\mathbf{q}}^{\prime})\cdot\mathbf{y}} \tag{16}\] \[\times\ _{0}\langle e^{+}(\vec{\mathbf{q}})e^{-}(\vec{\mathbf{q}}^{\prime})|B, \mathbf{P}\rangle\] \[=\ _{0}\langle e^{+}(\mathbf{q})e^{-}(\mathbf{q}^{\prime})|B,\mathbf{P}\rangle\] (17) \[\bigg{(}= \delta(\mathbf{P}-\mathbf{q}-\mathbf{q}^{\prime})\ _{0}\langle e^{+}(\mathbf{q})e^{-}(\mathbf{P}- \mathbf{q})|B,\mathbf{P}\rangle\bigg{)}. \tag{18}\] Comparing this to equation A.4 we find that: \[f(\mathbf{q},\mathbf{q}^{\prime})=\Phi^{\mathbf{P}}_{e^{+},e^{-}}(\mathbf{q},\mathbf{q}^{\prime})\] (A.11) and thus the bound state can be written as an expansion in terms of free particle states where the coefficient is proportional to the BS wavefunction.
2309.09195
SplitEE: Early Exit in Deep Neural Networks with Split Computing
Deep Neural Networks (DNNs) have drawn attention because of their outstanding performance on various tasks. However, deploying full-fledged DNNs in resource-constrained devices (edge, mobile, IoT) is difficult due to their large size. To overcome the issue, various approaches are considered, like offloading part of the computation to the cloud for final inference (split computing) or performing the inference at an intermediary layer without passing through all layers (early exits). In this work, we propose combining both approaches by using early exits in split computing. In our approach, we decide up to what depth of DNNs computation to perform on the device (splitting layer) and whether a sample can exit from this layer or need to be offloaded. The decisions are based on a weighted combination of accuracy, computational, and communication costs. We develop an algorithm named SplitEE to learn an optimal policy. Since pre-trained DNNs are often deployed in new domains where the ground truths may be unavailable and samples arrive in a streaming fashion, SplitEE works in an online and unsupervised setup. We extensively perform experiments on five different datasets. SplitEE achieves a significant cost reduction ($>50\%$) with a slight drop in accuracy ($<2\%$) as compared to the case when all samples are inferred at the final layer. The anonymized source code is available at \url{https://anonymous.4open.science/r/SplitEE_M-B989/README.md}.
Divya J. Bajpai, Vivek K. Trivedi, Sohan L. Yadav, Manjesh K. Hanawal
2023-09-17T07:48:22Z
http://arxiv.org/abs/2309.09195v1
# SplitEE: Early Exit in Deep Neural Networks with Split Computing ###### Abstract. Deep Neural Networks (DNNs) have drawn attention because of their outstanding performance on various tasks. However, deploying full-fledged DNNs in resource-constrained devices (edge, mobile, IoT) is difficult due to their large size. To overcome the issue, various approaches are considered, like offloading part of the computation to the cloud for final inference (split computing) or performing the inference at an intermediary layer without passing through all layers (early exits). In this work, we propose combining both approaches by using early exits in split computing. In our approach, we decide up to what depth of DNNs computation to perform on the device (splitting layer) and whether a sample can exit from this layer or need to be offloaded. The decisions are based on a weighted combination of accuracy, computational, and communication costs. We develop an algorithm named SplitEE to learn an optimal policy. Since pre-trained DNNs are often deployed in new domains where the ground truths may be unavailable, and samples arrive in a streaming fashion, SplitEE works in an online and unsupervised setup. We extensively perform experiments on five different datasets. SplitEE achieves a significant cost reduction (\(>50\%\)) with a slight drop in accuracy (\(<2\%\)) as compared to the case when all samples are inferred at the final exit. The anonymized source code is available at [https://anonymous.4open.science/r/SplitEE_M-B989/README.md](https://anonymous.4open.science/r/SplitEE_M-B989/README.md).
computational and offloading costs.
The computational cost captures the cost of running DNN layers on the edge device, and the offloading cost captures the cost of communicating the DNN output from the splitting layer to the cloud. We define a reward function as a weighted difference between confidence and the cost incurred. We use the multi-armed bandit framework and set our objective as minimizing the expected cumulative regret, defined as the difference between the cumulative reward obtained by an oracle and that obtained by the algorithm. SplitEE is based on the classical Upper Confidence Bound (UCB) (Cheng et al., 2015) algorithm. We also develop a variant of SplitEE that takes into account the additional information available at the intermediary layers in the form of side observations. We refer to this variant as SplitEE-S. We use the state-of-the-art Early-Exit DNN ElasticBERT for natural language inference as a test bed to evaluate the performance of our algorithms. ElasticBERT is based on the BERT backbone and trains multiple exits on a large text corpus. We extensively evaluate the performance of SplitEE and SplitEE-S on five datasets, _viz._ IMDb, Yelp, SciTail, QQP and SNLI, covering three types of classification tasks - sentiment classification, entailment classification, and semantic equivalence classification. We first prepare an Early-Exit DNN by fine-tuning it on a similar kind of task and then perform inference on an equivalent task with a different distribution in an unsupervised online manner. For instance, we fine-tune ElasticBERT on SST-2, a sentiment classification dataset, and then evaluate SplitEE on Yelp and IMDb, which have review classification tasks. SplitEE finds an optimal splitting layer such that a sample is inferred locally only if it meets the confidence threshold at that layer. In this way, SplitEE infers only 'easy' samples locally, placing less load on mobile devices, and offloads 'hard' samples. SplitEE observes a small performance drop (\(<2\%\) in accuracy) and a \(>50\%\) reduction in cost as compared to the case when all samples are inferred at the final exit. During inference, these DNNs might be applied to a dataset whose latent data distribution differs from that of the dataset used to train the DNN. The optimal splitting layer might differ depending on the latent data distribution. Hence SplitEE adaptively learns the optimal split point by utilizing the confidence available at the exit attached to the splitting layer and the computational cost. Our main contributions are as follows: * We introduce early exits in split computing and propose a learning model. In our model, the decision is to find the split point as well as whether to exit or offload from the split point. * To find the optimal split point, we develop the upper-confidence-bound-based algorithm SplitEE, which decides the splitting layer on the fly without requiring any ground-truth information and achieves sub-linear regret. * We optimize the utilization of resources on the edge and the cloud without sacrificing much accuracy by inferring only easy samples on edge devices. * Using five distinct datasets, we empirically verify that our algorithms significantly reduce costs with a small drop in accuracy compared to the baselines and state-of-the-art algorithms. ## 2. Related Works In this section, we discuss previous works on Split Computing, Early-Exit DNNs and the utilization of DNNs on mobile devices. 
### Split Computing in DNNs Neurosurgeon (Shi et al., 2017) searches for the best splitting layer in a DNN model by minimizing the cost associated with a splitting layer. Split Computing is applied with different approaches. BottleNet (Bottleft et al., 2016) and Bottleftft (Srivastava et al., 2017) introduce a bottleneck in split computing where part of the DNN in the mobile device will encode the sample into a reduced size. The reduced-sized sample is then offloaded. The sample is then decoded and inferred on the cloud. There are multiple training methodologies to encode the input on the mobile device. BottleNet++ (Bottleft et al., 2017) and (Bottleft et al., 2018) perform cross-entropy-based training, Matsubara (Matsubara, 2017) perform knowledge-distillation based training, CDE (Srivastava et al., 2017) and Yao (Yao et al., 2017) perform reconstruction-based training and Matsubara (Matsubara, 2017) perform head network distillation training method to effectively encode the input to offload efficiently. ### Early-exit Neural networks Early-exit DNNs are employed on various tasks. In image classification BranchyNet (Srivastava et al., 2017), among other earlier research, uses classification entropy at each associated exit following each layer to determine whether to infer the sample at the side branch. If the exit point's entropy is below a predetermined threshold, the choice to exit is made. Similarly, SPINN (Shi et al., 2017) and SEE (Srivastava et al., 2017) also use the estimated confidence measure at the exit branch to determine whether to exit early. However, the confidence estimate here is the likelihood of the most likely class. Besides exiting early, works like FlexDNN (Chen et al., 2015) and Edgent (Edsen et al., 2016) focus mainly on the most appropriate DNN depth. Other works, such as Dynexit (Dyneit, 2016), focus on deploying the multi-exit DNN in hardware. It trains and deploys the DNN on Field Programmable Gate Array (FPGA) hardware while Paul _et al._(Paul et al., 2016) explains that implementing a multi-exit DNN on an FPGA board reduces inference time and energy consumption. In the NLP domain, DeeBERT (Denee et al., 2016), ElasticBERT (Denee et al., 2016) and BERxiT (Denee et al., 2016) are transformer-based BERT models. DeeBERT is obtained by training the exit points attached before the last module to the BERT backbone separately while ElasticBERT trains the backbone with attached exits jointly with the final exit. BERxiT proposes a more advanced fine-tuning strategy for the BERT model with attached exits. PABEE(Pabh et al., 2016) and Pece-BERT(Pabh et al., 2016) suggest an early exit depending on the agreement between early-exit classifiers up to a Figure 1. Efficient edge-cloud co-inference setup where part of the layers are executed on the edge device with an option to exit (infer a sample) at the split point and remain on the cloud to infer at the final layer. fixed patience threshold. LeeBERT (Lee et al., 2017) on the other hand applies knowledge distillation across all exit layers rather than just distilling the knowledge prediction from the final layer. ### DNNs in Mobile Devices Pacheco (Pacheco, 2018) utilize both multi-exit DNN and DNN partitioning to offload mobile devices via multi-exit DNNs. Similarly, EPNet (Beng et al., 2017) learns when to exit considering the accuracy-overhead trade-off but in an offline fashion. 
LEE (Shi et al., 2017), DEE (Shi et al., 2017) and UEE-UCB (Beng et al., 2018) utilize the multi-armed bandit framework to learn the optimal exit. However, they do not have the option to offload and infer only at mobile devices after finding the optimal exit. LEE and DEE provide efficient DNN inference tasks for mobile devices in scenarios such as service outages. Both LEE and DEE assume that the utility is revealed which depends on the ground truth labels. LEE and DEE use the classical UCB1 (Beng et al., 2017) algorithm to learn the optimal exit. UEE-UCB learns the optimal exit in a setup similar to ours, however, it does not have the option to offload. It finds the optimal splitting layer and infers all the samples through the mobile device. It also assumes that the intermediary layers follow the strong dominance property. Following are major differences between our setup in comparison with the previous setups: 1) We take into account both the computational and communication costs in addition to accuracy in deciding the splitting layer, whereas the previous works on split computing considered only the communication cost, while the early exit work considered only computational costs along with the accuracy. 2) Our work is completely in an unsupervised online setup as it does not require any ground truth information. 3) For each sample, we use the contextual information (confidence) to decide whether to exit or offload at the splitting layer dynamically. Table 2 provides a direct comparison to state-of-arts. ## 3. Problem Setup We are given a pre-trained DNN with \(L\) layers with attached exits after every layer. We index the layers using the set \([L]=\{1,2,\ldots L\}\). We consider classification tasks with a target class \(\mathcal{C}\). For an input \(x\) and layer \(i\in[L]\), let \(\hat{P}_{i}(c)\) denote the estimated probability that \(x\) belongs to class \(c\in\mathcal{C}\). Let \(C_{i}=\max_{c\in\mathcal{C}}\hat{P}_{i}(c)\) denote the confidence of estimated probability class. Input \(x\) is processed sequentially through the DNN. The DNN could be split at any layer \(i\in[L]\), where the layers \(1,2,\ldots,i\) are on the mobile device and the remaining layers, i.e., \(i+1,i+2,\ldots,L\) are on the cloud. In our setup for each sample following two-stage decisions has to be made 1) Where to split the DNN? 2) Whether to exit from the splitting layer offload to the cloud. The decision on where to split the DNN does not depend on the individual samples but on the underlying distribution. Whereas the decision to offload or exit is based on each sample as follows: If the split is at the \(i\)th layer, \(C_{i}(x)\) is computed and compared against a pre-defined threshold \(\alpha\). If \(C_{i}(x)\geq\alpha\), the sample exits and is inferred on the mobile device at the splitting layer, otherwise it is offloaded and inferred at the final layer on the cloud. The cost of using the DNN up to layer \(i\) could be interpreted as the computational cost of processing the sample till layer \(i\) and performing inference. Let \(\gamma_{i}\) be the cost associated with the split performed at the \(i\)th layer. We set \(\gamma_{i}\propto i\) as the amount of computation that depends on the depth of the splitting layer in the DNN. We denote the cost of offloading from mobile to cloud as \(o\). The value of \(o\) across all layers depends on the size of the input and the transmission cost (_e.g._ Wi-Fi, 5G, 4G and 3G). 
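To make the per-sample exit-or-offload rule above concrete, the following minimal Python sketch computes the exit confidence \(C_{i}\) from a softmax output at the splitting layer and applies the threshold test. The function names, the unit per-layer cost \(\lambda\) and the offloading cost used as defaults are placeholders introduced here for illustration, not quantities taken from the authors' implementation.

```python
import numpy as np

def confidence(prob_vector):
    """C_i = max_c P_i(c): confidence of the most likely class at an exit."""
    return float(np.max(prob_vector))

def split_decision(prob_at_split, split_layer, alpha, lam=1.0, offload_cost=5.0):
    """Two-stage rule: run layers 1..split_layer on the edge, exit there if the
    exit confidence clears the threshold alpha, otherwise offload to the cloud.
    Returns the action taken and the cost incurred (in units of lambda)."""
    c_i = confidence(prob_at_split)
    compute_cost = lam * split_layer          # gamma_i proportional to depth i
    if c_i >= alpha:
        return "exit-on-edge", compute_cost
    return "offload-to-cloud", compute_cost + offload_cost

# Hypothetical softmax output of the exit attached to the split layer.
probs = np.array([0.62, 0.30, 0.08])
print(split_decision(probs, split_layer=4, alpha=0.7))   # offloads, since 0.62 < 0.7
```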
We define the reward when the splitting is performed at the \(i\in[L]\) layer as \[r(i)=\left\{\begin{array}{ll}C_{i}-\mu\gamma_{i}&\quad\text{if}\quad C_{i} \geq\alpha\text{ or }i=L\\ C_{L}-\mu(\gamma_{i}+o)&\quad\text{otherwise,}\end{array}\right. \tag{1}\] where \(\mu\) is a conversion factor to express the cost in terms of confidence. \(\mu\) is input by the users depending on their preference for accuracy and computational costs. The reward could be interpreted as follows: if the DNN is confident on the sample in the prediction obtained from the \(i\)th layer, then the reward will be the gain in confidence subtracted by the cost of moving the sample till \(i\)th layer and inferring. If not, then the sample is offloaded to the cloud for inference, where the confidence of \(C_{L}\) is achieved at the last layer, and an additional offloading cost \(o\) is incurred. Observe that if \(i=L\), all the computations are executed on the edge device, and the sample is inferred at \(L\)th layer (without offloading). We define \(i^{*}=\arg\max_{i\in[L]}\mathbb{E}[r(i)]\) which is defined as for \(i\in[L-1]\) as \[\mathbb{E}[r(i)]=\mathbb{E}[C_{i}-\mu\gamma_{i}|C_{i}\geq\alpha] \cdot P[C_{i}\geq\alpha]\\ +\mathbb{E}[C_{L}-\mu(\gamma_{i}+o)|C_{i}<\alpha]\cdot P[C_{i}< \alpha], \tag{2}\] and for the last layer \(L\), it is a constant given as \(\mathbb{E}(r(L))=C_{L}-\mu\gamma_{L}\). The goal is to find an optimal splitting layer \(i^{*}\) such that sample will be inferred at \(i^{*}\) or be offloaded to the cloud for inference. We model the problem of finding the optimal splitting layer as a multi-armed bandit problem (MAB). We define the action set as layer indices in the DNN \(\mathcal{A}=[L]\). Following the terminology of MABs, we also refer to elements of \(\mathcal{A}\) as arms. Consider a policy \(\pi\) that selects arm \(i_{t}\) at time \(t\) based on past observations. We define the cumulative regret of \(\pi\) over \(T\) rounds as \[R(\pi,T)=\sum_{t=1}^{T}\mathbb{E}[r(i^{*})-r(i_{t})] \tag{3}\] where the expectation is with respect to the randomness in the arm selection caused by previous samples. A policy \(\pi^{*}\) is said to be sub-linear if average cumulative regret vanishes, i.e., \(R(\pi^{*},T)/T\to 0\). We experimentally prove that both variants of Algorithm 1 achieves sub-linear regret. ## 4. Algorithm In this section, we develop an algorithm named Split computing with Early Exit (SplitEE). The algorithm is based on the 'optimism' in the face of the uncertainty principle' and uses the upper confidence bounds. In this variant, the inference is performed only at the splitting layer, and the decision to offload or exit is based on confidence in this inference. In the following subsection, we develop another variant named SplitEE-S that makes inferences at each layer and not just at the splitting layer. ### SplitEE The input to this variant is the confidence threshold (\(\alpha\)), the exploration parameter (\(\beta\)), the number of layers (\(L\)), and the computational cost for each layer \(\gamma\) which could be split as \(\gamma=\lambda_{1}+\lambda_{2}\) where \(\lambda_{1}\) could be interpreted as the processing cost whereas \(\lambda_{2}\) is the inference cost at the attached exit. Since we are not utilizing the exits attached to the layer before the chosen splitting layer, hence in this variant, \(\lambda_{2}\) will only be accumulated for the splitting layer selected. 
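Before turning to the pseudo-code, the reward in (1), with the cost split \(\gamma=\lambda_{1}+\lambda_{2}\) described above, and the cumulative regret in (3) can be sketched as follows. The default values of \(\lambda_{1}\), \(\lambda_{2}\), \(\mu\) and the offloading cost are illustrative placeholders rather than values from the paper's code.

```python
import numpy as np

def reward_splitee(i, L, conf_split, conf_final, alpha,
                   mu=0.1, lam1=1.0, lam2=1.0 / 6.0, offload_cost=5.0):
    """Reward of splitting at layer i (Eq. (1)), with gamma_i = lam1*i + lam2:
    lam2, the inference cost of the attached exit, is charged only once,
    at the splitting layer chosen by SplitEE."""
    gamma_i = lam1 * i + lam2
    if conf_split >= alpha or i == L:
        # Confident enough (or already at the last layer): infer on the edge.
        return conf_split - mu * gamma_i
    # Otherwise offload: final-layer confidence, plus the offloading cost o.
    return conf_final - mu * (gamma_i + offload_cost)

def cumulative_regret(oracle_rewards, chosen_rewards):
    """Empirical counterpart of Eq. (3): total reward gap to the oracle."""
    return float(np.sum(np.asarray(oracle_rewards) - np.asarray(chosen_rewards)))
```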
``` 1:Input:\(\alpha\) (threshold), \(\beta\geq 1,L,\ \text{cost}\ \gamma_{i}\ \forall i\in[L]\) 2:Initialize:\(Q(i)\gets 0,N(i)\gets 0\). 3:Initialize by playing each arm once. 4:for\(t=|L|+1,|L|+2,\ldots\)do 5: Observe an instance \(x_{t}\) 6:\(i_{t}\leftarrow\arg\max_{i\in[L]}\left(Q(i)+\beta\sqrt{\frac{\ln(t)}{N(i)}}\right)\) 7: Pass \(x_{t}\) till layer \(i_{t}\), apply threshold \(\alpha\) and observe \(C_{i_{t}}\) 8:if\(C_{i_{t}}\geq\alpha\)then 9: Infer at layer \(i_{t}\) and exit 10:\(r_{t}(i_{t})\gets C_{i_{t}}(x_{t})-\mu y_{i_{t}}\), \(N_{t}(i_{t})\gets N_{t-1}(i_{t})+1\) 11:\(Q_{t}(i_{t})\leftarrow\sum_{j=1}^{t}r_{j}(k)\mathbbm{1}_{\{k=i_{t}\}}/N_{t}(i_ {t})\) 12:else 13: Offload to the last layer. Observe \(C_{L}\) 14:\(r_{t}(i_{t})\gets C_{L}(x_{t})-\mu(Y_{t}+o)\), \(N_{t}(i_{t})\gets N_{t-1}(i_{t})+1\) 15:\(Q_{t}(i_{t})\leftarrow\sum_{j=1}^{t}r_{j}(k)\mathbbm{1}_{\{k=i_{t}\}}/N_{t}(i_ {t})\) 16: Infer at the last layer 17:endif 18:endfor ``` **Algorithm 1** SplitEE The pseudo-code of the SplitEE is given in Algorithm 1. The algorithm works as follows: It plays each arm once for the first \(L\) instances to obtain the rewards \(Q(i)\) and the counters \(N(i)\) for each layer once. Then it plays an arm \(i_{t}\) that maximises the UCB index (line 6) in succeeding rounds. The weighted sum of the empirical average of the rewards \(Q(i)\) and confidence bonuses is used to create UCB indices. If the confidence at layer \(i_{t}\) is above the threshold \(\alpha\), the sample exits the DNN; else it is offloaded to the cloud with additional cost \(o\). Following analysis of UCB1(Cab discarding all exits, the ElasticBERT-base model's backbone with learnt weights remains. For more details of the training procedure, we refer to (Kumar et al., 2018). After having a pre-trained model backbone, we attach task-specific exits (_e.g._ classification heads) after all transformer layers along the backbone and fine-tune it using a labeled dataset (from a similar domain to the evaluation dataset). Sentence-level representations for sentence-level tasks are learned using the \([CLS]\) token. After each transformer layer, this token representation is connected to the classification heads. A sketch of the entire training procedure is provided in figure 2. \(w_{1}\), \(w_{2}\), \(w_{3}\) and \(w_{4}\) represent token embeddings of the given input sequence. The head is attached to produce a representation that can be compared with the task label to compute the loss. Cross-entropy loss is the loss function that we select. Using learnable weights, the attached classification heads transform the \([CLS]\) token's \(q\)-dimensional vector representation into a probability vector for direct comparison with the task label. ### Experimental setup In this section, we explain the experimental setup and details of SplitEE. We have three major steps in the experimental setup which are summarized below: **i) Training the backbone:** Initially, we train the ElasticBERT-base model with MLM and SOP heads attached after every transformer layer of the BERT-base model. After training on a large text corpus, we remove the MLM and SOP heads from the ElasticBERT model, leaving only the backbone. We directly import weights of the learned backbone, hence this part does not need any computation. **ii) Fine-tuning and learning weights (Supervised):** In the backbone obtained by step (i), we attach task-specific exits (heads) after each transformer layer, and to learn weights for these heads, we perform supervised training. 
Since we assume that we do not have labels for the evaluation task. We utilize a labeled dataset with a similar kind of task but with a different distribution or from a different domain. For example, we evaluate SplitEE on IMDb and Yelp datasets which are review classification datasets and learn the weights for heads using the SST-2 dataset which has a similar task of sentiment classification but with different latent data distribution. **iii) Online learning of optimal splitting layer (Unsupervised):** Finally we use weights from step (2) to learn the optimal splitting layer in an unsupervised and online setup for the evaluation tasks. We perform this step after the model has been deployed and ready for inference. We perform experiments on single NVIDIA RTX 2070 GPU. Part (i) does not require any computation as we can directly import the weights from the backbone. Part (ii) takes a maximum of 10 GPU hour of runtime (on the MNLI dataset). Part (iii) is not computationally involved and could be executed in \(<\) 1 hour of CPU runtime and does not requires GPU support on a single run. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **E. Data** & **\#Samples** & **FT Data** & **\#Samples** \\ \hline IMDb & 25K & SST-2 & 68K \\ \hline Yelp & 560K & SST-2 & 68K \\ \hline SciTail & 24K & RTE & 2.5K \\ \hline QQP & 365K & MRPC & 4K \\ \hline SNLI & 550K & MNLI & 433K \\ \hline \end{tabular} \end{table} Table 1. Information about the size of datasets. FT data is the dataset used to prepare the ElasticBERT backbone for the corresponding task and #Samples is the number of samples in the dataset. E.data is the evaluation dataset. Figure 4. Cost (in \(10^{4}\times\lambda\) units) for different offloading cost (SplitEE) Figure 3. Accuracy for different offloading costs (\(o\)) (SplitEE) We evaluated SplitEE on five datasets covering three types of classification tasks. The datasets used for evaluation are: 1. **Review classification on IMDb (Kumar et al., 2017) and Yelp (Bang et al., 2017)**: IMDb is a movie review classification dataset and Yelp consists of reviews from various domains such as hotels, restaurants etc. For these two datasets, ElasticBERT is finetuned on **SST-2 (Stanford Sentiment classification)** dataset which is also a sentiment classification dataset. 2. **Entailment classification on SciTail:** SciTail is an entailment classification dataset created from multiple questions from science and exams and web sentences. To evaluate SplitEE on SciTail, it is finetuned on **RTE(Recognizing Textual Entailment)** dataset which is also an entailment classification dataset. 3. **Entailment classification on SNLI(Stanford Natural Language Inference) (Multi-class)**: SNLI is a collection of human-written English sentence pairs manually labelled for balanced classification with labels _entailment, contradiction_ and _neutral_. For evaluation of this dataset, ElasticBERT is finetuned on **MNLI(Multi-Genre Natural Language Inference)** which also contains sentence pairs as premise and hypothesis, the task is the same as for SNLI. 4. **Semantic equivalence classification on QQP(Quora Question Pairs)**: QQP is a semantic equivalence classification dataset which contains question pairs from the community question-answering website Quora. For this task, we finetuned ElasticBERT on **MRPC(Microsoft Research Paraphrase Corpus)** dataset which also has a semantic equivalence task of a sentence pair extracted from online news sources. Details about the size of these datasets are in table 1. 
Observe from the table that the size of the dataset used for fine-tuning is much smaller as compared to the size of the corresponding evaluation dataset. We do not split the evaluation dataset. Except for IMDb and Yelp, other datasets are a part of ELUE (Kumar et al., 2017) and GLUE (Kumar et al., 2017) benchmark datasets. We attach exits after every transformer layer in the ElasticBERT model. The predefined threshold \(\alpha\) is directly taken from the ElasticBERT model which utilizes the validation split of fine-tuning data to learn the best threshold. The choice of action set depends on the number of layers of the DNN being used. The action set is \(\mathcal{A}=[L]\) and for ElasticBERT \(L=12\). We have two types of costs: Computational cost and Offloading cost. As explained in section 3, the computational cost is proportional to the number of layers processed i.e. \(\gamma_{i}=\lambda i\) where \(\lambda\) could be interpreted as per-layer computational cost. We can split \(\lambda=\lambda_{1}+\lambda_{2}\), where \(\lambda_{1}\) and \(\lambda_{2}\) resemble the per-layer processing cost and per-layer inference cost respectively. We relate \(\lambda_{1}\) and \(\lambda_{2}\) in terms of the number of matrix multiplications required to process and infer. We observe that \(\lambda_{2}=\lambda_{1}/6\) (5 matrix multiplications are needed for processing and 1 for inferring). Hence the cost for SplitEE-S will be \(\lambda i\) and \(\lambda_{1}i+\lambda_{2}\) for SplitEE if \(i\)th layer is chosen as the splitting layer. Since offloading cost is also user-defined and depends on the communication network used (e.g. 3G, 4G, 5G and Wi-Fi). Hence in the experiments, we provide results on different offloading costs \(o\) from the set \(\{\lambda,2\lambda,3\lambda,4\lambda,5\lambda\}\) as it is user-defined. With increasing stages of broadband mobile communication powers, we observe that offloading cost is at most five times the per-layer computational cost. For more details on how to compute the offloading cost, we refer to (Kumar et al., 2017). For table 2, we use the fixed offloading cost as \(o=5\lambda\) (highest offloading cost). Without loss of generality, we choose \(\lambda=1\) for conducting all the experiments. The cost accumulated is however Figure 5. Accuracy for different offloading costs (\(o\)) (SplitEE-S) Figure 6. Cost (in \(10^{4}\times\lambda\) units) for different offloading cost (SplitEE-S) left in terms of \(\lambda\) which is user-specific as all the cost is now in terms of \(\lambda\). The trade-off factor \(\mu\) ensures that the reward function gives similar preferences to cost as well as accuracy. For the algorithm, we choose \(\mu=0.1\) to directly compare confidence and cost. We repeat each experiment 20 times and in each run the samples are randomly reshuffled and then fed to the algorithm in an online manner. In each round, the algorithm chooses a splitting layer and accumulates the regret if the choice is not optimal. We plot the expected cumulative regret in figure 7. The accuracy and cost reported for SplitEE and SplitEE-S is computed considering the chosen splitting layer prediction in each round (i.e. for every sample) and then per-sample averaged for 20 runs. ### Baselines **1)** **DeeBERT**: Similar to our setup, we fine-tune DeeBERT and then perform inference on the evaluation dataset. 
DeeBERT prepares the early exit model in two steps: (1) It learns the general weights and embeddings for the BERT backbone using the loss function attached only at the final layer, this part is similar to BERT fine-tuning. (2) After freezing the weights, it attaches a loss function after every transformer layer except the final layer. Note that DeeBERT does not have the option to offload. DeeBERT uses the entropy of the predicted vector as confidence. We fine-tune the entropy threshold in a similar fashion as used by DeeBERT. Since it does not make any difference, hence we keep the confidence of DeeBERT as the entropy of the predicted vector. Other parameters are kept the same as used by DeeBERT. **2)** **ElasticBERT**: is also based on the BERT-base model, the only difference is ElasticBERT is jointly trained by attaching MLM and SOP heads after every transformer layer, Once the model is trained, it removes the heads leaving the backbone. More details are in section 5.1 and figure 2. All the parameters are kept the same as the ElasticBERT setup. **3) Random selection**: In random selection, we select a random exit point and then process the sample till chosen exit, if the confidence at chosen exit is above the threshold then exit, else offload. Then we calculate the cost and accuracy. We report the average accuracy and cost by running the above procedure 20 times. **4) Final exit**: In this case, we process the sample till the final layer for inference. This setup has a constant cost of \(\lambda L\). This baseline is similar to the basic inference of neural networks. We also utilize this setup for benchmarking. ### Need for offloading As explained in section 5.2, the maximum possible offloading cost is 5-times the per-layer computational cost. Hence if a sample is not gaining sufficient confidence for classification till a pre-specified layer, we might want to offload it to the cloud. Previous methods process the sample throughout the DNN until it gains sufficient confidence. We observed that processing a sample beyond 6th layer was accumulating more processing cost than the offloading cost. While experimenting, we marked that on average DeeBERT processes 51% samples and ElasticBERT processes 35% samples beyond 6th exit layer. These many samples accumulate a large computational cost for edge devices. Since edge devices have fewer resources available, both DeeBERT and ElasticBERT might exploit these resources in terms of battery lifetime depletion and device lifetime. While our setup decides on a splitting layer as a sample arrives. The sample is processed till chosen layer and if the sample gains sufficient confidence it exits the DNN else it offloads to the cloud for inference reducing cost drastically. Additionally, offloading helps in increasing accuracy as the last layer provides more accurate results on samples that were not gaining confidence initially. ### SplitEE and SplitEE-S From figures 3, 4, 5 and 6, we observe that SplitEE and SplitEE-S have comparable performances. However, observe that SplitEE does not utilize confidences of exits prior to chosen splitting layer. Hence we can directly process the sample to the splitting layer reducing the inference cost after each exit. SplitEE-S uses the confidence available at all exits on the edge device to update rewards for multiple arms. The difference between the two is more evident in the regret curve (see fig.7). SplitEE-S curve saturates much earlier than SplitEE. 
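The difference between the two update rules can be seen in the following sketch of the bandit bookkeeping: SplitEE updates only the arm (splitting layer) it played, while SplitEE-S additionally updates the arms of the earlier exits it passed through, using their confidences as side observations. Layers are 0-indexed here, the class name is ours, and the SplitEE-S update shown is one plausible reading of the variant, since its full pseudo-code is not reproduced in this excerpt.

```python
import numpy as np

class UCBSplitter:
    """UCB-style splitting-layer selection (a sketch of Algorithm 1's bookkeeping).
    Q[i] is the empirical mean reward of arm i, N[i] its pull count."""
    def __init__(self, num_layers, beta=1.0):
        self.L, self.beta = num_layers, beta
        self.Q = np.zeros(num_layers)
        self.N = np.zeros(num_layers)

    def select(self, t):
        # Play every arm once, then pick the arm maximising the UCB index.
        if np.any(self.N == 0):
            return int(np.argmin(self.N))
        return int(np.argmax(self.Q + self.beta * np.sqrt(np.log(t) / self.N)))

    def update_single(self, arm, reward):
        """SplitEE: only the chosen splitting layer receives feedback."""
        self.N[arm] += 1
        self.Q[arm] += (reward - self.Q[arm]) / self.N[arm]

    def update_with_side_observations(self, arm, rewards_up_to_arm):
        """SplitEE-S (one reading): every exit up to the split yields a reward
        observation, so all of these arms are updated from a single sample."""
        for j, r in enumerate(rewards_up_to_arm[:arm + 1]):
            self.update_single(j, r)
```

Updating several arms from a single sample is what allows the SplitEE-S regret curve to saturate earlier than that of SplitEE.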
We also observed from the experiments that SplitEE has a larger impact on the reshuffling of the dataset while SplitEE-S is more robust to the reshuffling of the dataset as it needs less number of samples to learn the optimal splitting layer. Since in a real-world scenario, the size of the evaluation dataset might be small and we might need to adapt to changes in the distribution of data fast, in this case, SplitEE-S can be used. However, if the major concern is cost then SplitEE will work better as it further reduces the inference cost (see table 2). Note that DeeBERT and ElasticBERT also incur the inference cost at all exits up to which a sample is processed. ### Analysis with different offloading cost Being user-defined, we analyse the behaviour of accuracy and cost on different offloading costs. Except for the QQP dataset, we observe a drop in accuracy as we increase the offloading cost. We explain the drop as more offloading cost will force more samples to exit early by choosing a deeper exit in DNN. More samples exiting early will make less accurate predictions. Hence initially, when offloading cost is small it exits most samples from initial exits and offloads very few samples. When we increase offloading cost again it just goes deeper to gain confidence for those samples which were offloaded earlier. In terms of cost, it is evident that the cost of SplitEE will go up as we increase the offloading cost. For the QQP dataset, we observe a reverse behaviour as there are very less samples that offload for QQP. We observed that there were many samples that exited the initial layers with miss-classifications (which is also an explanation for the lower cost of ElasticBERT). As we increase the offloading cost SplitEE looks for deeper exit layers to split hence a gain in accuracy. Still, we are always better in terms of cost as well as accuracy when compared to ElasticBERT as shown in figure 4, 6, 3 and 5. Detailed results are in table 2. ### Regret Performance We repeat each experiment 20-times. Each time, a randomly reshuffled data is fed in an online manner to the algorithm. In each step, the algorithm selects a splitting layer and accumulates the regret if the choice is not optimal. In figure 7, we plot the expected cumulative regret along with a 95% confidence interval. We choose the exploration parameter \(\beta=1\). While each plot shows the results for a specific dataset. SplitEE and SplitEE-S outperforms the considered alternatives, yielding a lower cumulative regret and achieving sub-linear regret growth. We also observe that the SplitEE-S achieves lower regret than the SplitEE. This is because the side information provides the algorithm with additional information about the environment, which can be used to learn the optimal splitting layer quickly. As a result, the algorithm with side information can converge to the optimal policy more quickly. As observed from the figure 7, the regret starts saturating after the first 2000 samples for SplitEE and after 1000 samples for SplitEE-S. ## 6. Results In table 2, we report the accuracy and cost across different datasets as well as different models. SplitEE achieves smallest performance drop with a performance drop of (\(<2\%\)) against the final-exit and largest depreciation in cost (\(>50\%\)) as compared to final-exit. For the SciTail dataset, we are getting same accuracy as the final layer. 
This behaviour is observed as for most of the samples in SciTail, the gain in confidence is not sufficient in the initial layers, hence SplitEE offloads most of the samples and achieves similar performance. It achieves a smaller cost than other baselines since DeeBERT and ElasticBERT process every sample to deeper exits to meet the confidence threshold and accumulate more cost. We observed that in QQP \(15-20\%\) samples were misclassified with high confidence. Hence ElasticBERT exits many samples at initial layers but with a miss-classification incurring a lower cost. However, we have gained accuracy from the final layer. The lower accuracy at final layer is the effect of overthinking1 during inference. in general, the higher costs of DeeBERT and ElasticBERT could be explained as they process the sample till deeper exits until the sample's confidence is above a given threshold. However, SplitEE suggests offloading if the sample does not gains sufficient confidence till the splitting layer. Accuracy of SplitEE is also consistently higher as we also utilize the final layer for inference in conjunction with the splitting layer. Since the accuracy of the final exit is better than that of intermediate ones, SplitEE achieves higher accuracy than other baselines. Footnote 1: overthinking in inference is similar to over-fitting in training. ## 7. Conclusion We addressed the problem of using DNNs in resource-constraint edge devices like Mobile and IoTs. We proposed a new using mobile-cloud co-inference by combining Split computing and Early exits both of which are independently proposed to address the problem of deploying DNNs in resource-constrained environment. In our approach, part of DNN is deployed on the resource-constraint edge device and the remaining on the cloud. In the last layer of DNN implemented on the edge device, we make the inference, and depending on confidence in the inference, the sample either makes an exit or offloads to the cloud. The main challenge in our work is to decide where to split the DNN so that it achieves good accuracy while keeping computational and communication costs low. We developed a learning algorithm named SplitEE to address these challenges using the multi-armed bandit framework by defining a reward that takes into account accuracy and costs. Also, in our setup ground truth labels are not available. Hence SplitEE works in an unsupervised setting using confidence in prediction as a proxy for accuracy. Our experiments demonstrated that SplitEE achieves a significant reduction of cost (up to 50 %) with a slight reduction in accuracy (less than 2 %). We also developed a variant of SplitEE that exploits the side observation to improve performance. Our work can be extended in several ways. 
First, SplitEE assumed that the threshold used to decide whether to exit or offload is fixed based on offline validation. However, this can be adapted based on new samples and can be made a learnable parameter. Also, in our work, we looked at an optimal split across all the samples. However, that can also be adaptive based on the sample. Each sample is of a different difficulty level, and deciding the split based on its difficulty can further improve the prediction accuracy while still keeping the cost low.

###### Acknowledgements.
Manjesh K. Hanawal thanks funding support from SERB, Govt. of India, through Core Research Grant (CRG/2022/008807) and MATRICS grant (MTR/2021/000645).

\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Model/Data & \multicolumn{2}{c}{IMDb} & \multicolumn{2}{c}{Yelp} & \multicolumn{2}{c}{SciTail} & \multicolumn{2}{c}{SNLI} & \multicolumn{2}{c}{QQP} \\ \hline & Acc & Cost & Acc & Cost & Acc & Cost & Acc & Cost & Acc & Cost \\ \hline Final-exit & 83.4 & 30.0 & 77.8 & 161.0 & 78.9 & 28.3 & 80.2 & 659.2 & 71.0 & 436.6 \\ Random-exit & -1.4 & -31.3\% & -1.2 & -38.0\% & -0.7 & -31.8\% & -2.0 & -41.5\% & -0.1 & -14.8\% \\ DeeBERT & -6.1 & -43.3\% & -2.5 & -59.0\% & -3.6 & -5.3\% & -3.5 & -38.9\% & -6.7 & -50.1\% \\ ElasticBERT & -2.5 & -62.3\% & -2.1 & -62.1\% & -0.1 & -40.2\% & -2.7 & -61.4\% & -0.2 & -57.9\% \\ SplitEE & -1.3 & **-66.6**\% & -1.1 & **-68.3**\% & 0.0 & -49.2\% & -1.6 & **-65.8**\% & -0.1 & **-59.1**\% \\ SplitEE-S & **-1.2** & -64.3\% & **-1.1** & -65.2\% & **0.0** & **-50.5**\% & **-1.7** & -62.5\% & **+0.1** & -55.1\% \\ \hline \end{tabular}
\end{table} Table 2. Main Results: Results on different baselines across different datasets. Cost is left in terms of \(10^{4}\times\lambda\) units. \(\lambda\) is user-defined. The offloading cost is taken as \(5\lambda\) (worst-case).

Figure 7. Regret for different models
2309.13873
Guaranteed Privacy-Preserving $\mathcal{H}_{\infty}$-Optimal Interval Observer Design for Bounded-Error LTI Systems
This paper furthers current research into the notion of guaranteed privacy, which provides a deterministic characterization of the privacy of output signals of a dynamical system or mechanism. Unlike stochastic differential privacy, guaranteed privacy offers strict bounds on the proximity between the ranges of two sets of estimated data. Our approach relies on synthesizing an interval observer for a perturbed linear time-invariant (LTI) bounded-error system. The design procedure incorporates a bounded noise perturbation factor computation and observer gains synthesis. Consequently, the observer simultaneously provides guaranteed private and stable interval-valued estimates for a desired variable. We demonstrate the optimality of our design by minimizing the $\mathcal{H}_{\infty}$ norm of the observer error system. Furthermore, we assess the accuracy of our proposed mechanism by quantifying the loss incurred when considering guaranteed privacy specifications. Finally, we illustrate the outperformance of the proposed approach to differential privacy through simulations.
Mohammad Khajenejad, Sonia Martinez
2023-09-25T04:55:48Z
http://arxiv.org/abs/2309.13873v2
Guaranteed Privacy-Preserving \(\mathcal{H}_{\infty}\)-Optimal Interval Observer Design for Bounded-Error LTI Systems ###### Abstract This paper furthers current research into the notion of _guaranteed privacy_, which provides a deterministic characterization of the privacy of output signals of a dynamical system or mechanism. Unlike stochastic differential privacy, guaranteed privacy offers strict bounds on the proximity between the ranges of two sets of estimated data. Our approach relies on synthesizing an interval observer for linear time-invariant (LTI) bounded-error systems. The design procedure incorporates a bounded noise perturbation factor computation and an observer gain synthesis. The observer simultaneously provides guaranteed private and stable interval-valued estimates for the desired variable. We demonstrate the optimality of our design by minimizing the \(\mathcal{H}_{\infty}\) norm of the observer error system. Lastly, we assess the accuracy of our proposed mechanism by quantifying the loss incurred when considering guaranteed privacy specifications, and illustrate our approach outperforming to differential privacy through simulations. ## I Introduction The preservation of data privacy and security has become a pivotal concern in the oversight of cyber-physical systems (CPS) and their public credibility. Malicious actors can expand the scope of their attacks by extracting valuable information from the numerous physical, control, and communication components of the system; inflicting harm upon both the CPS and its users. While this data may initially be hidden, such information may be inferred by the examination of other mixed data, which made available either unintentionally or to provide a system-wide service. Consequently, a significant endeavor is underway to develop resilient control strategies that ensure data security within these systems [1]. This manuscript contributes to this area of research by examining a new concept of guaranteed privacy and its application in dynamic system estimation. _Literature Review._ Numerous information-theoretic notions have been proposed to measure the concept of privacy, and these definitions can be put into practice when dealing with the analysis of real-time data streams [2]. A main approach to this is _differential privacy_[3], originally proposed for the protection of databases of individual records subject to public queries. A system handling sensitive inputs achieves differential privacy through the randomization of its responses. This randomization is carefully designed to ensure that the distribution of publicly disclosed outputs remains relatively insensitive to the data contributed by any individual participant. This concept has been broadened and applied across various domains, including machine learning and regression [4, 5, 6], control, estimation, and verification [7, 8], multi-agent systems (consensus, message passing) [9, 10, 11], as well as optimization and games [12, 13, 14]. Considering dynamic settings, differential privacy has been applied to filtering, assuming either that the statistical characterizations of uncertainties are known [15] or that there is no disturbance [16]. However, these approaches are not applicable to bounded-error settings where uncertainties are only assumed to be bounded (set-valued) with unknown distributions. In such settings, interval observers [17, 18] are capable of providing guaranteed and uniformly bounded state estimates. 
The work in [19] proposed a differentially private mechanism to augment an existing interval observer for LTI systems. This was done via an input perturbation mechanism, by which stochastic bounded-support noise was added to each individual's data prior to sending it to the observer. The existence of such initial stable observer (i.e, a stabilizing gain) was assumed to be granted. Moreover, after the injection of the additional stochastic perturbation, neither the correctness, i.e., the former property of the observer, nor its stability were re-evaluated. While [19] provided a first design method that is inclusive of differential privacy, the question of guaranteed-private-stable and optimal design was left unaddressed, which this paper contributes toward. _Contributions._ We start by refining a new notion of guaranteed privacy, which characterizes privacy in terms of how close the ranges of two set-valued estimates of the published data are. As opposed to stochastic differential privacy, guaranteed privacy is deterministic, i.e., provides hard bounds for the distance between _any_ two possible values belonging to the guaranteed set of estimates of the published data. Then, we synthesize an interval observer for LTI bounded-error systems, though designing a bounded noise perturbation factor, as well as an observer gain. The observer simultaneously returns guaranteed private and stable interval-valued estimates of the desired variable. Further, we show that our design is optimal, in the sense that it minimizes the \(\mathcal{H}_{\infty}\) norm of the observer error system. Finally we study the accuracy of our proposed mechanism by quantifying the loss due to considering guaranteed privacy specifications. ## II Preliminaries In this section, we introduce basic notation, as well as preliminary concepts and results used in the sequel. _Notation._\(\mathbb{R}^{m},\mathbb{R}^{m\times p}\), and \(\mathbb{R}^{m}_{\geq 0}\) denote the \(m\)-dimensional Euclidean space, the sets of \(m\) by \(p\) matrices, and nonnegative vectors in \(\mathbb{R}^{m}\), respectively. Also \(\mathbf{1}_{m},\mathbf{0}_{m}\), and \(\mathbf{0}_{m\times p}\) denote the the vectors of ones and zeros in \(\mathbb{R}^{m}\), and the matrix of ones in \(\mathbb{R}^{m\times p}\), respectively. Further, \(\mathbb{D}^{m\times 0}_{>0}\) denotes the space of \(n\) by \(n\) diagonal matrices with positive diagonals. Given \(M\in\mathbb{R}^{m\times p}\), \(M^{\top}\) represents its transpose, \(M_{ij}\) denotes \(M\)'s entry in the \(i^{\text{th}}\) row and the \(j^{\text{th}}\) column, \(M^{\oplus}\triangleq\max(M,\mathbf{0}_{m\times p})\), \(M^{\ominus}=M^{\oplus}-M\), \(|M|\triangleq M^{\oplus}+M^{\ominus}\), and \(\sigma_{\max}(M)\triangleq\max_{x}\|Mx\|_{2}\) s.t. \(\|x\|_{2}\triangleq\sum_{i=1}^{p}x_{i}^{2}=1\) denotes the maximum singular value of \(M\). Furthermore, for \(a,b\in\mathbb{R}^{n},a\leq b\) means \(a_{i}\leq b_{i},\forall i\in\{1,\ldots,m\}\), while \(\operatorname{diag}(D^{1},\ldots,D^{N})\) denotes a block diagonal matrix with diagonal blocks \(D^{1},\ldots,D^{N}\). Further, a function \(\theta:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is of class \(\mathcal{K}\) (resp. \(\mathcal{K}_{\infty}\)) if it is continuous, and strictly increasing (resp. if is of class \(\mathcal{K}\) and also unbounded). 
Moreover, \(\kappa:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) if of class \(\mathcal{KL}\) if for each fixed \(k\geq 0\), \(\kappa(\cdot,t)\) is of class \(\mathcal{K}\) and for each \(s\geq 0\), \(\kappa(s,t)\) decreases to zero as \(t\rightarrow\infty\). Finally, given any arbitrary sequence \(\{s_{k}\}_{k=0}^{\infty}\), \(\|s\|_{\ell_{2}}\triangleq\sqrt{\Sigma_{k=0}^{\infty}\|s_{k}\|_{2}^{2}}\) and \(\|s\|_{\ell_{\infty}}\triangleq\sup_{k\in\mathbb{K}}\|s_{k}\|_{2}\) denote its \(\ell_{2}\) and \(\ell_{\infty}\) signal norms, respectively. **Definition 1** (Intervals).: _An \(m\)-dimensional interval \(\mathcal{I}\triangleq[\underline{z},\overline{z}]\subset\mathbb{R}^{m}\), is the set of all real vectors \(z\in\mathbb{R}^{m}\) that satisfy \(\underline{z}\leq z\leq\overline{z}\). Moreover, \(\operatorname{diam}(\mathcal{I})\triangleq\|\overline{z}-\underline{z}\|_{ \infty}\triangleq\max_{i\in\{1,\cdots,m\}}|\overline{z}_{i}-\underline{z}_{i}|\) denotes the diameter or interval width of \(\mathcal{I}\), while \(c\triangleq\frac{z+\overline{z}}{2}\) is the center of \(\mathcal{I}\). Finally, \(\mathbb{IR}^{n}\) denotes the space of all \(n\)-dimensional intervals, also referred to as interval vectors._ **Proposition 1**.: _[_20_, Lemma 1]_ _Let \(A\in\mathbb{R}^{p\times m}\) and \(\underline{x}\leq x\leq\overline{x}\in\mathbb{R}^{m}\). Then, \(A^{\oplus}\underline{x}-A^{\ominus}\overline{x}\leq Ax\leq A^{\oplus}\overline {x}-A^{\ominus}\underline{x}\). As a corollary, if \(A\) is non-negative, \(A\underline{x}\leq Ax\leq A\overline{x}\)._ ## III Problem Formulation ### _System Assumptions_ Consider a set of \(N\) linear time invariant discrete-time bounded-error systems (agents) with the following dynamics: \[x_{k+1}^{i} =A^{i}x_{k}^{i}+\sum_{j\neq i}A^{ij}x_{k}^{j}+W^{i}w_{k}^{i}, \tag{1}\] \[y_{k}^{i} =C^{i}x_{k}^{i}+V^{i}v_{k}^{i},\] where \(k\in\mathbb{K}\triangleq\mathbb{N}\cup\{0\},i\in\{1,\ldots,N\},x_{0}^{i}\in[ \underline{x}_{0}^{i},\overline{x}_{0}^{i}]\), \(x_{k}^{i}\in\mathbb{R}^{n^{i}}\) is the state vector of the agent \(i\) and \(w_{k}^{i}\in\mathcal{I}_{w}^{i}\triangleq[\underline{w}^{i},\overline{w}^{i}] \subset\mathbb{R}^{n^{i}_{w}}\) is a bounded process disturbance. Furthermore, at time step \(k\), every system (agent) \(i\) takes (originates) a distinct privacy-sensitive vector-valued measurement signal \(y_{k}^{i}\in\mathbb{R}^{m^{i}}\), which is affected by \(v_{k}^{i}\in\mathcal{I}_{v}^{i}\triangleq[\underline{v}^{i},\overline{v}^{i} ]\subset\mathbb{R}^{n^{i}_{w}}\), a bounded sensor (measurement) noise signal. Finally, \(A^{i}\in\mathbb{R}^{n^{i}\times n^{i}},W^{i}\in\mathbb{R}^{n^{i}\times n^{i} _{w}},C^{i}\in\mathbb{R}^{m^{i}\times n^{i}}\) and \(V^{i}\in\mathbb{R}^{m^{i}\times n^{i}_{w}}\) are known constant matrices, while \(A^{ij},j\neq i\), represent coupling matrices that capture the influence of the other agents on the agent \(i\). Unlike to the work in [19], we do not impose any restrictions on \(W^{i}\). The global system dynamics can be constructed by the agents and with \(n\triangleq\sum_{i=1}^{N}n^{i}\) states and \(m\triangleq\sum_{i=1}^{N}m^{i}\) outputs, as the following plant: \[\mathcal{G}:\begin{cases}x_{k+1}&=Ax_{k}+Ww_{k},\\ y_{k}&=Cx_{k}+Vv_{k}\\ \end{cases}, \tag{2}\] where \(x_{0}\) is unknown, but satisfies \(\underline{x}_{0}\leq x_{0}\leq\overline{x}_{0}\). 
Moreover, \[\xi_{k}\triangleq[(\xi_{k}^{1})^{\top},\ldots(\xi_{k}^{N})^{ \top}]^{\top},\ \forall\xi\in\{x,y,w,v\},\] \[J\triangleq\operatorname{diag}(J^{1},\ldots,J^{N}),\quad\forall J \in\{W,C,V\},\] \[A\triangleq\begin{bmatrix}A^{1}&A^{1,2}&\ldots&A^{1,N}\\ A^{2,1}&A^{2}&\ldots&A^{2,n}\\ \vdots&\vdots&\ddots&\vdots\\ A^{N,1}&A^{N,2}&\ldots&A^{N}\\ \end{bmatrix},\] and the data \(\underline{x}_{0},\overline{x}_{0},A,C,W,V\) are assumed to be public information. Furthermore, the bounded general state and measurement noise signals satisfy \(\underline{\nu}\leq\nu_{k}\leq\overline{\nu},\forall\nu\in\{w,v\}\), where \(\underline{\nu}\triangleq[(\underline{\nu}_{k}^{1})^{\top},\ldots(\underline{ \nu}_{k}^{N})^{\top}]^{\top},\overline{\nu}\triangleq[(\overline{\nu}_{k}^{ 1})^{\top},\ldots(\overline{\nu}_{k}^{N})^{\top}]^{\top}\). There is an operator, whose objective is to obtain interval-valued estimates of \(x_{k}\), i.e., \(\underline{x}_{k}\leq\overline{x}_{k}\), in an optimal manner, such that \(\underline{x}_{k}\leq x_{k}\leq\overline{x}_{k}\). As a consequence, the operator releases interval-valued estimates of the aggregated data given by: \[z_{k}\triangleq\Gamma x_{k}=\sum_{i=1}^{N}\Gamma^{i}x_{k}^{i},\] where \(\Gamma^{i}\) can be any arbitrary matrices. The estimates of \(x_{k}\) are constructed from signals \(y_{k}\) perturbed by some intentionally added bounded noise (separate from the existing measurement noise). The variable \(z_{k}\) represents an alternative and arbitrary output of the system, e.g., a selection (subset) of the individual states or outputs or a linear combination of these, the information of which is valuable for a third purpose. Because of this, \(z_{k}\) are treated differently from \(y_{k}\), even though they are related to each other via \(x_{k}\). Moreover, the operator aims to ensure that the publicly released interval estimates of \(z_{k}=\Gamma x_{k}\) ensure the privacy of the data (including participating agents) in a guaranteed manner. The rationale behind this concern lies in the potential for extracting fresh insights about the multi-agent system through interval estimates of \(z_{k}\). This can be accomplished, for instance, by exploiting linkage attacks. In such scenarios, an adversary can deduce novel information about specific individuals [21] by combining the newly published information with additional side knowledge. To do this, the operator aims to satisfy a deterministic notion of privacy, which ensures that the publicly released \(\underline{z}_{k},\overline{z}_{k}\) guarantee _hard (deterministic) privacy bounds_ for each agent's data. This is motivated by safety-critical settings, which benefit from "hard" bounds. The synthesis of the interval-valued estimates is being done through an \(\mathcal{H}_{\infty}\)-optimal interval observer, which is formally introduced via the following sequence of definitions. **Definition 2** (Interval Framer).: _The sequences \(\{\underline{x}_{k},\overline{x}_{k}\}_{k=0}^{\infty}\) are called lower and upper framers for the states of system \(\mathcal{G}\) if \(\forall w_{k}\in[\underline{w},\overline{w}],\forall v_{k}\in[\underline{v}, \overline{v}],\underline{x}_{k}\leq x_{k}\leq\underline{x}_{k}\). 
Moreover, any dynamical system \(\widehat{\mathcal{G}}\) whose states are framers for the states of \(\mathcal{G}\), i.e., any (tractable) algorithm that returns upper and lower framers for the states of (2), is called an interval framer for \(\mathcal{G}\)._ **Definition 3** (Input-to-State Stability & Interval Observer).: _An interval framer \(\widehat{\mathcal{G}}\) is input-to-state stable (ISS), if the framer error \(e_{k}^{x}\triangleq\overline{x}_{k}-\underline{x}_{k}\) is bounded as follows:_ \[\|e_{k}^{x}\|_{2}\leq\kappa(\|e_{0}^{x}\|_{2},k)+\theta(\|\delta_{\hat{w}}\|_{\ell_{\infty}})\quad\forall k\in\mathbb{K}, \tag{3}\] _where \(\delta_{\hat{w}}\triangleq[\delta_{w}^{\top}\ \delta_{v}^{\top}]^{\top}\) is the augmented vector of noise widths, while \(\kappa\) and \(\theta\) are functions of classes \(\mathcal{KL}\) and \(\mathcal{K}_{\infty}\), respectively. An ISS interval framer is called an interval observer._ **Definition 4** (\(\mathcal{H}_{\infty}\)-Optimal Interval Observer).: _An interval observer is \(\mathcal{H}_{\infty}\)-optimal if its gain is designed such that the \(\mathcal{H}_{\infty}\) norm of its error system, i.e., the \(\ell_{2}\)-gain from the noise interval widths to the framer error, is minimized._ Note that we do not assume that such an observer exists, rather we want to synthesize it while satisfying privacy specifications with deterministic bounds. This is formalized via the notion of _guaranteed privacy_ as described next. ### _Guaranteed Privacy_ To formally define guaranteed privacy, we use a version of _adjacency relation_. Let \(\mathcal{Y}\) denote the space of measured signal sequences \(\{y_{k}\}_{k\geq 0}\), and \(\rho>0\) be given. A symmetric binary relation on \(\mathcal{Y}\), denoted \(\mathrm{Adj}_{\rho}\), identifies the types of variations in \(y\) that we aim to make hard to detect. **Definition 5** (Adjacency Relation).: _For any \(y,y^{\prime}\in\mathcal{Y}\),_ \[\mathrm{Adj}_{\rho}(y,y^{\prime})\text{ if and only if }\|y-y^{\prime}\|_{2}\leq\rho. \tag{4}\] Such an interpretation of adjacent datasets implies that a single participant possibly contributes additively to each \(y^{i}\) in a way that its overall impact on the dataset \(y\) is bounded in \(2\)-norm by \(\rho\). We are ready to formally introduce the notion of guaranteed privacy. **Definition 6** (Guaranteed Privacy).: _Let \(\epsilon,\delta\geq 0\), and \(D\) be a space equipped with the symmetric binary relation \(\mathrm{Adj}_{\rho}\) given in Definition 5. A deterministic set-valued mechanism \(\mathcal{M}:D\rightarrow\mathbb{IR}^{n}\) is \((\epsilon,\delta)\)-guaranteed private w.r.t. \(\mathrm{Adj}_{\rho}\), if for all \(d,d^{\prime}\in D\) such that \(\mathrm{Adj}_{\rho}(d,d^{\prime})\), all \(q\in\mathcal{M}(d)\) and all \(q^{\prime}\in\mathcal{M}(d^{\prime})\), we have_ \[e^{\epsilon}\|q-q^{\prime}\|_{p}\leq\delta,\] _where \(\mathcal{M}(d)\) and \(\mathcal{M}(d^{\prime})\) are the entire interval ranges of \(\mathcal{M}\) applied to \(d\) and \(d^{\prime}\), respectively._ This notion of guaranteed privacy is stronger than the one introduced in [22, Definition 3] for distributed nonconvex optimization. The new definition certifies that any two points \(q,q^{\prime}\) that belong to the interval ranges of \(\mathcal{M}\) when applied to adjacent \(d\) and \(d^{\prime}\) remain close to each other. Instead, the notion in [22] only guarantees that the diameters of \(\mathcal{M}(d)\) and \(\mathcal{M}(d^{\prime})\) remain close to each other. In other words, Definition 6 implies the one in [22], but not conversely. Further, we re-emphasize the difference between this notion and that of differential privacy in [12, 13, 23].
Under differential privacy, the statistics of the output of \(\mathcal{M}\), i.e., the probability of the values of \(\mathcal{M}\), is allowed to change only slightly if there is a slight perturbation of the data \(y\). Instead, when guaranteed privacy is considered, the entire range of the set-valued mechanism \(\mathcal{M}\) is allowed to change only slightly with respect to the perturbed data. With this being said, our problem can be cast as follows: **Problem 1** (Guaranteed Privacy-Preserving Interval Observer Design).: _Given system \(\mathcal{G}\), design a mechanism (or mapping) \(\mathcal{M}\) that simultaneously_ * _outputs framers for_ \(z_{k}\) _through a to-be-designed framer system (cf. Definition_ 2_),_ * _ensures that the framer system is ISS, i.e., the framer system is an interval observer (cf. Definition_ 3_),_ * _satisfies the guaranteed privacy of data_ \(\{y_{k}\}_{k=0}^{\infty}\) _(cf. Definition_ 6_), and_ * _guarantees that the observer design is optimal in the sense of_ \(\mathcal{H}_{\infty}\) _(cf. Definition_ 4_)._ ## IV Guaranteed Privacy-Preserving Interval Observer Design In this section, we introduce our proposed strategy to design a guaranteed privacy-preserving mechanism (or mapping) for interval observer design, addressing Problem 1. Our approach consists of injecting an additional perturbation bounded noise on the outputs of system \(\mathcal{G}\), and then synthesizing stable and guaranteed privacy-preserving estimates of the desired variable \(z_{k}\). We assume that the additional bounded perturbation noise satisfies \(\alpha\underline{v}\leq v_{k}^{a}+v_{k}\leq\alpha\overline{v}\), where \(\alpha>0\) is an additional to-be-chosen degree of freedom, that will be designed later along with the observer gain to satisfy the desired properties of the mechanism. Hence, after injecting the controlled additional noise, the lower and upper bounds of the output noise in the perturbed version of (2) are \(\alpha\underline{v}\) and \(\alpha\overline{v}\), respectively. ### _Framer Structure_ Next, we construct a dynamical system, which depends on a new, to-be-designed gain \(L\), which aims to provide interval estimates for \(x_{k}\), for any arbitrary \(L\) and \(\alpha>0\). The proposed framer has the following structure: \[\widehat{\mathcal{G}}:\begin{cases}\underline{x}_{k+1}=(A-LC)^{\oplus}\underline{x}_{k}-(A-LC)^{\ominus}\overline{x}_{k}+Ly_{k}\\ \quad\quad\quad+W^{\oplus}\underline{w}-W^{\ominus}\overline{w}+(LV)^{\ominus}\underline{v}^{\prime}-(LV)^{\oplus}\overline{v}^{\prime},\\ \overline{x}_{k+1}=(A-LC)^{\oplus}\overline{x}_{k}-(A-LC)^{\ominus}\underline{x}_{k}+Ly_{k}\\ \quad\quad\quad+W^{\oplus}\overline{w}-W^{\ominus}\underline{w}+(LV)^{\ominus}\overline{v}^{\prime}-(LV)^{\oplus}\underline{v}^{\prime},\\ \underline{z}_{k}\quad\quad=\Gamma^{\oplus}\underline{x}_{k}-\Gamma^{\ominus}\overline{x}_{k},\ \overline{z}_{k}=\Gamma^{\oplus}\overline{x}_{k}-\Gamma^{\ominus}\underline{x}_{k},\end{cases} \tag{5}\] and is initialized at \([\overline{x}_{0}^{\top},\underline{x}_{0}^{\top}]^{\top}\). Here, \(\overline{v}^{\prime}\triangleq\alpha\overline{v},\underline{v}^{\prime}\triangleq\alpha\underline{v}\), and \(\underline{z}_{k},\overline{z}_{k}\) can be interpreted as the "outputs" of \(\widehat{\mathcal{G}}\).
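As a sketch of how (5) would be evaluated in practice, the following Python helpers implement one step of the framer recursion and the output framers, using the \(M^{\oplus}/M^{\ominus}\) decomposition from the Preliminaries. Matrix dimensions and numerical values are application-specific placeholders, and the helper names are ours.

```python
import numpy as np

def pos(M):   # M^+ : elementwise max(M, 0)
    return np.maximum(M, 0.0)

def neg(M):   # M^- = M^+ - M, so that M = M^+ - M^- with both parts nonnegative
    return pos(M) - M

def framer_step(x_lo, x_hi, y, A, L, C, W, V, w_lo, w_hi, v_lo, v_hi, alpha):
    """One iteration of the framer recursion (5). v_lo, v_hi are the original
    measurement-noise bounds; alpha rescales them to the perturbed bounds
    v' in [alpha*v_lo, alpha*v_hi]."""
    F = A - L @ C
    LV = L @ V
    vp_lo, vp_hi = alpha * v_lo, alpha * v_hi
    x_lo_next = (pos(F) @ x_lo - neg(F) @ x_hi + L @ y
                 + pos(W) @ w_lo - neg(W) @ w_hi
                 + neg(LV) @ vp_lo - pos(LV) @ vp_hi)
    x_hi_next = (pos(F) @ x_hi - neg(F) @ x_lo + L @ y
                 + pos(W) @ w_hi - neg(W) @ w_lo
                 + neg(LV) @ vp_hi - pos(LV) @ vp_lo)
    return x_lo_next, x_hi_next

def output_framers(x_lo, x_hi, Gamma):
    """Framers of z_k = Gamma x_k, obtained from Proposition 1."""
    return (pos(Gamma) @ x_lo - neg(Gamma) @ x_hi,
            pos(Gamma) @ x_hi - neg(Gamma) @ x_lo)
```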
As we show below, system \(\widetilde{\mathcal{G}}\) is a framer for \(\mathcal{G}\), while simultaneously satisfies stability (i.e., is an observer), maintains guaranteed privacy of the desired variable \(z_{k}=\Gamma x_{k}\), and is optimal in the sense of \(\mathcal{H}_{\infty}\) (cf. Definition 4). First, we show that \(\widehat{\mathcal{G}}\) constructs a framer for \(\mathcal{G}\) for all values of \(\alpha>0\) and \(L\). **Proposition 2** (Framer Property).: _The state trajectory of system (5), initialized at \([\overline{x}_{0}^{\top},\underline{x}_{0}^{\top}]^{\top}\), frames the true state of (2) at each time step \(k\), i.e., \(\underline{x}_{k}\leq x_{k}\leq\overline{x}_{k},\ \forall k\geq 0\). Moreover, \(\forall k\geq 0,\underline{z}_{k}\leq z_{k}-\Gamma_{k}\leq\overline{z}_{k}\), i.e., \(z_{k}\in[\underline{z}_{k},\overline{z}_{k}]\)._ Proof.: Starting from (2) and given that \(L(y_{k}-Cx_{k}-Vv_{k})=0\) holds for any \(L\) with appropriate dimensions, yields \(x_{k+1}=(A-LC)x_{k}+Ww_{k}+Ly_{k}-LVv_{k}\). The results in (5), as well as the inequalities \(\underline{z}_{k}\leq z_{k}=\Gamma_{k}\leq\overline{z}_{k}\), follow from applying Proposition 1 to all multiplications of matrices with uncertain vectors. ### _Observer Input-to-State Stability_ In this subsection, we formalize sufficient conditions to satisfy the stability of the proposed framer (5). These conditions simultaneously satisfy some bounds on the \(\mathcal{H}_{\infty}\)-norm of the observer error dynamics, which is also required in the next subsection where we provide guaranteed privacy-preserving conditions. Starting from (5), and defining \(\delta_{\xi}\triangleq\overline{\xi}-\underline{\xi},\forall\xi\in\{w,v\}\), it is straightforward to obtain the following dynamical system \(\widetilde{\mathcal{G}}\) for the evolution of the observer error \(e_{k}^{x}\triangleq\overline{x}_{k}-\underline{x}_{k}:\) \[\widetilde{\mathcal{G}}:e_{k+1}^{x}=|A-LC|e_{k}^{x}+\Lambda\delta_{\lambda},\ \text{where} \tag{6}\] \[\Lambda\triangleq\left[\left|W\right|\right. \left|LV\right|\right],\ \delta_{\lambda}\triangleq\left[\delta_{w}^{\top}\ \delta_{v}^{\top}\right]^{\top},\ \delta_{v^{\prime}}\triangleq\alpha\delta_{v}.\] The following lemma provides sufficient conditions in the form of mixed-integer semi-definite (MISDP) matrix inequalities for the stability of \(\widetilde{\mathcal{G}}\), and derives an upper bound for its \(\mathcal{H}_{\infty}\)-norm, where the mixed-integer feature of the conditions arises from the presence of the absolute values. **Lemma 1** (Observer Stability and Error Dynamics Upper Bound).: _Suppose there exist \(Q\in\mathbb{D}_{>0}^{n\times n}\), \(\gamma>0\), and \(\widetilde{L}\in\mathbb{R}^{n\times m}\), for which the following MISDP matrix inequality holds:_ \[\begin{bmatrix}Q\ \left|QA-\widetilde{L}C\right|\ \left[Q|W\right|\ \left| \widetilde{L}V\right|\right]&0\\ *&Q&0&I\\ *&*&\gamma I&0\\ *&*&*&\gamma I\end{bmatrix}\succ 0. \tag{7}\] _Then, system \(\widetilde{\mathcal{G}}\) is stable and satisfies:_ \[\left\|\widetilde{\mathcal{G}}\right\|_{\mathcal{H}_{\infty}}\triangleq \sup_{k\geq 0}\frac{\|c_{k}^{\star}\|_{2}}{\|\delta_{\lambda}\|_{2}}\leq\gamma. \tag{8}\] Proof.: The result follows from applying [24, Lemma 2] to (6), as well as defining \(\widetilde{L}\triangleq QL\), and restricting the positive-definite matrix \(Q\succ 0\) to be a diagonal matrix with positive diagonal elements, so that we have \(Q|A-LC|=|Q(A-LC)|=|QA-\widetilde{L}C|\) and \(Q|\widetilde{L}V|=|\widetilde{L}V|\). 
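For a fixed candidate \((Q,\widetilde{L},\gamma)\), the condition of Lemma 1 can be checked numerically without the integer variables that the synthesis problem requires. The sketch below assembles the block matrix in (7) as we read it, together with one step of the error dynamics (6); it should be taken as an illustrative feasibility check, not the authors' design code.

```python
import numpy as np

def lemma1_block_matrix(Q, L_tilde, gamma, A, C, W, V):
    """Assemble the symmetric block matrix of (7) for a given Q (diagonal with
    positive diagonal), L_tilde = Q L, and gamma; feasibility then reduces to a
    positive-definiteness test."""
    n = A.shape[0]
    B12 = np.abs(Q @ A - L_tilde @ C)                       # |QA - L~C|
    B13 = np.hstack([Q @ np.abs(W), np.abs(L_tilde @ V)])   # [Q|W|  |L~V|]
    p = B13.shape[1]                                         # widths of (w, v')
    return np.block([
        [Q,                 B12,               B13,               np.zeros((n, n))],
        [B12.T,             Q,                 np.zeros((n, p)),  np.eye(n)],
        [B13.T,             np.zeros((p, n)),  gamma * np.eye(p), np.zeros((p, n))],
        [np.zeros((n, n)),  np.eye(n),         np.zeros((n, p)),  gamma * np.eye(n)],
    ])

def is_positive_definite(M, tol=1e-9):
    return bool(np.min(np.linalg.eigvalsh((M + M.T) / 2.0)) > tol)

def error_step(e, A, L, C, W, V, delta_w, delta_vp):
    """One step of the error dynamics (6): e_{k+1} = |A - LC| e_k + Lambda * delta."""
    Lam = np.hstack([np.abs(W), np.abs(L @ V)])
    return np.abs(A - L @ C) @ e + Lam @ np.concatenate([delta_w, delta_vp])
```

Here the observer gain would be recovered as \(L=Q^{-1}\widetilde{L}\); synthesizing \((Q,\widetilde{L},\gamma)\) themselves requires the mixed-integer formulation discussed later in Remark 1.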
### _Guaranteed Privacy-Preserving Mechanism_ We introduce a set-valued deterministic mechanism (i.e., a mapping \(\mathcal{M}\)) that is guaranteed privacy-preserving for the desired variable \(z_{k}=\Gamma x_{k}\) in the sense of Definition 6. First, note that given the measurement signal \(y_{k}\) and using Proposition 1, the to-be-designed mechanism \(\mathcal{M}\) outputs set-valued estimates of \(z_{k}\in[\underline{z}_{k},\overline{z}_{k}]\) as \[\mathcal{M}(y_{k})=\mathcal{Z}_{k}\triangleq[\underline{z}_{k},\overline{z}_{k}],\ \text{where}\ \underline{z}_{k},\overline{z}_{k}\ \text{are output by the observer (5)}. \tag{9}\] We aim to find sufficient conditions on the design factors \(\alpha\) and \(L\) such that \(\mathcal{M}\) is guaranteed private. To do so, we upper bound the distance between any two arbitrary points \(z_{k}\in\mathcal{Z}_{k}=\mathcal{M}(y_{k})\) and \(z_{k}^{\prime}\in\mathcal{Z}_{k}^{\prime}=\mathcal{M}(y_{k}^{\prime})\) by upper bounding the distance between the centers of the intervals \(\mathcal{Z}_{k}\) and \(\mathcal{Z}_{k}^{\prime}\), and by using the fact that the stability of the observer (5) implies boundedness of the diameters of \(\mathcal{Z}_{k}\) and \(\mathcal{Z}_{k}^{\prime}\). The procedure is formalized through the following theorem. **Theorem 1** (Guaranteed Privacy-Preserving Mechanism).: _Consider system (2) (after adding the bounded perturbation noise), and suppose that all the assumptions in Lemma 1 hold (consequently, system (5) is stable and (8) holds). Let \(\rho>0\), \(\epsilon,\delta\geq 0\), let \((\gamma,Q,\widetilde{L})\) be a solution to the MISDP in (7), and let \(L=Q^{-1}\widetilde{L}\). Then, the mechanism \(\mathcal{M}\) defined in (9) is \((\epsilon,\delta)\)-guaranteed private w.r.t. \(\mathrm{Adj}_{\rho}\) given in (4), if for some \(\eta,\alpha>0\), the following matrix inequalities hold:_ \[\begin{bmatrix}Q\ \left|QA-\widetilde{L}C\right|\ \widetilde{L}&0\\ *\ Proof.: The proof directly follows from Lemma 1, Theorem 1, and defining the new decision variable \(\beta=\gamma\alpha\). **Remark 1**.: _Note that while the MISDP in (19) involves more computational complexity compared to a typical semi-definite program (SDP), it remains tractable using off-the-shelf solvers such as CUTSDP. Additionally, it is important to note that (19) is solved in an offline manner, i.e., it only needs to be computed once for any given design, so its computational demands are quite minimal in contrast to online design methods. Alternatively, by imposing supplementary linear constraints, the error dynamics can be upper bounded by a linear comparison system. Hence, (19) can be relaxed to an SDP with reduced computational complexity. However, this comes at the cost of introducing more conservatism and sacrificing optimality._ ## V Accuracy Analysis As a consequence of introducing perturbations to ensure guaranteed privacy, the estimates of \(x\) and \(z\) incur an accuracy loss. In this section, we quantify the difference between the interval estimate widths, i.e., the observer errors, with and without considering guaranteed privacy.
First, it is straightforward to see that in the absence of privacy considerations, a non-private (NP) \(\mathcal{H}_{\infty}\)-optimal interval observer can be designed by implementing (5) without any additional perturbation noise, i.e., with \(\alpha^{\text{NP}}=1\), and with an observer gain \(L^{\text{NP}}=(Q^{\text{NP}}_{*})^{-1}\widetilde{L}^{\text{NP}}_{*}\), where \((Q^{\text{NP}}_{*},\widetilde{L}^{\text{NP}}_{*},\gamma^{\text{NP}}_{*})\) is a solution to the following MISDP: \[\min_{\gamma>0,\ Q\in\mathbb{D}_{>0}^{n\times n},\ \widetilde{L}}\gamma\quad\text{s.\,t. matrix inequality (7)}. \tag{20}\] observer gain \(L_{*}=\begin{bmatrix}l_{1}&l_{2}&l_{0}&l_{0}&l_{0}\\ l_{0}&l_{1}&l_{2}&l_{0}&l_{0}\\ l_{0}&l_{0}&l_{1}&l_{2}&l_{0}\\ l_{0}&l_{0}&l_{0}&l_{1}&l_{2}\\ l_{2}&l_{0}&l_{0}&l_{0}&l_{1}\end{bmatrix}\) and noise attenuation level \(\gamma_{*}=0.865\), where \(l_{0}=-0.005,l_{1}=0.425\) and \(l_{2}=0.076\). The red plots in Figure 1 indicate the guaranteed private (GP) upper and lower framers (left) and the estimate interval widths (right) of \(z_{k}\), obtained by observer \(\widehat{\mathcal{G}}\) with the computed \(L_{*}\) and \(\alpha_{*}\). As can be seen, the plotted framers contain the actual state trajectory (the green plot). Moreover, as expected, the non-private (NP) interval estimates (black plots) are tighter than the GP ones due to the additional required guaranteed privacy-preserving constraints (10) and (11), as well as the additional perturbation noise. Finally, for the sake of comparison, we implemented a slightly modified version of the differentially private (DP) interval observer in [19], using our computed gain \(L_{*}\), by perturbing the input data \(y_{k}\) with additional stochastic noise as described in [19]. As can be seen in Figure 1, our proposed GP interval estimates (red) outperform the DP ones (blue). ## VII Conclusion and Future Work A novel generalization of guaranteed privacy was proposed in this paper, affording a deterministic portrayal of privacy. In contrast to stochastic differential privacy, guaranteed privacy was found to impose precise constraints on the proximity between the ranges of two sets of estimated data. To do so, an interval observer was designed for bounded-error LTI systems, incorporating a bounded noise perturbation factor and an observer gain. The observer simultaneously output guaranteed private and stable interval-valued estimates for the desired variable. The optimality of the design was demonstrated, and the accuracy of the mechanism was assessed by quantifying the loss incurred when considering guaranteed privacy specifications. Future work will consider nonlinear systems and the combination of privacy and attack resilience.
2310.20316
HWD: A Novel Evaluation Score for Styled Handwritten Text Generation
Styled Handwritten Text Generation (Styled HTG) is an important task in document analysis, aiming to generate text images with the handwriting of given reference images. In recent years, there has been significant progress in the development of deep learning models for tackling this task. Being able to measure the performance of HTG models via a meaningful and representative criterion is key for fostering the development of this research topic. However, despite the current adoption of scores for natural image generation evaluation, assessing the quality of generated handwriting remains challenging. In light of this, we devise the Handwriting Distance (HWD), tailored for HTG evaluation. In particular, it works in the feature space of a network specifically trained to extract handwriting style features from the variable-length input images and exploits a perceptual distance to compare the subtle geometric features of handwriting. Through extensive experimental evaluation on different word-level and line-level datasets of handwritten text images, we demonstrate the suitability of the proposed HWD as a score for Styled HTG. The pretrained model used as backbone will be released to ease the adoption of the score, aiming to provide a valuable tool for evaluating HTG models and thus contributing to advancing this important research area.
Vittorio Pippi, Fabio Quattrini, Silvia Cascianelli, Rita Cucchiara
2023-10-31T09:44:27Z
http://arxiv.org/abs/2310.20316v1
# HWD: A Novel Evaluation Score for Styled Handwritten Text Generation ###### Abstract Styled Handwritten Text Generation (Styled HTG) is an important task in document analysis, aiming to generate text images with the handwriting of given reference images. In recent years, there has been significant progress in the development of deep learning models for tackling this task. Being able to measure the performance of HTG models via a meaningful and representative criterion is key for fostering the development of this research topic. However, despite the current adoption of scores for natural image generation evaluation, assessing the quality of generated handwriting remains challenging. In light of this, we devise the Handwriting Distance (HWD), tailored for HTG evaluation. In particular, it works in the feature space of a network specifically trained to extract handwriting style features from the variable-length input images and exploits a perceptual distance to compare the subtle geometric features of handwriting. Through extensive experimental evaluation on different word-level and line-level datasets of handwritten text images, we demonstrate the suitability of the proposed HWD as a score for Styled HTG. The pretrained model used as backbone will be released to ease the adoption of the score, aiming to provide a valuable tool for evaluating HTG models and thus contributing to advancing this important research area. ## 1 Introduction Styled Handwritten Text Generation (Styled HTG) entails producing realistic images of arbitrary handwritten text in a desired style given in the form of one or more exemplar style images. Those images can be used to: train models for document analysis tasks (_e.g._, Handwritten Text Recognition [4, 5, 6, 9, 12, 22, 24, 35, 42]) in low-resource scenarios such as ancient languages or documents by specific authors; enhance the user experience in augmented reality scenarios and the public engagement at GLAM institutions (galleries, libraries, archives, and museums); assist physically impaired people in taking notes on electronic devices. Although HTG is receiving increased interest in recent years [3, 16, 23, 34], a precise standard for evaluation has not yet been defined. It is important to note that evaluating the similarity between two writers' calligraphy involves more than considering the overall appearance of the text images. This concerns color and texture of background and ink, stroke thickness, slant, and roundness. Nonetheless, handwriting is characterized by the shape of individual characters, ligatures, and the spacing between characters in the text. Hence, a thorough evaluation procedure should consider all these factors to ensure an accurate and meaningful assessment of the ability of Styled HTG models to imitate a desired handwriting. The commonly adopted approach is the one proposed by Kang et al. [23], which entails exploiting the Frechet Inception Distance (FID) [21]. The FID is computed on the features extracted by an Inception-v3 ConvNet trained on natural images from ImageNet. Thus, the FID is somehow unsatisfactory for measuring the faithfulness in the handwriting style of the generated images but rather captures the overall appearance [23, 24, 34]. Another critical point of applying the FID in the text images domain is that it leverages a backbone trained on images whose aspect ratio is very different from text images.
These latter are usually wider than high, while natural images in ImageNet are roughly squared. For this reason, in HTG evaluation, the FID is commonly computed on the beginning part of the text image, discarding the rest. This approach offers invariance with respect to the textual content but is prone to miss artifacts and dissimilarities that appear in the center or at the rightmost part of the image. Therefore, while the FID can help evaluate certain aspects of HTG, it does not provide a complete picture of the quality of the generated handwriting images. Moreover, different studies [7, 11] have shown that the FID is biased on the number of samples used for its calculation, resulting in lower values with more instances and higher and more unstable values when reducing the number of examples. To address these challenges, we propose a new evaluation score called Handwriting Distance (HWD). The main characteristics of HWD consist of: (1) the use of robust style features extracted by a backbone trained on a large dataset of synthetic text images; (2) the application of a perceptual distance for style comparison; (3) the ability to handle variable-length text images; (4) the numerical stability even when computed on a limited number of samples. This is particularly useful in Styled HTG, where only a few real images per author are usually available. To assess the suitability of the proposed score for the Styled HTG task, we examine the values obtained when comparing sets of text images in the same style with respect to those obtained when comparing images in different styles. We demonstrate that the HWD effectively captures differences in the handwriting and is numerically stable. This makes it more suitable than the FID in expressing the performance of the Styled HTG models. Overall, the HWD could contribute to the field of HTG by providing a tailored and practical evaluation score to measure the realism and faithfulness of generated text images. The code and the weights of the convolutional backbone used to compute the HWD score can be found here: [https://github.com/aimagelab/HWD](https://github.com/aimagelab/HWD). ## 2 Related Work The early-proposed approaches to HTG [20, 39] apply handcrafted geometric statistical-based feature extraction on human-made glyphs segmentations, then combine them with appropriate ligatures and render the results with texture and background blending. The two major limitations of these approaches are their inability to render glyphs and ligatures not observed for each style and their reliance on costly human intervention. In contrast, recent deep learning-based HTG approaches can infer styled glyphs even when they are not ex plicitly shown in the style examples. A majority of learning-based solutions are based on GANs [19], either unconditioned (for Non-Styled HTG) or conditioned on a variety of handwriting style examples (Styled HTG). In the second scenario, style samples may consist of whole sentences or lines [13], a few words [3, 23, 34], or a single word [17, 18, 27]. The first learning-based Non-Styled HTG approach, proposed in [1], entails generating fixed-sized images conditioned on the embedding of the desired textual content but does not control the calligraphic style of the output. Since, different from natural images, handwritten text images are highly variable-sized, subsequent approaches [16] entail concatenating character images. 
Styled HTG approaches condition the generation on both the text content and a vector representation of the style [13, 17, 18, 23, 24, 31]. The two representations are obtained separately and then combined for generation, preventing those approaches from effectively capturing local writing style and patterns. On the other hand, the Transformer-based [38] approach adopted in [3, 34] exploits the cross-attention mechanism between the style vector representation and the content text representation to entangle the content and the style, thus better rendering local style patterns. **HTG Evaluation.** As for the performance evaluation, models for HTG are evaluated by exploiting the FID [21]. The FID is a commonly used score for evaluating the quality of generative models. It exploits image representations extracted from an Inception-v3, which are fit to two multivariate Gaussian distributions, one from real images and the other from generated images. For this reason, the FID tends to focus more on general image characteristics rather than the shape of handwriting. Furthermore, the backbone network used to compute the FID is trained on ImageNet, which contains natural images whose domain and aspect ratio are completely different from those of handwritten text images, which can result in misleading values. Other adopted metrics are the Geometric Score [25] and the Character Error Rate. The latter measures the readability of the generated text images, which serves as a proxy to express their realism, _i.e._, how similar to a well-formed text they look. However, as discussed in [23, 24, 34], these measures fail to capture all the desired characteristics of a well-generated styled text image. In this work, we propose a score specific for evaluating HTG models to address this point. ## 3 Handwriting Distance Inspired by the strategy adopted for natural image generation evaluation, Styled HTG works employ the FID score, adapted as described in [23] and depicted in Figure 1 (bottom). In this work, we devise an alternative score for evaluating the performance of Styled HTG models, called HWD. The main characteristics of HWD are the domain-aware image representation strategy and the use of a perceptual distance instead of a distribution distance. The pipeline for computing our proposed score is described below and depicted in Figure 1 (top). ### Text Images Representation When evaluating Styled HTG, we consider the set of real images \(\mathbf{X}_{m}{=}\{\mathbf{x}_{m,i}\}_{i=0}^{N}\), where \(N\) is the number of samples for writer \(m\), and the set \(\mathbf{X}^{\prime}{}_{m}{=}\{\mathbf{x}^{\prime}{}_{m,i}\}_{i=0}^{N^{\prime}}\) of generated images in the style of writer \(m\). In general, the number of generated images, \(N^{\prime}\), can differ from \(N\). **Domain-Specific Feature Extraction.** Given the constrained domain of the images for the HTG task, we propose to use a domain-specific backbone as feature extractor. In particular, we adopt a VGG16 pretrained on Font\({}^{2}\), a large synthetic dataset of text images built according to the procedure presented in [33]. We choose VGG16 as a backbone for its superiority in extracting meaningful style representations compared to deeper networks featuring skip connections [40]. The pretraining dataset contains more than 100M samples obtained by rendering 10400 English words in 10400 calligraphic fonts and superimposing them to paper-like backgrounds. 
Random geometric distortions and color transformations are also performed to increase the realism of the images. We train the VGG16 backbone to classify the images according to their calligraphic font. The high visual variability of the dataset forces the network to learn features that represent the handwriting given by the font and disregard the overall visual appearance. In this way, the features extracted by our adopted backbone are strong representations of the handwriting style. For computing the HWD, we feed the pretrained VGG16 with the whole image resized to a height of 32 pixels while keeping the aspect ratio. Note that since the images have different widths, we feed them to the network one by one to avoid using padding. **Variable-length Images Representation.** To handle the difference in the aspect ratio of the natural images in ImageNet and text images, the strategy adopted in [23] and the following HTG approaches entails feeding to the Inception-v3 backbone only a squared, truncated sample. The text image is reshaped to a height of 32 pixels, keeping the aspect ratio, and then cropped at the beginning to have a fixed width (32 pixels). This strategy has two drawbacks. First, it disregards a large part of the text image. Second, the characters appearing more often at the beginning of words due to language regularity assume more importance than others in computing the score. Consequently, this strategy prevents evaluating the ability of the HTG models to generate images of variable-length texts with the same quality throughout the entire image. To overcome these limitations, we propose processing the whole images with the VGG16 backbone for the HWD. The representations are obtained from the feature maps of the last convolutional block. Such feature maps have shape \(1\times W\times 512\), where \(W\) depends on the input text image width. From those maps, we obtain a set of 512-sized feature vectors that represent the images. As a result, wide images are represented by a larger set of vectors than short ones. When computing the HWD for writer \(m\), all the vectors from all the real image sets are gathered and then averaged (the same applies to the generated images). In the gathering set, there will be more vectors coming from the longer words, and thus, the longer words will have a bigger impact on the mean of the gathering set. In summary, each image for writer \(m\), both real and generated, is represented by a set of vectors. Figure 1: Pipelines for the computation of the proposed HWD score (top) and the commonly used FID score (bottom) from a set of real and generated text images with varying widths. Top: the samples are processed by a VGG16 pretrained on a synthetic dataset of text images. The different widths of the inputs result in a different number of feature vectors. The value of HWD is the Euclidean distance between the feature vectors’ means \(\mu\). Bottom: the text images are cropped to a square and fed to an Inception-v3 pretrained on ImageNet, obtaining one feature vector per image. The FID is the Fréchet distance between the distributions of the features, represented as multivariate Gaussians with mean \(\mu\) and covariance matrix \(\Sigma\).
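A minimal PyTorch-style sketch of this extraction step is given below; it assumes a torchvision VGG16 whose convolutional trunk would be loaded with the Font\({}^{2}\)-pretrained weights (the weight-loading step is omitted, and all names are illustrative rather than the released implementation).

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Convolutional trunk of VGG16; in practice, the Font^2-pretrained weights released
# with the paper would be loaded into this module before use.
backbone = models.vgg16().features.eval()
to_tensor = transforms.ToTensor()

@torch.no_grad()
def extract_vectors(image_path):
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    img = img.resize((max(32, round(w * 32 / h)), 32))  # height 32, aspect ratio preserved
    x = to_tensor(img).unsqueeze(0)                     # one image at a time, no padding
    fmap = backbone(x)                                  # shape: 1 x 512 x 1 x W'
    return fmap.squeeze(0).squeeze(1).T                 # W' feature vectors of size 512
```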
The real images are represented by sets as \(\mathbf{y}_{m,i}{=}\{f(\mathbf{x}_{m,i})_{j}\}_{j=1}^{W_{i}}\), and the generated ones by \(\mathbf{y}^{\prime}_{m,i}{=}\{f(\mathbf{x}^{\prime}_{m,i})_{j}\}_{j=1}^{W_{i}^{\prime}}\), where \(f(\cdot)\) denotes the feature extraction, and \(W_{i}\) and \(W_{i}^{\prime}\) are the number of vectors extracted from the \(i\)-th real and generated images, respectively. ### Perception-Aware Feature Distance The main idea behind the distribution distance-based evaluation scores, such as the FID, is to evaluate the performance of a generative model by its ability to generate images that match the distribution of the real ones. Our domain of interest is more constrained than the generation of natural images. In fact, Styled HTG entails considering the handwriting, expressed by subtle geometric features other than macroscopic texture. In light of this, we argue that a score capturing the perceptual aspects is more suitable than one based on the distance between feature distributions. Therefore, we employ the Euclidean distance between the averaged feature vectors of the real and generated images in the style of the same writer: \[Y_{m}=\frac{\sum_{i=1}^{N}\sum_{j=1}^{W_{i}}f(\mathbf{x}_{m,i})_{j}}{\sum_{i=1}^{N}W_{i}}\qquad\text{and}\qquad Y_{m}^{\prime}=\frac{\sum_{i=1}^{N^{\prime}}\sum_{j=1}^{W_{i}^{\prime}}f(\mathbf{x}^{\prime}_{m,i})_{j}}{\sum_{i=1}^{N^{\prime}}W_{i}^{\prime}}.\] For the images in the style of writer \(m\), the HWD is given by: \[\text{HWD}_{m}=\|Y_{m}-Y_{m}^{\prime}\|_{2}.\] Note that when computed on robust image representations, _e.g._, obtained from a backbone trained on a semantic prediction task, the Euclidean distance is highly predictive of the perceptual similarity between images [41]. Finally, the HWD on datasets containing images in the style of \(M\) different authors is obtained as \[\text{HWD}=\frac{1}{M}\sum_{m=1}^{M}\text{HWD}_{m}.\] The HWD score has the non-negativity, symmetry, and triangle inequality properties, but it is not guaranteed to exhibit the identity of indiscernibles property, since two non-identical images might have \(\text{HWD}=0\) due to their representation via the pre-trained backbone. This is a desirable characteristic for the HTG task, in which text images containing different texts and different backgrounds should be at low (or even zero) HWD if they are written in the same handwriting style. ## 4 Experimental Analysis For our experimental analysis of the proposed HWD score, we consider images from a number of multi-author and single-author datasets. We compare the HWD score against the FID score in the variant proposed in [23], which is the common approach adopted in Styled HTG. The comparison is performed along two dimensions. First, we assess its capability to recognize corresponding handwriting styles, quantifying the style verification capability with the Overlap coefficient and the Equal Error Rate (EER). Second, we compare the numerical stability of the proposed HWD score and the FID. Additionally, we evaluate current Styled HTG State-of-the-Art models with our score and other common metrics for image generation evaluation. Finally, we conduct extensive ablation analyses on the main components of our approach to investigate their individual contributions. Further results can be found in the Supplementary material. ### Considered Datasets The considered multi-author datasets are described below and in Table 1.
**IAM.** The IAM Database [30] is a collection of greyscale document scans by 657 writers. These are written in English with ink on white paper and cleaned digitally. Here, we use the line-level version of the dataset. **RIMES.** The RIMES Database [2] consists of binary French documents by 1500 authors. For this dataset, we use the line-level version. **CVL.** The CVL Database [26] features word images obtained from RGB scans of English and German manuscripts, written with ink on white paper by 310 writers. **KHATT.** The KHATT Database [29] contains binarized images of handwritten Arabic words handwritten by 838 people. **BanglaWriting.** The BanglaWriting Dataset [32] is composed of greyscale images of Bengalese words handwritten by 212 authors. **NorHand.** The NorHand Dataset [28] features text lines extracted from greyscale scans of ancient documents written with ink on yellowed paper by 12 Norwegian authors. The considered single-author datasets are presented below. **Saint Gall.** The Saint Gall Dataset [14] features binary images of 1410 lines from a medieval manuscript written in Latin with gothic calligraphy. **Washington.** The George Washington Dataset [15] contains binary images of 656 lines from English letters written by American President George Washington. **Rodrigo.** The Rodrigo Database [37] contains 20357 lines extracted from greyscale scans of an historical manuscript, written in Spanish with ink on ancient paper. **ICFHR14.** The ICFHR14 Dataset [36] is a collection of 11473 lines extracted from greyscale scans of ancient pages, written by the English philosopher Jeremy Bentham. **Leopardi.** The Leopardi Dataset [8] is a collection of 2459 lines from RGB scans of letters by the Italian poet Giacomo Leopardi, written with ink on ancient paper. **LAM.** The LAM Dataset [10] includes 25823 text lines images obtained from RGB scans of ancient letters in Italian, written by Lodovico Antonio Muratori. ### Sensitivity to the Handwriting We evaluate the sensitivity of HWD to the handwriting style by splitting the multi-author datasets. We consider half of the samples for each featured writer as references and the other half as if they were the output of a perfect Styled HTG model. Then, we compare the distributions of the HWD and the FID computed on text images of multiple matching and non-matching authors pairs. Note that, in such an ideal case for Styled HTG, both the HWD and the FID should be as close as possible to their best value. Therefore, the more these two distributions are separated, the better the corresponding score captures the handwriting similarity between the considered images. The obtained distributions are reported in Figure 2. We observe that the histograms for the FID show significant overlap and that there is no clear separation between the distributions of matching and non-matching authors pairs. Moreover, except for CVL, which has a higher number of samples per author compared to the other three datasets, the FID values of the corresponding authors' distribution are roughly above 100. Such high FID values highlight the bias of the score when computed on a few samples. On the other hand, the histograms for the HWD are more separated. 
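A minimal sketch of this matching/non-matching protocol is given below, assuming the per-image feature vectors described above have already been extracted (here, features_by_author maps each writer to a list of \(W_{i}\times 512\) arrays; the pairing of non-matching authors is an illustrative choice rather than the exact protocol).

```python
import numpy as np

def hwd(real_vecs, gen_vecs):
    """HWD between two sets of images, each given as a list of (W_i x 512) feature arrays."""
    mu = np.concatenate(real_vecs).mean(axis=0)
    mu_prime = np.concatenate(gen_vecs).mean(axis=0)
    return float(np.linalg.norm(mu - mu_prime))

def matching_vs_non_matching(features_by_author, seed=0):
    """Half of each author's samples acts as the reference set, the other half as the output
    of an ideal HTG model; non-matching pairs are formed across different authors."""
    rng = np.random.default_rng(seed)
    authors = list(features_by_author)
    halves = {}
    for a in authors:
        vecs = features_by_author[a]
        order = rng.permutation(len(vecs))
        half = len(vecs) // 2
        halves[a] = ([vecs[i] for i in order[:half]], [vecs[i] for i in order[half:]])
    matching = [hwd(*halves[a]) for a in authors]
    non_matching = [hwd(halves[a][0], halves[authors[(i + 1) % len(authors)]][1])
                    for i, a in enumerate(authors)]
    return matching, non_matching
```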
In addition, we quantitatively evaluate the style recognition capability by comparing the distributions in Figure 2 in terms of the Overlap coefficient, a statistical index that quantifies the overlap between distributions, and the EER, which represents the point where the False Acceptance Rate equals the False Rejection Rate. The results are reported in Table 1. As we can see, for both the FID and HWD, the Overlap and EER are lower on the IAM and CVL datasets than in KHATT and RIMES. We argue the causes to be, respectively, the languages of the datasets and the similar overall appearance of the samples from the same author. In fact, the FID mainly focuses on the latter, while HWD performs feature extraction with a VGG16 pretrained on Font\({}^{2}\), which contains words in the same language as IAM and CVL. Nonetheless, the HWD score achieves lower Overlap and EER values than the FID in all datasets, including those in languages different from English. Figure 2: Distributions of different scores used to evaluate HTG models when applied on same-author (green) or different-author (red) subsets. The overlap area is in dark red. ### Sensitivity to the Number of Samples Numerical stability is an important factor to consider when assessing a score. As argued by [7, 11], the FID exhibits a strong bias towards the number of samples. To assess the stability of HWD, we consider the large single-author LAM dataset and compute the values of the HWD and FID on images from LAM against variably sized subsets of images from ICFHR14, Saint Gall, Leopardi, Rodrigo, Washington, and LAM itself. We determine the mean of the scores over multiple runs and also consider the range between the 25th and 75th percentiles of values. The results are reported in Figure 3. It can be observed that the FID (left-most plot) shows a significant bias towards the number of images: all the curves start with very high values and decrease slowly until at least around 2000 samples are used to obtain the score. On the other hand, the HWD (right-most plot) is more stable with respect to the subset sizes, reaching a plateau even when computed on around 100 samples. To further investigate the cause of this behavior, we compute the Euclidean distance on the Inception-v3 features (FID w/ Euclidean) and the Frechet distance on the VGG16 features (HWD w/ Frechet). In both cases (center-left plot and center-right plot of Figure 3, respectively), we can observe a slight bias with respect to the number of samples used for the computation. Moreover, the computation of these scores necessitates roughly one more order of magnitude of images to reach a plateau, compared to the case of the FID and the HWD, respectively. For the FID w/ Euclidean, we argue that this is because its Inception-v3 backbone is applied to the beginning part of the text images. As a result, more samples are needed to effectively represent the author's handwriting. In the case of the HWD w/ Frechet, we argue that this score suffers from the numerical instability of the Frechet Distance, which fits the image representations to multivariate Gaussians. Further, by comparing the values in the plots in light of the exemplar images reported in Figure 3, we can make some considerations on the convolutional backbones used as feature extractors. Inception-v3-based scores assign dataset distances according to their overall appearance.
For instance, ICFHR14 and Rodrigo (both containing greyscale images) are close to each other, and the same happens for Saint Gall and Washington (both containing binarized images). On the other hand, the VGG16-based scores focus more on the handwriting. Thus, Saint Gall is isolated from the others because of its peculiar gothic calligraphy, while ICFHR14 is closer to Washington and Leopardi than to Rodrigo, reflecting the similarity in the cursive calligraphies. This characteristic is better suited for the Styled HTG task, as we are interested in evaluating the ability of a model to mimic the handwriting style. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & **Language** & **Samples** & **Authors** & **Avg. Samples per author** & **FID Overlap** & **FID EER** & **HWD Overlap** & **HWD EER** \\ \hline **Norhand** & Norwegian & 21939 & 12 & 1828.25 & 4.2 & 4.2 & 0.0 & 0.0 \\ **BanglaWriting** & Bengali & 17265 & 212 & 81.44 & 11.6 & 5.6 & 6.1 & 2.9 \\ **CVL** & English/German & 13473 & 310 & 43.46 & 24.7 & 12.5 & 0.0 & 0.0 \\ **IAM** & English & 13353 & 657 & 20.32 & 27.1 & 13.6 & 0.7 & 0.3 \\ **KHATT** & Arabic & 11427 & 838 & 13.64 & 40.3 & 21.6 & 12.0 & 5.9 \\ **RIMES** & French & 12111 & 1500 & 8.07 & 39.1 & 20.8 & 7.0 & 3.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Overlap and EER of the FID and HWD values calculated on the images in the considered multi-author datasets. ### _Styled HTG Models Evaluation_ For reference, we report the performance of State-of-the-Art Styled HTG models trained on the IAM dataset. In particular, we consider the conditional-GAN-based HIGAN+ approach [18], which uses a single image to condition the generation and disentangles text and style, and the few-shot Convolutional-Transformer-based models HWT [3] and VATr [34], which capture both global and character-level style-text dependencies by exploiting cross-attention. The latter approach represents the textual input with visual archetypes (_i.e._, Unifont-rendered characters) instead of one-hot vectors, as done in the other two. In Table 2, we evaluate the aforementioned models in terms of the proposed HWD and other scores for image generation evaluation. In particular, we consider the FID score; the Geometric Score (GS) [25], which compares the data and model manifold estimates; and the Kernel Inception Distance (KID) [7], which relaxes the Gaussian assumption of the FID and uses the Maximum Mean Discrepancy to compare the distributions of the features. ### _Ablation Analysis_ Finally, we analyze the effects of the four main components of our proposed score: the backbone, the pretraining dataset, the input image portion, and the distance measure. We perform the ablation analysis on the IAM dataset and report the results in Table 3. Looking at the results, we notice that the backbone used to extract the features plays a crucial role in the separability scores, as the use of VGG16 leads to very good results also when pretrained on ImageNet. A second important aspect is the feature distance metric used. The Frechet distance on the VGG16-extracted features achieves good results on all settings. Nevertheless, it is not influenced by the backbone pretraining and the image portion used to extract the features.
On the other hand, the Euclidean distance fully exploits the input information (_i.e._, the quantity and the type of feature vectors) and thus is the best in the HWD setting. ### _Sensitivity to the Visual Appearance_ To assess the sensitivity to handwriting-related visual aspects, we compare the FID and HWD between reference images and increasingly altered ones, taken from the LAM dataset. In particular, the considered alterations entail shear, erosion, and dilation to simulate handwriting slant and stroke thickness. The results are reported in Figure 4 and show that HWD is more sensitive than the FID to such visual aspects and that it increases linearly with the alteration intensity, thus reinforcing its suitability for evaluating HTG. \begin{table} \begin{tabular}{l l l l l} \hline & **FID** & **KID** & **GS** & **HWD** \\ \hline **HWT** & 23.36 & 1.37\(\times\)10\({}^{-2}\) & **1.05\(\times\)10\({}^{-2}\)** & 1.928 \\ **HIGAN+** & **18.21** & 9.38\(\times\)10\({}^{-3}\) & 2.15\(\times\)10\({}^{-2}\) & 1.237 \\ **VATr** & 18.80 & **7.06\(\times\)10\({}^{-3}\)** & 2.19\(\times\)10\({}^{-2}\) & **0.828** \\ \hline \end{tabular} \end{table} Table 2: Scores of the considered HTG models when generating the same or different words as those in the style reference images of the IAM test set. Best performance in bold. Figure 3: Left to right: comparison between FID, a version of the FID exploiting the Euclidean distance (FID w/ Euclidean), a version of HWD exploiting the Fréchet Distance (HWD w/ Fréchet), and HWD with varying number of samples. The lines denote the mean, and the transparent bands represent the range between the 25th and 75th percentiles, obtained with 10 calculation runs. ## 5 Conclusion In this work, we have proposed HWD, a score specifically designed to evaluate Styled HTG. HWD exploits the features extracted by a convolutional backbone trained on a large synthetic dataset of handwritten text to compare the perceptual differences between handwritings. Moreover, it is designed to work with images of variable lengths, such as those containing text. The results obtained from extensive experimental analysis demonstrate its suitability for evaluating text image generation approaches, its sensitivity to different styles, and its numerical stability. Hopefully, the use of the proposed score, whose implementation and backbone model weights will be made publicly available, will contribute to pushing forward the research on the Styled HTG task. Figure 4: FID and HWD by varying handwriting thickness and slant.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Backbone**} & **Pretraining** & **Image** & \multirow{2}{*}{**Distance**} & \multicolumn{2}{c}{**IAM**} \\ \cline{3-5} & **Dataset** & **Portion** & & **Overlap** & **EER** \\ \hline Inception-v3 & ImageNet & Beginning & Fréchet & 27.1 & 13.6 \\ Inception-v3 & ImageNet & Beginning & Euclidean & 29.6 & 14.5 \\ Inception-v3 & ImageNet & Whole & Fréchet & 24.0 & 11.6 \\ Inception-v3 & ImageNet & Whole & Euclidean & 8.5 & 3.9 \\ \hline Inception-v3 & Font\({}^{2}\) & Beginning & Fréchet & 18.8 & 9.3 \\ Inception-v3 & Font\({}^{2}\) & Beginning & Euclidean & 11.3 & 4.8 \\ Inception-v3 & Font\({}^{2}\) & Whole & Fréchet & 19.0 & 9.1 \\ Inception-v3 & Font\({}^{2}\) & Whole & Euclidean & 7.2 & 3.3 \\ \hline VGG16 & ImageNet & Beginning & Fréchet & 3.2 & 1.6 \\ VGG16 & ImageNet & Beginning & Euclidean & 26.2 & 13.0 \\ VGG16 & ImageNet & Whole & Fréchet & 2.8 & 1.2 \\ VGG16 & ImageNet & Whole & Euclidean & 6.2 & 2.9 \\ \hline VGG16 & Font\({}^{2}\) & Beginning & Fréchet & 3.4 & 1.7 \\ VGG16 & Font\({}^{2}\) & Beginning & Euclidean & 16.5 & 8.2 \\ VGG16 & Font\({}^{2}\) & Whole & Fréchet & 3.5 & 1.6 \\ VGG16 & Font\({}^{2}\) & Whole & Euclidean & **0.7** & **0.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation analysis of the main components of the HWD score. Note that the first row is the FID while the last is the complete HWD. Best performance in bold.
2309.09181
From Cooking Recipes to Robot Task Trees -- Improving Planning Correctness and Task Efficiency by Leveraging LLMs with a Knowledge Network
Task planning for robotic cooking involves generating a sequence of actions for a robot to prepare a meal successfully. This paper introduces a novel task tree generation pipeline producing correct planning and efficient execution for cooking tasks. Our method first uses a large language model (LLM) to retrieve recipe instructions and then utilizes a fine-tuned GPT-3 to convert them into a task tree, capturing sequential and parallel dependencies among subtasks. The pipeline then mitigates the uncertainty and unreliable features of LLM outputs using task tree retrieval. We combine multiple LLM task tree outputs into a graph and perform a task tree retrieval to avoid questionable nodes and high-cost nodes to improve planning correctness and improve execution efficiency. Our evaluation results show its superior performance compared to previous works in task planning accuracy and efficiency.
Md Sadman Sakib, Yu Sun
2023-09-17T07:09:16Z
http://arxiv.org/abs/2309.09181v1
From Cooking Recipes to Robot Task Trees - Improving Planning Correctness and Task Efficiency by Leveraging LLMs with a Knowledge Network ###### Abstract Task planning for robotic cooking involves generating a sequence of actions for a robot to prepare a meal successfully. This paper introduces a novel task tree generation pipeline producing correct planning and efficient execution for cooking tasks. Our method first uses a large language model (LLM) to retrieve recipe instructions and then utilizes a fine-tuned GPT-3 to convert them into a task tree, capturing sequential and parallel dependencies among subtasks. The pipeline then mitigates the uncertainty and unreliable features of LLM outputs using task tree retrieval. We combine multiple LLM task tree outputs into a graph and perform a task tree retrieval to avoid questionable nodes and high-cost nodes to improve planning correctness and improve execution efficiency. Our evaluation results show its superior performance compared to previous works in task planning accuracy and efficiency. ## I Introduction Robotic cooking has emerged as a highly promising domain within robotics, presenting notable advantages such as convenience and the potential for enhanced efficiency and precision in meal preparation. To effectively automate cooking tasks, the key component is efficient task planning. This entails generating a series of actions guiding the robot in accomplishing a specific goal. However, this is an intricate field of research due to the fact that cooking tasks typically involve lengthy sequences of actions encompassing various ingredients and tools. Moreover, they necessitate the attainment of numerous crucial ingredient states throughout the process. Additionally, the cooking conditions, processes, and requirements are exceptionally diverse. Approaches like state-space planning, learning from demonstration, and even knowledge network retrieval encounter difficulties when confronted with unseen starting conditions and requests. In cooking tasks, ingredients or objects can vary in form, shape, and size, and there are multiple states to consider during recipe execution. The manipulation of an ingredient depends on its specific state, and certain ingredients may not be readily available in the desired state. Additionally, robots have varying capabilities, making some actions easier for them to perform than others. A task planning method should consider these factors and propose a plan that is most suitable for the robot to execute efficiently. Previous work has created a knowledge network consisting of 140 cooking recipes called the Functional Object-Oriented Network (FOON) [1, 2]. However, generating plans in novel scenarios where FOON lacks knowledge about the recipe or an ingredient proved challenging. Furthermore, expanding the knowledge base was difficult due to the reliance on manual annotation. Recently, the emergence of Large Language Models (LLMs) [3, 4, 5] has addressed the limitation of limited knowledge. These LLMs possess the ability to generate "likely" viable solutions for different scenarios and requests. While their results may not always be correct or optimal, their notable capacity for generalization can help overcome the limitations of search-based task tree retrieval methods. The search-based retrieval approach with a comprehensive knowledge network on the other hand can detect, remove and replace the wrong elements in the LLM outputs. 
The primary focus of this research paper is to tackle the task planning challenge in robotic cooking through the introduction of an innovative task tree generation approach (Figure 1). We aim to generate a task plan that is both error-free and cost-effective. To enhance the accuracy of the task plan, we employ a method that involves detecting incorrect components within the task trees generated by GPT-3 and search for alternative options either within other task trees or within the FOON knowledge graph. This approach allows us to improve the overall quality and reliability of the generated task plan. By carefully selecting the most optimized plan from these alternatives, the pipeline ensures effective resource utilization while achieving the desired objectives. The effectiveness of the task tree generation pipeline is evaluated through a comparative analysis with a previous approach. The results demonstrate the superiority of the proposed method, showcasing enhanced task planning accuracy and improved cost-efficiency. Fig. 1: Overview of our approach. Given a meal preparation instruction, the model generates a list of tasks specifically designed for the robot. Our contributions in this paper are as follows: (i) We propose a novel task tree generation approach that accepts any dish of the user's choice and produces a robot task tree with state-of-the-art accuracy and efficiency; (ii) We fine-tune GPT-3 to convert natural language instructions into a task tree structure; (iii) We improve the accuracy of the task plan by detecting incorrect components in the GPT generated task trees and finding alternatives either in other task trees or FOON; (iv) We optimize execution costs by performing weighted retrieval in a mini-FOON combined from multiple GPT outputs or FOON. We demonstrate the superiority of our model through a comparison with a previous approach. ## II Background ### _Functional Object-Oriented Network_ FOON and related knowledge graphs have been used in many tasks for robots, such as robotic cooking [6] and furniture assembly [7, 8, 9]. The one used here is a knowledge graph constructed through manual annotation of video demonstrations. It consists of two types of nodes: object nodes and motion nodes. These nodes are connected by directed edges, which depict the preconditions and effects of actions. The functional unit is the fundamental building block of FOON, representing a single action observed in the video demonstration. It consists of one or more input nodes, one or more output nodes, and a single motion node. The input nodes specify the required state of objects before the action, while the output nodes describe the resulting state after the action is executed. The motion node represents the action itself. Functional units provide a detailed and vivid representation of the actions observed in the video demonstrations. Figure 2 shows two functional units of slicing an onion and placing onion to cooking pan. The current FOON dataset (available in [10]) consists of 140 annotated recipes sourced from platforms such as YouTube, Activity-Net [11], and EPIC-KITCHENS [12]. #### Ii-B1 Task Planning with FOON The utilization of FOON as a knowledge base for task planning offers several advantages, including the ability to provide recipe variations. Task planning with FOON involves searching the network to find a goal node and retrieving a path, referred as a task tree, that leads to achieving the desired objective. 
A task tree consists of a sequence of functional units that need to be executed in order to prepare the dish. To illustrate, consider the task tree associated with boiling water, which comprises actions such as placing a pot on the stove, pouring water, turning on the stove, and turning off the stove. Each of these procedural steps is represented by input object nodes, signifying the prerequisites for executing the action; a motion node, denoting the action itself; and output object nodes, denoting the effect of executing the action. The task tree retrieval algorithm proposed in [1] focuses on finding a path that utilizes only the ingredients available in the kitchen. On the other hand, [13] retrieves a plan that can be executed with human-robot collaboration. Nevertheless, these approaches have a limitation when it comes to generating a plan for a recipe that is not explicitly available in FOON. For example, if a user asks for a plan to prepare a mango milkshake, but there is no dedicated recipe for it in FOON, the system may be unable to provide a plan, even if there is a recipe for a banana milkshake. To address this limitation, a novel task tree retrieval method [14] was introduced that can learn from similar recipes in FOON and make necessary modifications to match the user's requirements. While this approach introduces some level of generalization, the quality of the generated plan heavily relies on the availability of closely matched recipes in FOON. In this work, we leverage LLMs to overcome this dependency on closely matched recipes and generate high-quality task trees for any recipe, thereby enhancing the flexibility and effectiveness of the task planning process. Fig. 2: Two functional units from FOON depicting slicing an onion and placing it into the cooking pan. Object and motion nodes are denoted by green circles and red squares, respectively. ### _Related Works_ In the domain of robotic cooking and task planning, several strategies have been proposed to tackle the challenges associated with generating effective action sequences for executing user instructions. One prominent approach involves the use of knowledge graphs to address this challenge. Notably, the KNOWROB framework [15, 16] has made significant contributions in this area by leveraging a knowledge base constructed from data collected in sensor-equipped environments. [17] introduced a task generalization scheme that relaxes the requirement of having multiple task demonstrations to perform tasks in unknown environments. This scheme integrates the task plan with a knowledge graph derived from observations in a virtual simulator. The impact of knowledge graphs on a robot's decision-making process was further investigated in [18]. However, these approaches heavily rely on the limited information contained in their respective knowledge bases. In contrast, our approach harnesses the power of Language Models (LLMs) to alleviate the burden of creating a knowledge base, offering a more comprehensive and flexible solution. Recently, task planning with LLMs has become a prominent area of research, capitalizing on the impressive language understanding and generation capabilities of LLMs. Various studies have explored the use of LLMs to generate step-by-step plans for long-horizon tasks. For instance, Erra et al. [19] proposed an approach that employs LLMs to generate plans for complex tasks. [20, 21, 22] have also utilized LLMs for plan generation in different domains.
However, these works often do not explicitly consider the robot's capability to perform specific actions. One limitation of relying solely on LLMs is the lack of interaction with the environment. To address this limitation, SayCan [23] introduced a framework that combines the high-level knowledge of LLMs with low-level skills, enabling the execution of plans in the real world. By grounding LLM-generated plans with the robot's capabilities and environmental constraints, SayCan bridges the gap between language-based planning and physical execution. In addition, recent research efforts such as Text2Motion [24] and ProgPrompt [25] have integrated LLMs with learned skill policies. They place trust in the LLM-generated plan and proceed to execute it, whereas we focus on enhancing the LLM's accuracy to generate an optimal task plan. ## III Proposed Method Our objective is to develop a robust pipeline that generates highly accurate and executable task trees for robotic operations. To achieve this, we employ a multi-step approach that leverages the capabilities of LLMs and FOON. Initially, we utilize ChatGPT [26] to generate a recipe based on the user's meal specifications. However, the output is in natural language, which may pose challenges for direct robot execution. To address this, we employ a fine-tuned GPT-3 model to convert the recipe instructions into a task tree format. Due to the uncertainty of the generative model, the task plan may not always be correct or most efficient. To enhance reliability and efficiency, we look for alternative options in other task trees generated by GPT-3 or in FOON. From these alternatives, the selected task tree is expected to be accurate and easier for the robot to execute. A visual representation of our pipeline and its key components is presented in Figure 3. In the following subsections, we will provide detailed explanations of each component. ### _Prompt engineering for recipe generation_ Our system is designed to accommodate dish specifications provided by the user. The user can specify a list of desired ingredients or exclude certain ingredients. Additionally, specifications such as gluten-free, vegetable-based, or non-dairy options are also accepted. Based on this information, we engineer a prompt and retrieve the recipe from ChatGPT. To facilitate easier parsing, we have designed the prompt to include numbered instructions within the response from ChatGPT. ### _Converting instructions to a task tree_ When a robot performs an action, several factors need to be considered, such as preconditions, effects, and the objects involved. Additionally, understanding the state of these objects is crucial for determining the appropriate grasp or manipulation technique. However, extracting all this information from a recipe written in natural language poses significant challenges. To address this complexity, we propose translating the instructions into structured functional units that encapsulate all the necessary details. By organizing these functional units into a task tree, we provide a step-by-step guide for the robot to execute the task effectively. To accomplish this, we have created a dedicated dataset for fine-tuning a GPT-3 Davinci model. This model takes a recipe as input and translates it into a task tree representation. The dataset comprises 180 recipe examples sourced from FOON, each consisting of natural language instructions and a corresponding FOON task tree.
Due to the limitation of maximum token count, some recipes had to be divided into multiple parts, resulting in multiple task trees for a single recipe. Fig. 3: Overview of our task tree generation procedure. Starting with a meal specification as the query, our pipeline generates a task plan represented as task tree 7. ### _Creating a mini-FOON_ To address the potential presence of errors in the task plans generated by the fine-tuned model, we adopt a strategy of generating multiple task trees for the same recipe. Our aim is to search the combined graph, the mini-FOON, for a task tree that is both error-free and efficient for the robot to execute. FOON has revealed that merging recipes in a graph structure can lead to the emergence of novel cooking methods. This merging process allows recipes to share information and learn diverse approaches for accomplishing subtasks. Inspired by this idea of exploring new paths, we employ a similar graph structure to merge the five task trees generated by GPT. This merged structure is referred to as a mini-FOON. #### Iii-C1 Merging task trees During the merging process, our objective is to eliminate any incorrect functional units and remove duplicates. An incorrect functional unit can arise in two ways: (i) a syntax error and (ii) an erroneous object-action relationship. Syntax verification involves checking whether the functional unit includes the necessary components, such as input and output objects, as well as a motion node. Additionally, it verifies if each object has an assigned state. On the other hand, validating the object-action relationship poses the challenge of determining if the state transition for an action is correct. To tackle this challenge, we compiled a comprehensive list of all valid state transitions from FOON. Based on this list, we can assess the correctness of a transition. For instance, if a transition such as "sliced \(\rightarrow\) whole" is not present in FOON, it would be identified as incorrect. Functional units that successfully pass the verification criteria are then added to the mini-FOON. ### _Creating a super-FOON_ We integrate the mini-FOON with the original FOON, forming a combined network known as the super-FOON. During this merging process, our primary focus is on node consolidation, as the mini-FOON and FOON may use different names for the same object or motion node. To achieve consolidation, we follow a set of basic rules. For instance, we convert all object names to their singular form. We observed that GPT-3 often generates plural forms such as "strawberries" and "onions," while FOON represents them as "strawberry" and "onion" respectively. By applying these rules, we try to ensure consistency and compatibility between the node names in the mini-FOON and FOON within the super-FOON network. ### _Task tree retrieval_ Taking the desired dish as the goal node, we employ a search procedure similar to [13] to retrieve all paths leading to the goal. We execute the same search algorithm in both the mini-FOON and super-FOON. This approach often yields multiple task plans, exceeding five in number, which may differ in the number of cooking steps involved. For instance, when preparing a banana milkshake, one plan may suggest adding the whole peeled banana to the blender, while another plan may propose cutting the banana in half before blending. Once the incorrect functional units have been filtered out, the task tree retrieval procedure does not select them.
Instead, it prioritizes the available correct functional units to construct the task plan. For instance, if the functional unit for "slicing an apple" is found to be incorrect in the first generated tree but correct in the other four task trees, the search procedure will choose the functional unit of slicing an apple from those four task trees. From the generated plans, we must select the most feasible one for the robot to execute. The feasibility of executing an action depends on the robot's configuration. For example, a robot with only one hand may find pouring easier than chopping. Consequently, the success rate of executing a task tree varies among different robots. Following the approach of [13], we assign a cost value ranging from 0 to 1 to each action. These values are determined by three factors: 1) the physical capabilities of the robot, 2) its past experiences and ability to perform actions, and 3) the tools or objects that the robot needs to manipulate. A higher cost value indicates a more challenging action to execute. Based on these costs, we select task tree 6 from the mini-FOON and task tree 7 from the super-FOON. Ideally, task tree 7 should never be worse than task tree 6 since the super-FOON encompasses all the task trees from the mini-FOON. Task tree 7 serves as the final output of this pipeline. Figure 4 illustrates an example of cost optimization using the super-FOON, where two pouring actions are preferred over scooping due to the significantly lower cost assigned to pouring (0.1) compared to scooping (0.4). We assigned a low cost to pouring based on the successful pouring accuracy achieved by Huang et al. [27] with a robot. ## IV Experiments and Results Our experiment aims to assess both the quality of the generated task trees and the associated execution costs. Simultaneously, we seek to compare the performance of our model in generating recipes across different dish categories. To accomplish this, we curated a dataset consisting of 60 randomly selected recipes from the Salad, Drink, and Muffin categories. These recipes were extracted from Recipe1M+ [28], a comprehensive collection of over one million recipes encompassing a wide range of dish types and ingredients. ### _Evaluation Metric_ Validating the plan of a cooking task in an automated manner is challenging due to the absence of a fixed method for preparing a dish. Two task plans for the same dish can differ in their cooking approaches, yet both can be deemed correct. As a result, manual verification becomes necessary. However, the original format of a task tree can be difficult for humans to comprehend. To address this, we convert the task trees into progress lines as used in [14] to illustrate how the ingredients are manipulated and undergo changes throughout the cooking process. This simplified visualization facilitates the detection of errors in the task plan by humans. We consider a recipe correct if the progress lines for all ingredients used in the recipe are accurate. An example of progress lines for a Greek Salad recipe is provided in Figure 5. ### _Task Planning Accuracy_ We employed four different methods to generate task trees for the selected recipes. The quality of the generated trees was assessed using the progress line, and the corresponding accuracy results are shown in Figure 6. When relying solely on FOON, the task trees obtained for Salad and Drink recipes exhibited good quality. This was expected as FOON contained an ample number of recipes (10 each) for these categories. 
However, for Muffin recipes, the quality of the generated task trees suffered due to the scarcity of available examples in FOON (only one recipe). The FOON-search based approach heavily depends on finding a similar recipe in FOON as a reference for making necessary modifications to the task plan. Consequently, a high number of adjustments were required, leading to inaccuracies in the task plan. In the case of the fine-tuned GPT-3 model, errors in functional units frequently resulted in task plan failures. However, the introduction of the Mini-FOON helped mitigate these errors by providing a wider range of alternatives to achieve the desired objectives. Integrating FOON into our approach enabled us to choose a path from a broader set of options, resulting in higher accuracy. Compared to [14], our approach achieved a 4% higher accuracy for Salad, 6% higher accuracy for Drink, and a significant 45% higher accuracy for Muffin recipes. Notably, our fine-tuned model demonstrated good accuracy for Muffin recipes, despite not being specifically trained on this particular dish. This highlights the significant advantage of employing an LLM. Once the LLM is fine-tuned to comprehend the structure of a task tree, it can effectively generalize to various types of recipes. ### _Execution cost_ The objective of this experiment is to evaluate the extent to which our approach can optimize the execution cost of recipes. If a recipe cannot be optimized, it implies that there are no superior alternatives in FOON compared to the initial output generated by the fine-tuned model (task tree 1). In Figure 7, we present the number of optimized recipes by generating different numbers of task trees. When the number of task trees is 2, and we select the plan with the lower cost, it yields a better solution in 5% of the cases. Similarly, by gradually increasing the number of task trees up to 5 and selecting the one with the minimum cost, we obtain a better solution in 15% of the cases. More optimization occurs when we choose task tree 6 from the Mini-FOON, as it Fig. 4: Example of cost optimization: Comparison between task trees retrieved from the mini-FOON and super-FOON. The assigned costs for scooping, pouring, and mixing are 0.4, 0.1, and 0.1 respectively. (a) The task tree from the mini-FOON (b) The task tree from the super-FOON. Fig. 5: Progress lines for a Greek Salad recipe. Fig. 6: Comparison of different approaches’ accuracy on Salad, Drink, and Muffin dishes. combines subtasks from five different task trees, resulting in a lower cost. Ultimately, task tree 7, the final output from our pipeline, maximizes the advantages of FOON and minimizes the execution cost compared to task tree 1 in 40% cases. ## V Discussion ### _Finetuning a GPT-3_ We examined how the model's understanding of the task tree structure improves with the addition of new training data (Figure 8). Initially, the training began with a dataset consisting of only 30 examples. Consequently, the model struggled to grasp the syntax of functional units, resulting in grammatical errors in the generated functional units. For instance, it would include multiple motion nodes within a single functional unit, whereas, according to the definition, a functional unit should contain only one motion node. As we increased the number of recipes, the model gradually reduced its syntactical errors. However, it still exhibited logical mistakes, such as incorrect state transitions or missing actions. 
Finally, after finetuning with 180 examples, the model achieved an accuracy of 67%. ### _Executing a task tree_ A task tree provides a high-level plan that lacks interaction with the environment. However, executing actions often requires additional information, such as the geometric position of objects or the initial quantity of ingredients in a container. For instance, a task plan might involve adding ice to an empty glass, but the glass could be positioned upside down on a table. Therefore, before pouring the ice, the glass would need to be rotated back to its original position. This crucial step is missing in our high-level planning. Hence, there is a need for hierarchical planning, where the task tree can be converted into a low-level plan that can be executed in the real world. Paulius et al. [29] proposed a method to convert a task tree into a representation using Planning Domain Definition Language (PDDL) [30]. Each functional unit is treated as a planning operator, and a plan is generated based on the robot's low-level motion primitives. ### _Limitations of our approach_ (i) The generation of a task tree involves making 5 API calls to the fine-tuned model. Each API call takes approximately 5 seconds, resulting in a slow pipeline. The focus of this research was not on time complexity. In the future, if we aim to enhance the system's speed, it may be necessary to explore fine-tuning locally installed LLMs. (ii) The generated plan sometimes introduces new names for ingredients, states or motions such as garnish. These unknown labels in functional units pose a challenge when attempting to find alternative options in FOON, as proper mapping to existing functional units becomes difficult. Furthermore, the detection of incorrect transitions is also hindered, as the possible transition list may not include these new labels. (iii) A fine-tuned GPT-3 model has a limitation where the combined query and completion cannot exceed 2048 tokens. Due to this constraint, generating a task tree becomes challenging when dealing with complex recipes that require a higher number of functional units. ## VI Conclusion In this study, our objective was to propose a novel pipeline for task tree generation, leveraging the advantages offered by LLMs. We utilized ChatGPT to respond to user queries, and then fine-tuned a GPT-3 model to convert the response into a task tree representation. To enhance the accuracy and execution cost of the task tree, we integrated the output of the fine-tuned model with FOON, exploring multiple possibilities to achieve the desired objectives. Through our experiments, we demonstrated its superior performance, highlighting its remarkable generalization capabilities. In future, we intend to focus on addressing the challenges of task tree correction and re-planning in cases of planning or execution failures. It is worth noting that our pipeline exhibits a high degree of flexibility, allowing for the seamless substitution of GPT and FOON with more advanced Language Models or knowledge networks. We aim to incorporate image inputs into our system by utilizing the newly released GPT-4, which can handle both textual questions and accompanying images. This would allow users to upload images of dishes and inquire about their preparation methods. Fig. 8: Impact of training dataset size on model accuracy. Fig. 7: Number of recipes that were optimized by generating varying numbers of task trees in comparison to Task Tree 1 (generated by the fine-tuned model).
2305.00516
Dissipative Soliton Resonance: Adiabatic Theory and Thermodynamics
We present the adiabatic theory of dissipative solitons (DS) of complex cubic-quintic nonlinear Ginzburg-Landau equation (CQGLE). Solutions in the closed analytical form in the spectral domain have the shape of Rayleigh-Jeans distribution for a positive (normal) dispersion. The DS parametric space forms a two-dimensional (or three-dimensional for the complex quintic nonlinearity) master diagram connecting the DS energy and a universal parameter formed by the ratio of four real and imaginary coefficients for dissipative and non-dissipative terms in CQGLE. The concept of dissipative soliton resonance (DSR) is formulated in terms of the master diagram, and the main signatures of transition to DSR are demonstrated and experimentally verified. We show a close analogy between DS and incoherent (semicoherent) solitons with an ensemble of quasi-particles confined by a collective potential. It allows applying the thermodynamical approach to DS and deriving the conditions for the DS energy scalability.
Vladimir L. Kalashnikov, Alexander Rudenkov, Evgeni Sorokin, Irina Sorokina
2023-04-30T16:05:17Z
http://arxiv.org/abs/2305.00516v3
# Dissipative Soliton Resonance: Adiabatic Theory and Thermodynamics ###### Abstract We present the adiabatic theory of dissipative solitons (DS) of complex cubic-quintic nonlinear Ginzburg-Landau equation (CQGLE). Solutions in the closed analytical form in the spectral domain have the shape of Rayleigh-Jeans distribution for a purely real quintic nonlinearity. The DS parametric space forms a two-dimensional (or three-dimensional for the complex quintic nonlinearity) master diagram connecting the DS energy and a universal parameter formed by the ratio of four real and imaginary coefficients for dissipative and non-dissipative terms in CQGLE. The concept of dissipative soliton resonance (DSR) is formulated in terms of the master diagram, and the main signatures of transition to DSR are demonstrated and experimentally verified. We show a close analogy between DS and incoherent (semicoherent) solitons with an ensemble of quasi-particles confined by a collective potential. It allows applying the thermodynamical approach to DS and deriving the conditions for the DS energy scalability. **Keywords:** complex cubic-quintic nonlinear Ginzburg-Landau equation, dissipative soliton resonance, dissipative soliton thermodynamics ## 1 Introduction Many recent scientific breakthroughs in various fields were made possible by using ultrashort pulse lasers and understanding how a dissipative soliton (DS) forms and works. DS is a stable and localized pattern with different levels of coherence, which arises in a nonlinear system far from equilibrium due to energy loss or gain. The DS concept applies in diverse scientific areas, such as cosmology, optics, physics, biology, and medicine [1, 2, 3]. Due to the nonequilibrium of a system, DS needs to exchange energy with the environment in a well-organized way. This energy flow shapes the internal structure of DS, which allows the energy to be redistributed within it. In this sense, a DS is a simple version of a cell. A complex internal structure of DS affects its behavior and can lead even to turbulence that links DS to a family of incoherent or semicoherent solitons [4, 5]. The variety of phenomena that optical DS can mimic, such as turbulence, noise, and rogue waves [6, 7], makes them useful for studying nonlinear systems and thermodynamics far from equilibrium. Moreover, they offer us powerful and flexible methods for simulating, computing, and analyzing large and rare data sets that can be applied to different fields of science, technology, and medicine. Moreover, DS phenomenology allows us to use powerful and adjustable methods for metaphorical computing and modeling processes from distant fields of science [8]. Despite the evident fact that DS is a classical field structure due to the large \(k-\)mode occupation number \(n_{k}\gg 1\) and strong entanglement with an environment, the non-trivial DS composition enhanced by spectral-temporal condensation and the resonant enhancement of sensitivity to perturbations throw a bridge across microscopic and mesoscopic physics and put a question about the quantum theory of DS [9, 10]. The latter is especially important in view of the close analogy between coherent structures in photonics and Bose-Einstein condensate (BEC) [11, 12, 13, 14, 15, 16]. A scalable formation of coherent condensate phase in the DS form was named _dissipative soliton resonance_ (DSR) [17]. 
The theoretical workhorse in the above-mentioned endeavors was the _complex nonlinear Ginzburg-Landau equation_[18, 19, 20, 21, 22], which is akin to the _nonlinear Schrodinger equation_[23, 24], so that some terminological confusion could arise. It is possible to divide these terms conditionally based on the object under consideration: DS solutions of the complex nonlinear Ginzburg-Landau equation considered in this work have no non-dissipative limit, i.e., they exist in the non-soliton sector of the nonlinear Schrodinger equation, where \(\gamma/\beta<0\), with \(\gamma\) and \(\beta\) being the coefficients of the imaginary terms characterizing nonlinearity and kinetic (dispersion, diffraction, or kinetic energy) parameters in a system, respectively. The same concerns the _Gross-Pitaevskii equation_, which is actively exploited in the studies on BEC 1. In this work, we consider a region of normal group-delay dispersion (GDD in an optical context). The nonlinear gain and spectral filtering [26] (or viscous friction [27]) are absolutely necessary for the existence of this type of DS. Footnote 1: There is inexhaustible literature on this topic. Therefore, we limit ourselves to a single review citation [25]. The article is organized in the following way. First, we expose the adiabatic theory of DS of the complex cubic-quintic nonlinear Ginzburg-Landau equation (CQGLE) and briefly characterize their properties with a focus on the DS spectra, which have a shape of the Rayleigh-Jeans distribution in the simplest case. Then, the DS parametric space is formulated in terms of the master diagram, which is two-dimensional for the reduced CQGLE and three-dimensional for the complete CQGLE. The concept of _dissipative soliton resonance_ (DSR) [17] is formulated using the adiabatic theory. Finally, we consider, in short, the thermodynamics of the strongly chirped DS using the ideology of the incoherent (semicoherent) soliton theory [4].

## 2 Adiabatic theory of dissipative solitons

Let us consider the (1+1)-dimensional CQGLE, which describes an evolution of the field envelope \(a(z,t)\) in the following form [6, 22]: \[\frac{\partial}{\partial z}a(z,t)=-\Sigma a(z,t)+\left(\alpha+ \mathrm{i}\beta\right)\frac{\partial^{2}}{\partial t^{2}}a(z,t)+\] \[\qquad+\left(\kappa-\mathrm{i}\gamma\right)Pa(z,t)-\left(\kappa \zeta+\mathrm{i}\chi\right)P^{2}a(z,t)\,. \tag{1}\] Here, we consider \(z\) as an evolution coordinate (e.g., propagation distance in a laser/waveguide or time in BEC), and \(t\) is the local (co-moving) time coordinate (or spatial coordinate in a planar waveguide or BEC). \(P=|a\left(z,t\right)|^{2}\) in Eq. (1). The \(\beta\)-term describes an action of GDD. The anomalous GDD \(\beta<\)0 corresponds to the diffraction term for a planar waveguide or the kinetic energy of bosons. Below, we will consider the case of \(\beta>\)0 (_normal GDD_) that breaks the above analogy between temporal phenomena in optics and spatial phenomena in the waveguide and condensed-matter physics. The nonlinear non-dissipative terms \(\gamma>\)0 and \(\chi\) describe the self-phase modulation (SPM) (self-focusing or attracting boson interaction in the spatial domain), which is saturable (\(\chi<\)0) or growable (\(\chi>\)0) with power. In a laser, the quintic nonlinear term \(\chi\) can appear, for instance, due to the mode size variation caused by the self-focusing. The dissipative terms in Eq.
(1) describe: \(\sigma\) - a saturated net-loss defined by interaction with a finite basin causing loss and gain, which is saturated by the full field energy \(\int|a|^{2}\,dt\). \(\alpha\) - a spectral dissipation ("kinetic cooling" [15]). In a laser, this parameter equals the squared inverse bandwidth of a spectral filter, which is formed by the finite gain bandwidth of the active medium, spectral filters, mirror coatings, etc. \(\kappa\) and \(\zeta\) - the saturable nonlinear gain (self-amplitude modulation, SAM) providing excessive but top-bounded (\(\zeta>0\)) gain for the higher peak power signal over noise. The particular exact soliton-like solution of (1) is known and explored [28, 29, 30]2. It can be written in the following form: Footnote 2: The solution of a dissipation-free version of (1) was presented in [31]. \[a(z,t)=\sqrt{\frac{\mathfrak{A}}{\cosh(t/T)+\mathfrak{B}}}\exp[-i\psi/2\ln( \cosh(t/T))-iqz], \tag{2}\] with real parameters \(\{\mathfrak{A},\ \mathfrak{B},\ T,\ q\}\in\Re\). The new insights into the CQGLE world could be provided by the approximated methods based on the perturbative method [32, 33], Lagrangian approach, and method of moments [34, 35], etc. Here, we will build on the _adiabatic theory of the strongly chirped DS3_ based on the following propositions: Footnote 3: This theory was first developed in [36], and its further applications can be found in [37, 38, 39, 40]. A similar approach was suggested in [41]. **Proposition 1**.: _The nondissipative terms dominate strongly over the dissipative ones in Eq. (1): \(\alpha/\beta\ll 1\wedge\kappa/\gamma\ll 1.\)_ One must note that the first two conditions do not require the quasi-homogeneous approximation \(L_{nl}\ll L_{l},\) where \(L_{nl}\propto 1/\left\{\gamma,\kappa\right\}\) and \(L_{l}\propto 1/\left\{\alpha,\beta\right\}\) are the effective nonlinear, and linear lengths in (1), respectively [4]. However, as it will be shown below, the _large DS chirp_\(\psi\) (i.e., DS phase inhomogeneity) could play a role of the "paraxial approximation" [5] connecting the characteristic "correlation lengths" in the time (\(T\)) and spectral (\(\Delta\)) domains: \(T\Delta\simeq\psi\gg 1.\) One has to note that the perturbative analysis of the soliton-like solutions of CQGLE under the conditions of this Proposition was considered in [32]. **Proposition 2**.: \(C\equiv\frac{\alpha}{\beta}\times\frac{\gamma}{\kappa}\simeq 1.\)__ This Proposition means proximity to the _soliton_ or _potential condition_[42] (although the sign before \(\beta\) in (1) is inverse relatively that for a familiar nonlinear Schrodinger equation!). This could allow conjecturing that the steady-state probability distribution for a partially coherent DS is Gibbs-like. We note that the last conjecture is not a pre-assumption for further analysis, but, as it will be shown below, it means proximity to the _dissipative soliton resonance_ (DSR) condition 4. Footnote 4: The definition of DSR will be given below. **Proposition 3**.: _Adiabatic approximation: field envelope \(\sqrt{P(t)}\) evolves with \(t\) slowly in comparison with the instant phase \(\varphi\left(t\right)\) change._ In this Proposition, we assume the standard traveling wave ansatz: \[a(z,t)=\sqrt{P(t)}\,\mathrm{e}^{\mathrm{i}\varphi(t)-\mathrm{i}qz}, \tag{3}\] where \(P(t)\) is a slowly-varying DS power, \(\varphi\left(t\right)\) is an instant phase, and \(q\) is a wavenumber (propagation constant). 
Formally, this means that DS is "long" in comparison with the characteristic scale \(\sqrt{\beta}\) so that one may omit the terms \(\propto\frac{d^{2}\sqrt{P}}{dt^{2}}\) after substitution of (3) into (1). After such a substitution and using the first and third propositions, one has5: Footnote 5: We follow the calculations in [37], and the corresponding algebra can be found in [43]. \[\beta\Omega(t)^{2}=q-P(t)(\gamma+P(t)\chi), \tag{4}\] \[P\left(t\right)\kappa-P\left(t\right)^{2}\kappa\zeta=\sigma+ \alpha\Omega\left(t\right)^{2}+\frac{\beta\Omega\left(t\right)\frac{d}{dt}P \left(t\right)}{P(t)}+\beta\frac{d}{dt}\Omega\left(t\right), \tag{5}\] where \(\Omega(t)=d\varphi(t)/dt\) is an instant frequency deviation. ### DS having a \(\boldsymbol{\chi\to 0}\) limit Eq. (4) allows us to obtain the expressions for the DS envelope \(P(t)\): \[P(t) = \frac{1}{2}\frac{-\gamma+\sqrt{\gamma^{2}+4\chi\left(q-\beta\Omega \left(t\right)^{2}\right)}}{\chi}, \tag{6}\] \[P(t) = -\frac{1}{2}\frac{\gamma+\sqrt{\gamma^{2}+4\chi\left(q-\beta\Omega \left(t\right)^{2}\right)}}{\chi}. \tag{7}\] Eq. (6) has the limit of \(\chi\to 0\) corresponding to DS of the reduced CQGLE: \(\gamma P(t)=q-\beta\Omega\left(t\right)^{2}\)[36], and we will concentrate on the solution (6) below. The temporal localization of DS, i.e., \(\lim_{t\rightarrow\pm\infty}P(t)=0\), gives the maximal instant frequency, that is the _cut-off frequency_\(\Delta\): \[\Delta^{2}=q/\beta. \tag{8}\] This expression and Proposition 2 expose that the DS considered by us belongs to the normal GDD range \(\beta>0\). Eqs. (6,8) allow excluding \(P(t)\) from (5) that leads to the expression for _instant frequency deviation_: \[\frac{d}{dt}\Omega\left(t\right)=-\frac{\left(\sigma+\alpha \Omega\left(t\right)^{2}+\frac{\kappa}{4\chi^{2}}\left(\gamma-A\right)\left(2 \chi+\zeta\cdot\left(\gamma-A\right)\right)\right)\left(\gamma-A\right)A}{ \beta\left(4\chi\beta\Omega\left(t\right)^{2}+\left(\gamma-A\right)A\right)},\] \[A=\sqrt{\gamma^{2}+4\beta\chi\left(\Delta^{2}-\Omega\left(t \right)^{2}\right)}\ \ \&\ \ \Omega(t)^{2}\leq\Delta^{2}. \tag{9}\] Then, the cut-off frequency \(\Delta\) can be obtained after some algebra from Eqs. (4,5,6,9): \[\frac{\zeta\beta}{\gamma}\Delta^{2}=\frac{\left(\frac{2\left(3+\frac{\left(C+ 4\right)}{b}\right)\left(2+\frac{\left(C+3b\right)}{2}\pm\sqrt{\left(2-C\right) ^{2}-16\Sigma\left(\frac{C}{b}+1\right)}\right)}{\frac{C}{b}+1}-3\left(C+3b \right)-\frac{32\Sigma}{b}-12\right)}{16\left(\frac{C}{b}+1\right)}, \tag{10}\] where the new combined parameters are introduced: _control parameter_\(C=\alpha\gamma/\beta\kappa\) (see Proposition 2), _relative quintic parameter_\(b=\gamma\zeta/\chi\), and _composite net-loss parameter_\(\Sigma=\sigma\zeta/\gamma\). _The \(\pm\) signs in Eq. (10) denote two branches of DS solutions_. The crucial characteristic of these branches is their stability against a vacuum excitation, which means \(\sigma\geq 0\) in Eq. (1). For the \((+)\)-branch, the squared dimensionless cut-off frequency \(\Delta^{\prime 2}=\zeta\beta\Delta^{2}/\gamma\) on the stability threshold \(\sigma=0\) \[\Delta^{\prime 2}=\frac{1}{4}\frac{bC\left(2-c\right)\left(C+3b+4\right)}{ \left(C+b\right)^{2}} \tag{11}\] is shown in Fig. 1, where the existence range is defined as \(\mathbf{C}\in]\mathbf{0},\mathbf{2}]\) & \(\mathbf{b}>0\bigcup b<-\mathbf{C}/\mathbf{3}-\mathbf{4}/\mathbf{3}\). The \((-)\)-branch is detached from the unstable vacuum within these regions in the sense that \(\sigma>0\) for it. 
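As a quick numerical illustration of Eq. (11), the minimal sketch below (our own variable names, Python used purely for illustration) evaluates the dimensionless squared cut-off frequency of the \((+)\)-branch on the stability threshold \(\Sigma=0\).

```python
import numpy as np

# Minimal sketch (our notation): squared dimensionless cut-off frequency of the
# (+)-branch on the vacuum-instability threshold Sigma = 0, Eq. (11).
def cutoff_sq_plus_branch(C, b):
    # Existence range: 0 < C <= 2 with b > 0 (or b < -C/3 - 4/3).
    return 0.25 * b * C * (2.0 - C) * (C + 3.0 * b + 4.0) / (C + b) ** 2

# Example: scan the control parameter C for a weak imaginary quintic term (large b).
for C in np.linspace(0.2, 2.0, 5):
    print(f"C = {C:.2f}:  Delta'^2 = {cutoff_sq_plus_branch(C, b=20.0):.3f}")
```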
The \((-)\)-branch of the solution (10) can be "connected" with the region of unstable vacuum in the sense that it has a marginally stable limit \(\sigma\to 0\) (Fig. 2). Its existence domain corresponds to the "enhancing" self-phase modulation, i.e., \(b<0\): \[\Delta^{\prime 2}=-\frac{1}{4}\frac{bC\left(c-2\right)\left(C+3b+4\right)}{ \left(C+b\right)^{2}} \tag{12}\] that lies out of Proposition 2, and there exists no for \(b\rightarrow\infty\), i.e., it is not connected to the reduced cubic-quintic DS with \(\chi\to 0\). Therefore we will not consider it below so that \(b>0\) below except for Fig. (9). The DS branches are divided by a surface (Fig. 3 for a positive \(b\)): \[C_{\pm}=\frac{8\Sigma+2b-4\sqrt{\Sigma\left(b^{2}+4\Sigma+2b\right)}}{b}, \tag{13}\] so that, for \((+)\)-branch, the net-loss parameter is confined within the regions of \(0<\Sigma<\Sigma_{\pm}\) if \(b>0\) & \(\Sigma>\Sigma_{\pm}\) if \(b<-2\). Also, let's note that \(\lim_{c\to 0}\Sigma=0.25\). The dependencies of \(\Delta^{\prime}\) on the control parameter \(0\leq C\leq 2\) and the net-loss \(\Sigma\) for both branches of DS are shown in Fig. 4. These branches coincide on the surface shown in Fig. 3. The \((+)\)-branch has a more significant cut-off frequency that, as it Figure 1: The dimensionless cut-off frequency \(\Delta^{\prime}\) in dependence on the \((C,\;b)\)-parameters for the \((+)\)-branch of DS on the stability threshold \(\Sigma=0\). will be shown later, corresponds to the DS "fine-graining", its chirp growth, and a minimization of the pulse width \(T_{c}\) after its compression by a de-chirping: \(T_{c}\propto 1/\Delta\). As it was noted above, the important parameter characterizing DS is the chirp which we define as \(\Psi=(\beta\gamma/\kappa)\times(d\Omega^{\prime}(t)/dt^{\prime})\) (we use the normalization for time and frequency as above): \[\Psi=-(\Sigma+\frac{b^{2}}{4}(1-\sqrt{1+\frac{4\Delta^{\prime 2}}{b}})(1+\frac{ 2}{b}-\sqrt{1+\frac{4\Delta^{\prime 2}}{b}})), \tag{14}\] where a zero frequency deviation at \(t=0\), i.e., \(d\varphi(t)/dt|_{t=0}=0\), is taken into account in (9). The chirps in the DS center (\(t=0\)) on the stability threshold \(\Sigma=0\) (\("+"\) branch) are shown in Fig. 5 in dependence on \(C\)-parameter and positive \(b\). Interestingly, the central chirp (\(t=0\)) tends to zero for some minimal \(C\) (e.g., for \(C=2/3\) and \(\chi\to 0\)). We assume the large chirp in accordance with Proposition 1. That means a fast variation of the DS phase \(\phi(t)\) with the time that allows applying the stationary phase approximation [36, 37, 44]. One may assume that the Fourier transform of Eq. (3) \[e(\omega^{\prime})=1/\sqrt{2}\int_{-\infty}^{\infty}\sqrt{b(\sqrt{1+4(\Delta^ {\prime 2}-\Omega(t)^{\prime 2})/b\ C}-1)}\exp[i(\varphi(t)-\omega^{\prime}t)]dt \tag{15}\] is dominated by the contribution from the stationary points \(a\) where \(d\varphi(t)/dt|_{t=a}=0\) so that the leading term in the Taylor expansion of the phase is \((d^{2}\varphi(t)/dt^{2}|_{t=a})t^{2}/2\). Figure 2: The dimensionless cut-off frequency \(\Delta^{\prime}\) in dependence on the \((C,\ b)\)-parameters for the \((-)\)-branch of DS on the stability threshold \(\Sigma=0\). Thus, Eqs. 
(15, 9) after some algebra lead to the expression for a complex spectral amplitude: \[e(\omega^{\prime})=\frac{\sqrt{\pi\,b\left(B-1\right)}\,\mathrm{e}^{\frac{1}{B }\left(\left(B-1\right)+4\left(2\omega^{\prime 2}-\Delta^{\prime 2}\right) \right)\omega^{\prime 2}}}{\sqrt{\frac{1Bb\left(\left(B-1\right)\left(\Sigma+C\omega^{ \prime 2}+b+b^{2}\right)-3b\left(\Delta^{\prime 2}-\omega^{\prime 2}\right)-2 \left(\Delta^{\prime 2}-\omega^{\prime 2}\right)+bB\left(\Delta^{\prime 2}-\omega^{ \prime 2}\right)}{(B-1)Cb+4\left(2\omega^{\prime 2}-\Delta^{\prime 2}\right)}}}\,\mathcal{H} \left(\Delta^{\prime 2}-\omega^{\prime 2}\right), \tag{16}\] where \(B=\sqrt{\frac{4\Delta^{\prime 2}+b-4\omega^{\prime 2}}{b}}\) and \(\mathcal{H}\) is a Heaviside function. From Eq. (16), one may obtain the DS spectral profile: \[s(\omega^{\prime})\equiv\left|e(\omega^{\prime})\right|^{2}=\frac{\left(A-1 \right)\pi\left(\left(A-1\right)b+4\left(2\omega^{\prime 2}-\Delta^{\prime 2} \right)\right)\mathcal{H}\left(\Delta^{\prime 2}-\omega^{\prime 2}\right)}{A\left( \left(\left(\Sigma+C\,\omega^{\prime 2}+b+b^{2}\right)+b\left(\Delta^{\prime 2}- \omega^{\prime 2}\right)\right)\left(A-1\right)-2\left(\Delta^{\prime 2}- \omega^{\prime 2}\right)\left(b+1\right)\right)}. \tag{17}\] **Example 1**.: _Eq. (17) has a limit \(b\rightarrow\infty\) (i.e., \(\chi\to 0\)), which is important for further consideration. In the dimensionless form and after factorization (see Appendix), it looks as [36]:_ \[s(\omega^{\prime})=\frac{6\pi\mathcal{H}\left(\Delta^{\prime 2}-\omega^{\prime 2 }\right)}{\Xi^{\prime 2}+\omega^{\prime 2}}, \tag{18}\] **Fig. 3**: The \(\Sigma_{\pm}\)-parameter dividing the \((+)\) and \((-)\) branches of DS in dependence on \((\Sigma,\ b)\). Only the case of \(b>0\) is illustrated. _This spectrum has the form of the Rayleigh-Jeans distribution with a negative chemical potential:_ \[\Xi^{\prime 2}=-\frac{5}{3}\Delta^{\prime 2}+C+1. \tag{19}\] _Such a similarity is not only formal and has substantial consequences (see below, and Refs. [4, 5, 45])._ **Fig. 4**: Dimensionless cut-off frequency \(\Delta^{\prime}\) in dependence on the net-loss \(\Sigma\) and the control parameter \(C\) for both branches of the DS solution: upper/bottom sheets correspond to the \((+)/(-)\)-branches, respectively. \(b=20\) (a) and \(0.1\) (b). ### DS temporal profiles and spectra Unlike the exact solution (2) with the fixed parameters6, the adiabatic approximation provides an approximated solution but without the strict restrictions on the parameters of (1) except for the very broad ones imposed by Propositions. Moreover, the parametric space of the solution based on this approximation has reduced dimensionality (\(C\), \(b\), and \(\Sigma\)7). Footnote 6: fixed in the sense of [29, 30], when the restriction on the four free parameters of Eq. (1) (i.e., \(\tau\), \(\kappa\), \(\zeta\), and \(\chi\) in our notations) are imposed. Footnote 7: \(\Sigma\)-parameter can be considered as irrelevant in some sense because the parametric space topology is defined by \(\Sigma=0\) and \(\Sigma_{\pm}\)-isosurfaces. The adequateness of the considered approach as well as its compliance with that based on the solution (3) (e.g., see [30]), in particular, is demonstrated by a "zoo" of spectral and temporal DS shapes obtained from Eqs. (6,7,9,17) (Fig. 6). We consider only \((+)\)-branch and \(b>0\) (self-enhancing self-phase modulation8). Footnote 8: Self-enhancing self-phase modulation could be interpreted, for instance, in the following way. 
In a Kerr-lens model-locked laser, the mechanism of ultrashort pulse formation is the loss decrease due to a laser beam self-focusing [46]. That means beam squeezing and, thereby, the self-phase modulation growth \(\propto w^{-2}\), where \(w\) is a beam size. The main feature of the approach considered above is that it is built in the spectral domain. Therefore, the spectral shapes could be considered as a roadmap to a DS classification9. One may see three main types of spectra from Fig. 6 (a): convex (1), concave (2), and finger-like (3). The first and third types correspond to a large \(b\), i.e., a small contribution of the imaginary quintic term in (1). These spectra relate to Eq. (18) and, thereby, represent a truncated Lorentzian so that a transition between them is defined by the condition of \(\Xi=\Delta\). The transition from (1) to (3) represents a shift from a soliton-like temporal profile to a stretched and flattened one, demonstrating an energy harvesting mechanism due to DS broadening. One has to note that the cutoff frequency remains almost the same in this case. That is the pulse width scales as \(\propto\) Figure 5: The dimensionless chirp \(\Psi(t=0)\) of \((+)\)-branch in dependence on \((C,\ b)\)-parameters on the stability threshold \(\Sigma=0\). \(1/\Xi\). As it will be shown below, such transformation of the DS spectrum demonstrates a transition to the _dissipative soliton resonance_ (DSR). When the contribution of the positive imaginary quintic term in (1) grows (\(b\to 0\)), the spectrum becomes concave (the red dashed curves in Fig. 6)10. In this case, the DS energy (compare black solid and red dashed curves in Fig. 6) decreases. But that results from the chirp degradation for a chosen value of \(C\) (see Fig. 5). For the considered case, the DS energy could be much bigger for a smaller \(C\)-parameter. The energy scaling is provided by the DS stretching but without a profile flattening. Footnote 10: It should be noted that such concave spectra are not the product of the GDD action, which could be described by an additional term \(\propto i\,\partial^{4}/\partial t^{4}\) in (1) [48]. Fig. 7 illustrates the experimentally observed spectral profile evolution during the DS energy scaling in chirped pulse oscillators (CPO) [47, 49]. Spectra in Figs. 7 (a) and (b) were obtained in an oscillator capable of generating both Schrodinger-like and DS by smooth tuning of the cavity GDD from negative to positive values. Spectrum in Fig. 7 (a) was obtained near the CPO threshold, while spectrum (b) was obtained with slightly increased average GDD and pump power. Pulse energy scaling was demonstrated in the CPO cavity with reduced pulse repetition frequency \(f\) (12.3 compared to 69 MHz), when we further increased the positive cavity GDD and pump power to maintain the DS stability. The slight asymmetry of the spectrum (Fig.7 (c)) is associated with an uncompensated third-order GDD. The narrowband spectral features result from the water vapour absorption in the atmosphere [50]. It should also be noted that the adiabatic approximation provides an adequate description of DS spectra even beyond the validity of Proposition 1. Namely, the spectra transform from concave to concave-convex when \(\kappa>\gamma\) and \(\chi\neq 0\), as it was Figure 6: Dimensionless spectra (a) and temporal profiles (b) of DS. Solid black curves (1): \(C=1\), \(b=20\); Dashed red curves (2): \(C=1\), \(b=0.2\); Dotted blue curves (3): \(C=2/3\), \(b=20\). \(\Sigma=0.01\). 
Scale for the black solid and red dashed curves in (a) increases tenfold. described in [37, 51]. Moreover, there are classes of unusual DS solutions for \(b<0\), for instance, spike on a background or parabolic-like.11 Footnote 11: The parabolic-like pulses have \(d\Omega/dt\rightarrow\pm\infty\) on the edges, i.e., such DS is truncated on \(t\). ### DS compressibility and its fidelity Eq. (16) provides us with information about the DS internal phase profile. This profile is inhomogeneous, which troubles its compression. Such compression would allow producing, for instance, high-intensive ultrashort laser pulses for numerous applications. For simplicity, let us assume that \(\chi=0\). Returning to the dimensional values, one may wright12 Footnote 12: Eq. (20) demonstrates that the chirp is proportional to \(\gamma^{2}/\kappa\zeta\), which clarifies Proposition 1. \[e(\omega)=\sqrt{\frac{6\pi\gamma}{\zeta\kappa}}\frac{\mathrm{e}^{\frac{\frac{ \mathrm{d}}{2}\mathrm{i}\gamma^{2}\omega^{2}}{\beta\kappa\zeta(\Xi^{2}+\omega ^{2})(\Delta^{2}-\omega^{2})}}}{\sqrt{\mathrm{i}\left(\Xi^{2}+\omega^{2} \right)}}\mathcal{H}\left(\Delta^{2}-\omega^{2}\right). \tag{20}\] DS would be maximally compressible if its spectral phase \(\varphi(\omega)=\Upsilon\times\omega^{2}\) (\(\Upsilon\) is a spectral chirp). Such a phase could be externally compensated by an appropriate GDD \(\beta=-\Upsilon\) that would lead to temporal "focusing" of DS in agreement with the principle of space-time duality in optics [52]. However, Eq. (20) demonstrates that the spectral phase of DS is not purely quadratic in \(\omega\), that is, the spectral chirp \(\Upsilon\) is frequency dependent. However, the phase distortion is maximally suppressed or "flat", when \(\Xi=\Delta\): \(\Upsilon\propto(\Delta^{4}-\omega^{4})^{-1}\). Such a "flatness" allows a DS compression with minimal fragmentation, or maximal "_fidelity_". Let us return to the dimensionless values in (20) so that the amplitude \(e(\omega)\) is normalized to \(\sqrt{\kappa/\beta}\), \(d=\gamma/\kappa\) and the frequencies are normalized as above. Then, Fig. 7: Experimental CPO spectra (solid lines) obtained during pulse energy scaling and corresponding dispersion curves (dashed lines): (a) bell shape spectrum at the threshold of CPO operation, (b) M-shape spectrum obtained with slightly increased average GDD and pump power values [49], (c) near finger-like spectrum after energy scaling by pump power and pulse repetition frequency in DSR [47]. \(T_{OC}\), \(P_{out}\), \(E_{intr}\), and \(f\) are the laser output mirror transmission, output power, intracavity energy, and DS repetition rate, respectively. we perform the same procedure as before based on the stationary phase method by expanding the phase into the Taylor series and keeping the term \(\propto\omega^{2}\) in the integral \(\int_{-\infty}^{\infty}e(\omega^{\prime})\exp(i\omega^{\prime}t)d\omega^{\prime}\). The corresponding term is \[\Upsilon^{\prime}=\frac{6d\Delta^{\prime 2}\Xi^{\prime 2}+\pi(\Delta^{\prime 2}- \Xi^{\prime 2})}{4\Delta^{\prime 4}\Xi^{\prime 4}}, \tag{21}\] which gives a value of group-velocity dispersion required for the DS compression. ## 3 Master diagram and dissipative soliton resonance Eqs. (17,18) allow finding the DS energy: \(E=\frac{1}{2\pi}\int_{-\infty}^{\infty}s(\omega)d\omega\). This integral can be evaluated numerically in the general case or found in the closed form for \(\chi=0\): \[E=\frac{6\gamma\arctan(\frac{\Delta}{\Xi})}{\zeta\kappa\Xi}. 
\tag{22}\] At this stage, we can introduce the following definition: **Definition 1** (**Master diagram)**.: _The master diagram represents a DS parametric space in (\(C-E\))-coordinates._ The DS parametric space represented by the master diagram is confined by a vacuum instability threshold \(\Sigma=0\) and filled by "isogains" \(\Sigma=const>0\) which extreme points \(\frac{d\hat{C}}{dE}=0\) define the curve dividing (\(\pm\))-DS branches. All DS parameters are implied to be dimensionless (e.g., \(E\) in (22) could be normalized to \(\kappa\sqrt{\zeta/\beta\gamma}\) for the above frequency normalization). The diagram can contain other physically sound curves (e.g., "fidelity curve" \(\Xi=\Delta\)) and regions. Moreover, the "web" of isogains is deformable by finite \(b\), and the disjointed islands (e.g., for \(b\lessapprox 0\)) may coexist. In the latter case, one should be cautious and check the physicality and stability of solutions. The master diagrams for \(b\gg 1\), \(b=\)0.2, and -5 are shown in Figs. 8, 9[37, 47]. This diagram allows formulating the notion of _dissipative soliton resonance_ (DSR) [17]. **Definition 2** (**Dissipative soliton resonance)**.: \(\exists\,C^{*}:\lim_{C\to C^{*}}E=\infty\) _or there exists a set of \(C\)-parameters providing an infinite energy asymptotic13._ Footnote 13: The term of DSR was invented in [17] based on the method of moments. The variational method leads to a softer definition: there is asymptotics \(E\propto C^{-p}\), where \(p=1/2\) for CQGLE with \(\chi=0\)[53]. The variational approximation represents DSR as a range of master diagram. One can see from Fig. 8 that (\(+\))-branch of DS is energy-scalable in the sense of Definition 2. The bottom border of the corresponding region is characterized by: \[\begin{array}{c}E=\frac{6\sqrt{2\gamma\beta}}{\kappa\sqrt{\eta}}\frac{ \arctan(\frac{\sqrt{5}\,4\sqrt{5}}{\sqrt{6-13\sqrt{5}}})}{\sqrt{6-13\sqrt{5}}},\\ C=2-4\sqrt{5},\\ P_{0}=\frac{3\sqrt{5}}{2\zeta},\\ \Delta^{2}=\frac{3\gamma\sqrt{5}}{2\beta\zeta},\\ \Xi^{2}=\frac{\gamma}{2\zeta\beta}(6-13\sqrt{5})\end{array} \tag{23}\] within \(\mathbf{\Sigma\in[0,36/169]}\). Thus, the DSRs \(E\rightarrow\infty\) are located between \(\mathbf{C=2/3}\), \(\mathbf{2/13}\)14. On the vacuum instability border \(\Sigma=0\), one has: Footnote 14: See Proposition 2, where we limited ourselves by the interval approximately corresponding to DSR. \[\begin{array}{c}P_{0}\rightarrow\zeta^{-1},\\ \Delta^{2}\rightarrow\gamma/\beta\zeta,\\ \Xi\to 0.\end{array} \tag{24}\] and \(\mathbf{C}=\mathbf{2}/\mathbf{3}\). Eqs. (23,24) demonstrate important signatures of transition to DSR: cut-off frequency \(\Delta\) (DS spectrum half-width) tends to a constant; "chemical potential" \(\Xi\) tends to zero, and a peak power becomes above-confined. Owing to the latter, the DS width scales with energy. The DS width can be estimated from Eq. (9) by its integration that gives for \(\chi=0\) the implicit DS temporal profile: \[t=\frac{3\gamma^{2}\left(\frac{\arctan\left(\frac{\Omega(t)}{\Xi}\right)\Delta }{\Xi}+\text{arctanh}\Big{(}\frac{\Omega(t)}{\Delta}\Big{)}\right)^{2}}{\beta \zeta\kappa\Delta\left(\Delta^{2}+\Xi^{2}\right)}, \tag{25}\] where \(\Omega(t)\) and \(P(t)\) are connected through Eq. (6). Then, the DS width can be expressed as \(T=\frac{3\gamma^{2}}{\beta\zeta\kappa\Delta(\Delta^{2}+\Xi^{2})}\). These tendencies are illustrated in Fig. 8. 
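The divergence underlying Definition 2 can be made explicit with a minimal sketch of Eq. (22) (arbitrary illustrative parameter values, \(\chi=0\)): at fixed cut-off \(\Delta\), the energy grows without bound as \(\Xi\to 0\).

```python
import numpy as np

# Minimal sketch of Eq. (22) for chi = 0 with arbitrary illustrative parameters:
# at fixed cut-off Delta, the DS energy diverges as the "chemical potential"
# Xi -> 0, which is the energy-scalability signature behind Definition 2.
def ds_energy(Delta, Xi, gamma=1.0, zeta=1.0, kappa=1.0):
    return 6.0 * gamma * np.arctan(Delta / Xi) / (zeta * kappa * Xi)

Delta = 1.0
for Xi in [1.0, 0.1, 0.01, 0.001]:
    print(f"Xi = {Xi:7.3f}  ->  E = {ds_energy(Delta, Xi):12.2f}")
```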
One may see that the transition to DSR, with the subsequent change of a DS shortening by its broadening and simultaneous "freezing" of spectral width growth, is accompanied by a crossing of the maximum fidelity curve. The latter means a visible growth of spectral peak, i.e., transition to a "finger-like" spectrum. All these manifestations are experimentally verifiable. Fig. 10 shows the experimental master diagram obtained in the experiments with Cr:ZnS chirped-pulse oscillator [47]. ## 4 DS thermodynamics Perhaps, the most exciting advance of the approach considered above is that its main results are formulated in the spectral domain ("momentum space"). That allows applying the notions of kinetic theory straightforwardly to DS so that the latter could be understood in terms of an incoherent/semicoherent condensate of incoherent nonlinear waves [54]. Let us limit ourselves to the case of \(\chi=0\). Eq. (18) demonstrates the well-known Rayleigh-Jeans equilibrium distribution [55] with a negative "chemical potential" \(-\mu=\Xi^{2}\) and a "temperature" \(\Theta=6\pi\gamma/\kappa\zeta\). This spectrum and its counterpart from the turbulence theory [56] are shown by red curves in Fig. 11. The Lorentzian turbulence spectrum results from the \(k\to 0\) condensation of waves with the Langmuir dispersion relation \(k=\omega^{2}\), as shown by a graded shading in Fig. 11 (b). The DS wave number is \(q=\gamma P_{0}=\beta\Delta^{2}\) from Eq. (8), which is analog to the soliton area theorem \(q=\gamma P_{0}/2=\beta T^{2}/2\)[57]. The equality of the DS wave number Figure 9: Master diagram for \(\chi=\)-5 (1), 0.2 (2). Black solid curves are the vacuum instability thresholds, blue solid curves divide (\(\pm\))-branches of DS solutions, and green dashed lines correspond to isogains \(\Sigma=0.01\). One can see as a region of DSR squeezes and shifts to the smaller \(C\) for \(b=0.2\) in parallel with the corresponding chirp transformation (Fig. 5). \(q=\gamma P_{0}\) with that of linear waves \(k(\omega)=\beta\omega^{2}\) (compare with the Langmuir dispersion curve in Fig. 9 (b)) defines a DS spectral (half-)width \(\Delta.\) In the case of turbulence, a wave number cut-off is provided by dissipation, but dissipation is also a vital factor for DS. Roughly from Eq. (1), the spectral dissipation \(\alpha\Delta^{2}\) has to be compensated by a nonlinear gain \(\kappa P_{0}\) (\(\Sigma=0\) on the vacuum instability border)15. Hence, a combination of the dispersion/dissipation balances leads to \(\alpha\gamma/\beta\kappa\simeq 1,\) or _soliton condition_ ("potential condition") implying a Gibbs-like steady-state probability distribution in statistical mechanics [58] (Proposition 2)16. Footnote 15: Spectral filtering causes a cut-off on the pulse edges, where the spectral components with maximal frequency deviation are located [26]. Simultaneously, a nonlinear gain is roughly defined by a DS peak power, which is usually concentrated towards a DS center \(t=0\) or is constant for a flat-top pulse in the DSR regime. Such a gain provides a spectral loss compensation on the DS edges due to energy redistribution inside a chirped DS [2]. Footnote 16: It should be noted, that \(\beta-\)sign is opposite to that in the nonlinear Schrödinger equation, therefore this condition is soliton-_like._ These observations on the DS properties testify about an immediate relation between DS and a family of incoherent/semicoherent solitons [4, 5, 59]. 
This means that the DS thermodynamics has to be based not only on considering the DS interaction with an external thermal basin but on a view of DS as a microcanonical statistical ensemble of independent "_quasi-particles_" confined by a collective potential (18) [4]. One may indirectly test this proposition through a numerical experiment. For this goal, we have to take into account the energy dependence of \(\sigma\)-parameter in Eq. (1) assuming that it describes a saturable net-loss in a laser [37]: \(\sigma\approx\delta(E/E^{*}-1),\) where \(\delta\equiv dE/dE^{*}|_{E=E^{*}},\) and \(E^{*}\) is the energy of continuum wave generation at \(\sigma=0\) Figure 10: Experimental master diagram [47] demonstrating a transit to DSR regime via asymptotically constant DS spectral width \(\Delta\) and its temporal width \(T\) scaling. A spectrum becomes finger-like. (now, its normalized value replaces \(E^{\prime}\) in Fig. 8). Also, we include the thermal basin, which is described as an additive complex noise term \(\Gamma\) in Eq. (1). It is assumed to be Gaussian and uncorrelated: \[\langle\Gamma(z_{1},t_{1})\Gamma^{*}(z_{2},t_{2})\rangle=\Theta_{b} \delta(z_{1}-z_{2})\delta(t_{1}-t_{2}),\] \[\langle\Gamma(z_{1},t_{1})\Gamma(z_{2},t_{2})\rangle=0, \tag{26}\] where \(\Theta_{b}\) is noise's spectral power (temperature). Let's "wander" inside a DS master diagram in searching for transit to turbulence. The starting point \((a)\) (Fig. 8) corresponds to a DSR region with a finger-like spectrum and table-top temporal profile (Fig. 12 (a)). The shift to a region of \((-)\)-DS branch (point \((b)\) (Fig. 8)) excites (slightly decouples) an "internal modes" or quasi-particle complexes that manifests itself as a distortion of both spectral and temporal profiles (Fig. 12 (b)). As a rule, such distortions are asymmetrical but preserve the DS spectral-time integrity. A "propagation" of inro-DS excitation is illustrated by the inset in Fig. 12 (b), where a narrow-band Lorentzian absorption line at the DS spectrum center excites a long-range asymmetric perturbation "wave" confined in a collective potential between the perturbation and the DS spectral edge. The energy growth leads to turbulence (point (c) in Fig. 8 and Fig. 13 (a)), which is characterized by a Rayleigh-Jeans spectrum (Fig. 13 (a); inset) and a localization in both spectral and temporal domains. The Wigner function makes it evident that there are two correlation times: a correlation time of wave in equilibrium defining a confinement potential (a "homogeneity scale") \(\Lambda\propto 1/\sqrt{\Xi}\) ("tails" of the Wigner functions and the DS profile in Fig. 13 (a)), and an "internal" correlation time (an "inhomogeneity scale") \(\ell\propto 1/\sqrt{\Delta}\) (a thick "snake" in the central part of the Wigner function Figure 11: (a): DS spectrum (18) (red Lorentzian curve) and a wave-number of linear waves (black parabolic curve), which resonance with DS is denoted by red points. (b): The turbulence spectrum in the wave-number space (red curve) and the Langmuir dispersion curve (black). See main text for the comments. in Fig. 13 (a)). The easily-visible "trajectories" in the Wigner function center can be interpreted as a visualization of a DS energy in/out-flow induced by Kolmogorov's turbulence cascade [56]. The existence of internal coherence scale \(\ell\) (inset in Fig. 13 (b)) can stimulate a spontaneous creation of the coherent DSs from a localized incoherent DS (Fig. 13 (b)). 
Thus, the treatment of DS as a "quasi-particles ensemble" could be considered as reasonable when \(\ell\ll\Lambda\) inside a DSR region [54]. Thus, we can base on the following **Proposition 4**.: _In a DSR region with \(\Xi<\Delta\), DS can be considered as a microcanonical ensemble of quasi-particles confined by a collective potential,_ Figure 12: DS spectra (bottom axis), temporal profiles (right axis), and the corresponding Wigner function [60] (center) for \(E^{*}=\)18 and \(C=\)0.24 (a) and 0.18 (b). \(\delta=\)0.05, \(\Theta_{b}=10^{-10}\gamma^{-1}\), \(\chi=\)0. Inset - DS spectrum distorted by an absorption line with the dimensionless amplitude 0.0025 and the width of 1GHz [50]. so that the analytical technique formulated above in the spectral domain allows the formulation of the essential DS thermodynamic characteristics [45]. Let us assume \(\chi=0\). Then from Eq. (18): **Definition 3**.: _DS temperature \(\Theta\equiv 6\pi\gamma/\zeta\kappa\)._ The DS "temperature"17 has a sound physical sense. (i) It rises with \(\gamma\), i.e., with a chirp. Physically, it means decreasing phase inhomogeneity or a growing tendency to the quasi-particles decoupling. (ii) A temperature increases with the decrease in Figure 13: (a) DS spectra (bottom axis), temporal profiles (right axis), and the corresponding Wigner function (center) for \(E^{*}=\)54 and \(C=\)0.19 (point (c) in Fig. 8). Other parameters as in Fig. 12. (b) Multipulsing from turbulence. Laser cavity roundtrip equals \(z\) in (1). Left inset in (a): logarithm of spectral power, right inset in (b): a field autocorrelation function. the \(\kappa\zeta\). That is, when saturation of self-amplitude modulation vanishes (Eq. (1)), the power becomes lesser confined from the top. As a result, inhomogeneity grows. In both cases, DS warms up. **Definition 4**.: _Chemical potential \(-\mu=\Xi^{2}\)._ From Eq. (24), the chemical potential tends to zero for DSR that corresponds to \(E\to\infty\) by analogy with the Bose-Einstein condensation. The field concentrates at \(\omega\to 0\) (\(s\propto 1/\omega^{2}\) in an equilibrium) so that a DS ("condensate") tends to absorb all available volume [61]. **Definition 5**.: _Entropy \(S\equiv\int_{-\Delta}^{\Delta}\ln\left[s(\omega)\right]d\omega=2\Delta\left( \ln\left(\frac{\Theta}{\Delta^{2}+\Xi^{2}}\right)+2\right)-4\Xi\tan^{-1}\left( \frac{\Delta}{\Xi}\right)\)._ Hence and from Definition 4: \(\frac{\partial S}{\partial\mu}=0\)[45]. The dimensionless entropy is shown for both DS branches in Fig. 14. The figure demonstrates lesser entropy for the \((-)\)-branch. One may comment on that fact in the following way. The \(P_{0}\)-solution for a \((-)\)-branch has a finite limit for \(\zeta\to 0\)[36] that is it is "connectable" with a classical soliton of the nonlinear Schrodinger equation in the sense of [19]. In other words, this branch is in a "ground state" and has no excited internal degrees of freedom, so its entropy is minimal. The \((+)\)-branch is energy-scalable, i.e., it belongs to a DSR range. It has excitable internal degrees of freedom so that its entropy grows with an approach to the vacuum instability threshold, where it is maximal and grows with a temperature along the extreme DSR level \(C=2/3\), \(\Sigma=0\) as: \[S_{max}=\sqrt{\frac{7}{2}}\left(\ln\left(\frac{12\Theta}{13}\right)+2\right)- \sqrt{\frac{10}{3}}\tan^{-1}\left(\sqrt{\frac{21}{5}}\right), \tag{27}\] so that \(\frac{\partial S_{max}}{\partial\Theta}=\frac{\sqrt{7/2}}{\Theta}\neq 0\). 
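As a small self-consistency check of Definition 5, the sketch below (arbitrary illustrative parameters, \(\chi=0\)) compares the numerically integrated spectral entropy of the truncated Rayleigh-Jeans profile with the closed form quoted above.

```python
import numpy as np

# Minimal self-check of Definition 5 (chi = 0, arbitrary illustrative parameters):
# integrating ln[s(omega)] with s(omega) = Theta/(Xi^2 + omega^2) over the
# truncated band [-Delta, Delta] reproduces the closed form quoted above.
Theta, Delta, Xi = 10.0, 1.0, 0.3

omega = np.linspace(-Delta, Delta, 200001)
f = np.log(Theta / (Xi**2 + omega**2))
S_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))   # trapezoidal rule
S_closed = 2*Delta*(np.log(Theta/(Delta**2 + Xi**2)) + 2) - 4*Xi*np.arctan(Delta/Xi)

print(S_numeric, S_closed)   # the two values agree to integration accuracy
```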
In particular, this "high-entropy" branch with enriched internal degrees of freedom has a larger "informational capacity" that could make DS a prospective tool for information transmission [62]. Other thermodynamic values could be defined as [45]: **Definition 6**.: _Enthalpy (internal energy) \(U\equiv\int_{-\Delta}^{\Delta}\frac{\Theta\,\omega^{2}}{\Xi^{2}+\omega^{2}}\,d\omega\),_ so that \(\frac{\partial S}{\partial U}=\frac{2}{\Theta}\). **Definition 7**.: _Energy ("particle number" or energy contained in condensate) \(\mathcal{E}\equiv\int_{-\Delta}^{\Delta}\frac{\Theta}{\Xi^{2}+\omega^{2}}\,d\omega\),_ so that \(\frac{\partial S}{\partial\mathcal{E}}=\frac{-4\mu}{\Theta}\). **Definition 8**.: _Gibbs free energy \(\mathcal{F}\equiv U\quad-\quad\Theta\,S=-2\Theta\left(\Delta\ln\left(\frac{ \Theta}{\Delta^{2}+\Xi^{2}}\right)-\Xi\tan^{-1}\left(\frac{\Delta}{\Xi} \right)+\Delta\right),\)_ so that the minimal free energy along the extreme DSR level \(C=2/3\), \(\Sigma=0\) is: \[\mathcal{F}_{min}=\Theta\left(\sqrt{\frac{5}{6}}\tan^{-1}\left(\sqrt{\frac{21 }{5}}\right)-\sqrt{\frac{7}{2}}\left(\log\left(\frac{12\Theta}{13}\right)+1 \right)\right). \tag{28}\] The free energy is plotted in Fig. 15 for both DS branches. It is negative, i.e., DS is a thermodynamic preferable state within a range confined by the master diagram. It could be considered as an equilibrium state forming spontaneously from an incoherent basin. The energy-scalable (DSR) branch has the lowest free energy in the vicinity of the vacuum instability border and decreases with \(\Theta\) (28). Such minimization of free energy agrees with the analogous feature of BEC. ## 5 Conclusion We have presented the adiabatic theory of a dissipative soliton (DS). It is based on the assumption that DS is strongly chirped, which requires domination of the nondissipative factors, such as Kerr nonlinearity and GDD, over the dissipative ones, such as self-amplitude modulation and spectral dissipation. The complex cubic-quintic nonlinear Ginzburg-Landau equation (CQGLE) could describe all these factors. Under spatio-temporal duality, CQGLE can represent a broad range of nonlinear dynamical phenomena, particularly optical DS and weakly dissipative BEC. As CQGLE is not integrable in the general form, the approximated approaches to its study are highly desirable. The adiabatic approximation restrains a range of CQGLE parameters but keeps them remarkably realistic. Meanwhile, the obtained solutions are general within this range in the sense that they do not fix the relations between the equation parameters. This class of solutions belongs to the single-parametric family [39] that associate them with "true" solitons. One of the advantages is that the obtained solutions are formulated in a spectral domain that allows for tracing the close analogies with the kinetic approaches to an interpretation of DS characteristics. The analytical expressions are straightforward in the case of vanishing imaginary quintic term. The DS spectrum has the shape of a truncated Lorentzian function so that all spectra can be divided into flat-top and finger-like classes. The division between them is defined by the equality of the truncation frequency and the Lorentzian width. These values play the role of two correlation lengths representing the internal DS phase inhomogeneity so that their equality is a markup of the maximal DS fidelity in the sense of its compressibility and the transition to the energy-scalable regime. 
The latter corresponds to the DSR region, where DS is asymptotically scalable. The model provides simple analytical expressions corresponding to the DSR conditions. Advantageously, the concept of DSR is embedded organically into a representation of the DS parametric space in the form of two- (or three for a nonzero imaginary quintic term) dimensional master diagram, which connects a dimensionless DS energy and a parameter relating spectral and nonlinear dissipation to GDD and phase nonlinearity. The confined region of the last parameter corresponding to DSR has a simple analytical expression. The master diagram has a physically sound structure, which includes the stability threshold against vacuum instability, the region of DSR, a curve of maximum fidelity, etc. Moreover, the signatures characterizing a transition to the DSR regime are explicitly visible in the experiment in the form of a transition to a constant spectral width, the appearance of a Lorentzian peak in the spectrum, and a change of the DS squeezing to its broadening. All these phenomena have close analogous to BEC. The Rayleigh-Jeans spectral shape of DS and two independent correlation scales that diverge with the DS energy scaling suggest that a strongly chirped DS is akin to an incoherent (or partially coherent) soliton. The latter can be treated as an ensemble of interacting "quasi-particles" confined by a collective potential [4]. Indeed, the analysis demonstrates that DS has a nontrivial internal structure so that such "particles" or their conglomerates can be excited, which perturbs the DS spectral and temporal profiles but preserves its total integrity. In some cases, this leads to DS turbulence. The internal kinetic of DS allows applying a thermodynamic language so that DS can be characterized by temperature, chemical potential, entropy, and free energy. The adiabatic theory expresses these values through the DS and CQNGLE parameters and demonstrates the thermodynamic differences between two types of DS "populating" the master diagram. Also, the thermodynamic viewpoint connects a limit of the DS energy scalability with the vanishing of chemical potential and the internal entropy growth. We believe that the approaches presented in this work will be helpful in the different areas, including photonics and BEC. In particular, an explicit definition of the DS energy-scalability limit can be expressed thermodynamically. The closely connected and unexplored problem is the analysis of the DS-basin interaction, which is essential to understand the DS self-emerging [63]. Also, including the higher-order derivative terms in CQGLE describing, in particular, higher-order GDD, is of interest from the viewpoint of the DS chaotization and the distortion of its internal structure. ## Declarations Ethics approval and consent to participate.Not applicable. Consent for publication.All authors (VLK, AR, ES, ITS) consent to publication of this Work. Availability of data and materials.Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Competing interests.The authors (VLK, AR, ES, ITS) declare no conflicts of interest. Funding.The work is supported by the Norwegian Research Council projects #303347 (UNLOCK), #326503 (MIR), and by ATLA Lasers AS. Authors' contributions.The authors (VLK, AR, ES, ITS) contributed equally to this Work. 
Acknowledgements. The work of VLK, AR and ITS was supported by NFR projects #303347 (UNLOCK), #326503 (MIR), and by ATLA Lasers AS (ES).

## Appendix A Factorization

For \(\chi=0\), the equation for a spectral deviation is:

\[\frac{d}{dt}\Omega(t)=\frac{\left(\frac{\kappa\beta\left(\Omega(t)^{2}-\Delta^{2}\right)\left(-\gamma-\zeta\beta\left(\Omega(t)^{2}-\Delta^{2}\right)\right)}{\gamma^{2}}-\sigma-\alpha\Omega(t)^{2}\right)\left(\Omega(t)^{2}-\Delta^{2}\right)}{\beta\left(3\Omega(t)^{2}-\Delta^{2}\right)}.\] (A1)

Let us write the numerator of (A1) as

\[\frac{\kappa\beta\big{(}\Omega(t)^{2}-\Delta^{2}\big{)}\big{(}-\gamma-\zeta\beta\big{(}\Omega(t)^{2}-\Delta^{2}\big{)}\big{)}-\sigma-\alpha\Omega(t)^{2}}{\beta}=\left(3\Omega(t)^{2}-\Delta^{2}\right)\epsilon\left(\Xi^{2}+\Omega(t)^{2}\right)\] (A2)

Collecting the coefficients before the powers of \(\Omega(t)\) gives

\[\left(-\frac{\kappa\beta\zeta}{\gamma^{2}}-3\epsilon\right)\Omega(t)^{4}+\left(\frac{\frac{\kappa\,\beta^{2}\Delta^{2}\zeta+\kappa\beta\big{(}-\gamma+\zeta\beta\,\Delta^{2}\big{)}}{\gamma^{2}}-\alpha}{\beta}+\Delta^{2}\epsilon-3\epsilon\Xi^{2}\right)\Omega(t)^{2}-\]
\[-\frac{\frac{\kappa\beta\,\Delta^{2}\left(-\gamma+\zeta\beta\,\Delta^{2}\right)}{\gamma^{2}}+\sigma}{\beta}+\Delta^{2}\epsilon\Xi^{2}=0.\] (A3)

Equating the coefficients to zero and taking into account \(\Delta^{2}=\gamma P_{0}/\beta\) results in:

\[\epsilon=-\frac{1}{3}\frac{\kappa\beta\zeta}{\gamma^{2}},\] (A4)

\[\beta\Xi^{2}=-\frac{5}{3}P_{0}\gamma+\frac{\gamma}{\zeta}+\frac{\gamma^{2}\alpha}{\kappa\beta\zeta},\] (A5)

\[P_{0}=\frac{3}{4}\frac{1-\frac{1}{2}c\pm\sqrt{\left(1-\frac{1}{2}c\right)^{2}-\frac{4\zeta\sigma}{\kappa}}}{\zeta}.\] (A6)

This decomposition allows excluding a singularity from the denominator in Eq. (A1) and, thereby, avoiding the nonphysical solutions. Such a procedure would also be highly desirable for (9).

## Appendix B Numerical calculation of the DS parameters

The following Matlab code calculates the DS parameters of the complex cubic-quintic nonlinear Ginzburg-Landau equation.

% Cubic-quintic CGLE
% x is the c parameter
% y is the soliton energy
% z is the spectral half-width
% Vladimir Kalashnikov
% [email protected]

clear
Nt = 1000;
chi = 5;    % control parameter chi
Sigma = 0;  % control parameter Sigma
for k = 2:1000
  c = 2 - 2*(k-1)/1000;
  % Spectral half-width for the positive branch
  eq8 = sqrt((0.2e1*(c/chi+3+4/chi)*...
    (c/0.2e1+0.3e1/0.2e1*chi+...
    0.2e1+sqrt(((c-2)^2-16*Sigma*...
    (c/chi+1))))/(c/chi+1)-...
    (3*c)-(9*chi)-(32/chi*Sigma)-0.12e2)/...
    (c/chi+1)*c)/0.4e1;
  % Spectral half-width for the negative branch
  eq9 = sqrt((0.2e1*(c/chi+3+4/chi)*...
    (c/0.2e1+0.3e1/0.2e1*chi+...
    0.2e1-sqrt(((c-2)^2-...
    16*Sigma*(c/chi+1))))/(c/chi+1)-...
    (3*c)-(9*chi)-(32/chi*Sigma)-...
    0.12e2)/(c/chi+1)*c)/0.4e1;
  % Curve where the branches merge
  eq10 = sqrt((0.2e1*(c/chi+3+4/chi)*...
    (c/0.2e1+0.3e1/0.2e1*chi+...
    0.2e1-0*sqrt(((c-2)^2-16*Sigma*...
    (c/chi+1))))/(c/chi+1)-...
    (3*c)-(9*chi)-(32/chi*Sigma)-0.12e2)/...
    (c/chi+1)*c)/0.4e1;
  Delta = eq8;
  if (imag(Delta) == 0)
    z(k) = Delta;
    domega = 2*Delta/Nt;
    omega = [-Delta:domega:Delta];
    % spectral profile sampled on the frequency grid; its integral gives the energy
    ar = (sqrt(((c*chi+4*Delta^2-...
      4*omega.^2)/c/chi))-0.1e1).*...
      ((sqrt((((c*chi+4*Delta^2-...
      4*omega.^2)/c/chi))-0.1e1)*c*...
      chi+(8*omega.^2)-(4*Delta^2))/c.*...
      (((c*chi + 4*Delta^2 -...
      4*omega.^2)/c/chi).^(-0.1e1/0.2e1))./...
      ((Sigma*c + c*omega.^2+...
      c*chi + c*chi^2+chi*Delta^2 -...
      chi*omega.^2).*(sqrt(((c*chi + 4*...
      Delta^2 - 4*omega.^2)/c/chi))-0.1e1)-...
      (2*(Delta^2 - omega.^2)*...
      (chi + 1)))/0.2e1;
    % clip nonphysical negative values of the spectral profile
    for kk = 1:Nt
      if (ar(kk) < 0)
        ar(kk) = 0;
      end
    end
    arg = ar(2:Nt);
    E = trapz(arg)*domega;  % soliton energy by the trapezoidal rule
    x(k) = c;
    if (imag(E) == 0)
      y(k) = E;
    else
      y(k) = 0;
    end
  end
end
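The thermodynamic quantities of Definitions 6-8 can be checked by the same kind of quadrature. The following minimal Matlab sketch evaluates \(U\) and \(\mathcal{E}\) numerically and compares them with the elementary antiderivatives of the integrands; the parameter values \(\Theta=1\), \(\Delta=2\), \(\Xi=0.5\) are illustrative choices only, not values taken from the text.

```matlab
% Quadrature check of Definitions 6-7 for illustrative (assumed) parameter values
Theta = 1; Delta = 2; Xi = 0.5;
omega = linspace(-Delta, Delta, 2001);
U_num = trapz(omega, Theta*omega.^2./(Xi^2 + omega.^2));  % Definition 6 (enthalpy)
E_num = trapz(omega, Theta./(Xi^2 + omega.^2));           % Definition 7 (energy)
U_ref = Theta*(2*Delta - 2*Xi*atan(Delta/Xi));            % antiderivative evaluated at +-Delta
E_ref = (2*Theta/Xi)*atan(Delta/Xi);                      % antiderivative evaluated at +-Delta
fprintf('U: %.6f (quadrature) vs %.6f (closed form)\n', U_num, U_ref);
fprintf('E: %.6f (quadrature) vs %.6f (closed form)\n', E_num, E_ref);
```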
2309.05963
An Alternating Direction Implicit Method for Mean Curvature Flows
This paper is concerned with the mean curvature flow, which describes the dynamics of a hypersurface whose normal velocity is determined by local mean curvature. We present a Cartesian grid-based method for solving mean curvature flows in two and three space dimensions. The present method embeds a closed hypersurface into a fixed Cartesian grid and decomposes it into multiple overlapping subsets. For each subset, extra tangential velocities are introduced such that marker points on the hypersurface only move along grid lines. By utilizing an alternating direction implicit (ADI)-type time integration method, the subsets are evolved alternately by solving scalar parabolic partial differential equations on planar domains. The method removes the stiffness using a semi-implicit scheme and has no high-order stability constraint on time step size. Numerical examples in two and three space dimensions are presented to validate the proposed method.
Han Zhou, Shuwang Li, Wenjun Ying
2023-09-12T05:11:54Z
http://arxiv.org/abs/2309.05963v1
# An Alternating Direction Implicit Method for Mean Curvature Flows

###### Abstract

This paper is concerned with the mean curvature flow, which describes the dynamics of a hypersurface whose normal velocity is determined by local mean curvature. We present a Cartesian grid-based method for solving mean curvature flows in two and three space dimensions. The present method embeds a closed hypersurface into a fixed Cartesian grid and decomposes it into multiple overlapping subsets. For each subset, extra tangential velocities are introduced such that marker points on the hypersurface only move along grid lines. By utilizing an alternating direction implicit (ADI)-type time integration method, the subsets are evolved alternately by solving scalar parabolic partial differential equations on planar domains. The method removes the stiffness using a semi-implicit scheme and has no high-order stability constraint on time step size. Numerical examples in two and three space dimensions are presented to validate the proposed method.

Keywords: Mean curvature flow · Cartesian grid · Overlapping surface decomposition · Geometric flows · ADI method

MSC: 35K93 53E10 65N06 65M55

## 1 Introduction

Geometric evolution of interfaces has drawn much attention in the last decades due to its wide applications in mathematics [10], materials science [22], biology [23] and, more recently, image processing [19; 33; 20]. In a geometric evolution problem, the dynamics of a hypersurface are described by its geometry. Typically, the normal velocity of the hypersurface is given by a law defined by geometry. In this paper, we are concerned with a representative case of geometric evolution problems, the mean curvature flow, in which the hypersurface evolves such that its normal velocity equals its negative mean curvature. Mean curvature flow was originally proposed by Mullins to model ideal grain boundary motion [22]. Thereafter, it was also used to model various other physical phenomena [23; 1]. Numerical methods for mean curvature flows can be classified into three categories, based on their different representations of hypersurfaces: parametric approaches, the level set method [24; 26; 25], and the phase field method [8; 9; 27]. Representative parametric approaches include the parametric finite element method [3; 7; 5; 4], the graph approach [12; 13] and the front tracking method [18; 31]. For a comprehensive survey on numerical methods for mean curvature flows, the interested reader is referred to the review article by Deckelnick et al. [13]. For general moving interface problems, although the level set method and the phase field method have their advantages in handling topological changes and ease of implementation, parametric approaches provide surprisingly good results, such as accurate computation of curvature and conservation of mass, even with a coarse grid, and are computationally very cheap as well [6; 30]. Despite the benefits, numerical computation of mean curvature flows with parametric approaches also encounters several difficulties, including the deterioration of mesh quality during the computation and the numerical stiffness induced by the mean curvature term. Due to the pure normal motion of the surface, adjacent mesh nodes may become closer and closer, making the computation highly unstable. The problem is even more severe for the mean curvature flow due to its "curve shortening" property by Mullins [22].
In addition, the evolution equation of mean curvature flows has second-order spatial derivatives in the mean curvature term, which induces numerical stiffness such that a small time step is required for explicit time integration schemes [17]. A naive discretization with implicit time integration to remove stiffness leads to a nonlinear system, for which finding a numerical solution is time-consuming, even with advanced iterative solvers. For two space dimensional curves, these difficulties can be very well handled by the small scale decomposition method, initially proposed by Hou et al. [17]. The idea was also extended to three space dimensional cases for some special surfaces [16; 2]. However, for general closed surfaces in three space dimensions, it is still unclear how to apply small scale decomposition for mean curvature flows. This work proposes a new numerical method for mean curvature flows. The method decomposes a moving hypersurface into multiple overlapping subsets such that each subset can be viewed as a Monge patch for which the graph approach [12; 13] is applicable. The method is based on an overlapping decomposition method for hypersurfaces, initially proposed by Wilson for computing integrals on implicitly defined curves and surfaces [32]. A few years later, the method was extended by the second author to incorporate a kernel-free boundary integral method for solving elliptic partial differential equations on irregular domains [34]. The decomposition strategy has many advantages in simplicity and efficiency. By representing each subset with its intersection points with grid lines, it is natural to keep marker points quasi-equidistant and maintain mesh quality. By reformulating the evolution equation, which is a nonlinear system, into a sequence of scalar PDEs on overlapping subsets, one can evolve the subsets alternately in the spirit of the alternating direction implicit (ADI) method [28; 14; 15]. The resulting algorithm is efficient and has no high-order stability constraint. The remainder of the paper is organized as follows. In Section 2, we describe the governing equation of mean curvature flow and its hybrid formulation based on an overlapping surface decomposition method. The numerical methods for solving mean curvature flows are described in Section 3. In Section 4, the numerical algorithm of the proposed method is briefly summarized. Multiple numerical examples are presented to validate the present method in Section 5. In the final Section 6, we briefly discuss the present method and some further work. ## 2 Mathematical formulation Let \(\Gamma(t)\subset\mathbb{R}^{d},d=2,3\) be a closed moving hypersurface. Consider the mean curvature flow problem that, for any point \(\mathbf{x}\) on \(\Gamma\), the evolution is given by, \[\mathbf{x}_{t}=V\mathbf{n},\quad V=-\kappa,\quad\mathbf{x}\in\Gamma, \tag{1}\] where \(\kappa\) is the (mean) curvature and \(\mathbf{n}\) the unit outward normal. Here, a circle/sphere has positive curvature. 
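As a simple illustration of (1), consider a circle of radius \(r(t)\) in two space dimensions. Its curvature is \(\kappa=1/r\), so (1) reduces to the ordinary differential equation \(\dot{r}=-1/r\), whose solution is

\[r(t)=\sqrt{r(0)^{2}-2t},\]

i.e., the circle shrinks to a point at time \(t=r(0)^{2}/2\). This explicit solution is used later as a benchmark in the numerical experiments.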
By applying the transport theorem of evolving hypersurfaces, it can be shown that the mean curvature flow has length-decreasing and area-decreasing properties in 2D and 3D, respectively, \[\frac{d}{dt}|\Gamma(t)| =\frac{d}{dt}\int_{\Gamma(t)}1\,ds=-\int_{\Gamma(t)}\kappa^{2}\, ds<0, \qquad\qquad\text{if }d=2, \tag{2}\] \[\frac{d}{dt}|\Gamma(t)| =\frac{d}{dt}\int_{\Gamma(t)}1\,dA=-\int_{\Gamma(t)}\kappa^{2}\, dA<0, \qquad\qquad\text{if }d=3, \tag{3}\] where \(|\Gamma(t)|\) is the length of \(\Gamma(t)\) in 2D and the area of \(\Gamma(t)\) in 3D. Specially, for the case \(d=2\), it holds that \[\frac{d}{dt}A(t)=\int_{\Gamma(t)}\frac{d\mathbf{x}}{dt}\cdot\mathbf{n}\,ds=-\int_{ \Gamma(t)}\kappa\,ds=-2\pi, \tag{4}\] where \(A(t)\) is the area enclosed by \(\Gamma(t)\). It suggests that the enclosed area of a 2D mean curvature flow always decreases at a constant rate [11]. The result no longer holds for the enclosed volume of 3D surfaces since the surface integral of mean curvature varies from case to case. If \(\Gamma\) is closed, and there is no boundary condition, then the solution of mean curvature flow is determined by its initial configuration. For some certain initial configurations, solutions of mean curvature flows may develop singularities, such as pinch-off and topological changes, in finite time before shrinking to a point. In this paper, we seek well-defined solutions of mean curvature flows, \(i.e.\), solutions before singularities happen. In order to tackle the mean curvature flow problem, the evolution equation (1) is divided into multiple subproblems with an overlapping surface decomposition of \(\Gamma\). For \(r=1,\cdots,d\), let \(\mathbf{e}_{r}\) be the \(r^{th}\) unit vector in \(\mathbb{R}^{d}\) and \(\alpha\in(\cos^{-1}(1/\sqrt{d}),\pi/2)\) be a fixed angle. The set \[\Gamma_{r}=\{\boldsymbol{x}\in\Gamma:|\mathbf{n}\cdot\mathbf{e}_{r}|( \boldsymbol{x})>\cos\alpha\}, \tag{5}\] is an open subset of the surface \(\Gamma\) for each \(r=1,2,\cdots,d\). The union of the sets \(\{\Gamma_{r}\}_{r=1}^{d}\) forms an overlapping surface decomposition of \(\Gamma\). Then the surface \(\Gamma\) is represented by the overlapping subsets \(\Gamma_{r}\) with a partition of unity. ### Divided problems Note that the evolution of a hypersurface is only determined by its normal velocity \(V\). One is allowed to add arbitrary tangential velocity to the evolution of \(\Gamma\) without altering its shape. The tangent velocity only changes in the frame for the parametrization of the surface. For arbitrary tangential velocities \(T\), \(T_{1}\) and \(T_{2}\), the evolution governed by (1) is equivalent to \[\boldsymbol{x}_{t}=-V\boldsymbol{n}+T\boldsymbol{\tau},\quad\boldsymbol{x}\in \Gamma, \tag{6}\] in two space dimensions and \[\boldsymbol{x}_{t}=-V\boldsymbol{n}+T_{1}\boldsymbol{\tau}_{1}+T_{2} \boldsymbol{\tau}_{2},\quad\boldsymbol{x}\in\Gamma, \tag{7}\] in three space dimensions. Here, the notations \(\boldsymbol{\tau}\), \(\boldsymbol{\tau}_{1}\) and \(\boldsymbol{\tau}_{2}\) mean tangent vectors. Consider the evolution of the overlapping subsets \(\Gamma_{r}\) in three space dimensions. Let \(\boldsymbol{x}^{(r)}\) denote a point on the subset \(\Gamma_{r}\). The evolutions of \(\Gamma_{r}\) are given by \[\boldsymbol{x}_{t}^{(r)}=-V^{(r)}\boldsymbol{n},\quad\boldsymbol{x}^{(r)}\in \Gamma_{r},\quad r=1,\cdots,d. \tag{8}\] where \(V^{(r)}\) are the restrictions of \(V\) from \(\Gamma\) to \(\Gamma_{r}\). 
By adding tangential velocities \(T_{1}^{(r)}\) and \(T_{2}^{(r)}\), it yields the equivalent evolution equations \[\boldsymbol{x}_{t}^{(r)}=-V^{(r)}\boldsymbol{n}+T_{1}^{(r)}\boldsymbol{\tau}_ {1}+T_{2}^{(r)}\boldsymbol{\tau}_{2},\quad\boldsymbol{x}^{(r)}\in\Gamma_{r}, \quad r=1,\cdots,d. \tag{9}\] Hence, the evolution equation (1) of \(\Gamma\) is divided into a sequence of evolution equations of subsets \(\Gamma_{r}\). With the divided formulation (9), for each subset \(\Gamma\), the tangential velocities \(T_{1}^{(r)}\) and \(T_{2}^{(r)}\) can be chosen independently. Due to the overlapping surface decomposition (5), each subset \(\Gamma_{r}\) can be easily parameterized with Cartesian coordinates in a planar domain \(\Omega_{r}\subset\mathbb{R}^{d-1}\). For example, in three space dimensions, denote by \(\Omega_{3}\) the projection of \(\Gamma_{3}\) onto \(X\)-\(Y\) plane. Then \(\Gamma_{3}\) can be represented by the Monge patch \(\mathbf{x}(x,y)=\mathbf{x}(x,y,z(x,y)),(x,y)\in\Omega_{3}\) in which \(z(x,y)\) is a height function. With this understanding, the evolution of \(\Gamma_{r}\) can be described as a time-dependent height function on the base plane \(\Omega_{3}\) in \(d-1\) space dimensions. The height function representation is an Eulerian description of the moving hypersurface. Its numerical approximation is much simpler than that for tracking a moving hypersurface with its Lagrangian motion. With Eulerian description, it is natural to use a Cartesian grid to approximate the height function. With the understanding that the Eulerian description is equivalent to moving marker points of the hypersurface along fixed grid lines, the evolution equations of the equivalent Eulerian motion can be derived by carefully choosing tangent velocities \(T^{(r)}\), \(T_{1}^{(r)}\) and \(T_{2}^{(r)}\) such that \(\Gamma_{r}\) only have one non-zero velocity component in direction \(\mathbf{e}_{r}\). #### 2.1.1 Two space dimensional case Let \(\Gamma\subset\mathbb{R}^{2}\) be a time-dependent Jordan curve which is given by \(\mathbf{x}(t)=(x(\theta,t),y(\theta,t))\) where \(\theta\) parameterizes the curve. Its curvature \(\kappa\) and unit outward normal vector are, respectively, given by \[\kappa=\frac{x_{\theta}y_{\theta\theta}-x_{\theta\theta}y_{\theta}}{(x_{ \theta}^{2}+y_{\theta}^{2})^{\frac{3}{2}}},\quad\mathbf{n}=\frac{1}{(x_{\theta}^{ 2}+y_{\theta}^{2})^{\frac{1}{2}}}\left(\begin{array}{c}y_{\theta}\\ -x_{\theta}\end{array}\right). \tag{10}\] For each subset \(\Gamma_{r},r=1,2\), it can be parameterized by \(x\) or \(y\) to be a height function \(y=y(x)\) or \(x=x(y)\) depending on its orientation. Suppose \(\Gamma_{r}\) is represented in the form \(\eta=\eta(\xi)\) where \((\xi,\eta)\) coincides with \((x,y)\) or \((y,x)\). After extra tangential velocity \(T\) is added into the original evolution equation (1), the evolution of \(\Gamma_{r}\) is equivalent to \[\frac{d}{dt}\left(\begin{array}{c}\xi\\ \eta\end{array}\right)=-\;\frac{\eta_{\xi\xi}}{(\eta_{\xi}^{2}+1)^{2}}\left( \begin{array}{c}\eta_{\xi}\\ -1\end{array}\right)+T\left(\begin{array}{c}1\\ \eta_{\xi}\end{array}\right). \tag{11}\] It is worth mentioning that equation (11) does not rely on the orientation of the curve due to the cancellation of signs in the curvature and normal vector when one reverses the parameterization, namely, from \(\xi\) to \(-\xi\). To determine a specific tangential velocity \(T\) such that marker points on \(\Gamma_{r}\) only have non-zero velocity component in \(\mathbf{e}_{r}\) direction. 
One needs to set \(\xi_{t}=0\), \(i.e\). \[\xi_{t}=-\frac{\eta_{\xi}\eta_{\xi\xi}}{(\eta_{\xi}^{2}+1)^{2}}+T=0. \tag{12}\] The expression of \(T\) can be easily solved. By substituting the determined \(T\) into (11), it yields the evolution law for \(\Gamma_{r}\) in terms of height function, \[\eta_{t}=\frac{\eta_{\xi\xi}}{\eta_{\xi}^{2}+1}, \tag{13}\] which is a scalar parabolic-type partial differential equation. #### 2.1.2 Three space dimensional case The derivation in three space dimensions is similar. For each subset \(\Gamma_{r},r=1,2,3\), it can be regarded as a Monge patch \(\mathbf{x}(u,v)=\mathbf{x}(u,v,w(u,v)),(u,v)\in\Omega_{r}\) where \(\Omega_{r}\) is the projection of \(\Gamma_{r}\) on its base plane and \(w(u,v)\) is the height function. Denote by \(\mathbf{\tau}_{1}=(1,0,w_{u})^{T},\mathbf{\tau}_{2}=(0,1,w_{v})^{T}\) two tangent vectors of \(\Gamma_{r}\). After adding two tangential velocities \(T_{1}\) and \(T_{2}\), the evolution equation (1) in three space dimensions becomes \[\frac{d}{dt}\begin{pmatrix}u\\ v\\ w\end{pmatrix}=\frac{(1+w_{u}^{2})w_{vv}-2w_{u}w_{v}w_{uv}+(1+w_{v}^{2})w_{uu} }{2(1+w_{u}^{2}+w_{v}^{2})^{2}}\begin{pmatrix}-w_{u}\\ -w_{v}\\ 1\end{pmatrix}+T_{1}\begin{pmatrix}1\\ 0\\ w_{u}\end{pmatrix}+T_{2}\begin{pmatrix}0\\ 1\\ w_{v}\end{pmatrix}. \tag{14}\] By setting \(u_{t}=v_{t}=0\), one can solve for \(T_{1}\) and \(T_{2}\), \[T_{1} =w_{u}\frac{(1+w_{u}^{2})w_{vv}-2w_{u}w_{v}w_{uv}+(1+w_{v}^{2})w_ {uu}}{2(1+w_{u}^{2}+w_{v}^{2})^{2}}, \tag{15}\] \[T_{2} =w_{v}\frac{(1+w_{u}^{2})w_{vv}-2w_{u}w_{v}w_{uv}+(1+w_{v}^{2})w_ {uu}}{2(1+w_{u}^{2}+w_{v}^{2})^{2}}. \tag{16}\] By substituting (15) and (16) into (14), it gives the evolution of \(\Gamma_{r}\) in terms of its height functions \(w\), \[w_{t}=\frac{(1+w_{u}^{2})w_{vv}-2w_{u}w_{v}w_{uv}+(1+w_{v}^{2})w_{uu}}{2(1+w_{ u}^{2}+w_{v}^{2})}. \tag{17}\] The equation (17) is also a scalar parabolic-type partial differential equation. ### Matching condition Until now, the original evolution equation (1) is reformulated as a sequence of scalar partial differential equations (13) and (17). To ensure the well-posedness of the divided problem, we follow the idea of domain decomposition [21] to add an extra matching condition for the solution of the divided problem at the overlapping zone such that the equations (13) and (17) have boundary condition. A simple choice of the matching condition is to enforce continuity of the global solution at an overlapping zone with a partition of unity, \[\mathbf{x}^{(r)}=\sum_{j\neq r}\chi_{j}\mathbf{x}^{(j)},\quad\mathbf{x}^{(r)}\in\partial \Gamma_{r}. \tag{18}\] where \(\partial\Gamma_{r}\) denotes the boundary of \(\Gamma_{r}\). 
Here, the notation \(\chi_{j}\) is the partition of unity subordinate to the subset \(\Gamma_{j}\), which satisfies \[\begin{cases}\chi_{r}(\mathbf{x})\geq 0,&\mathbf{x}\in\Gamma_{r},\\ \chi_{r}(\mathbf{x})=0,&\mathbf{x}\in\Gamma\backslash\Gamma_{r},\\ \sum_{r=1}^{d}\chi_{r}(\mathbf{x})=1,&\mathbf{x}\in\Gamma.\end{cases} \tag{19}\] The divided problem (9) together with the matching condition (18) forms an equivalent coupled system, which is called hybrid formulation, to equation (1) in three space dimensions, \[\begin{cases}\mathbf{x}_{t}^{(r)}=-V^{(r)}\mathbf{n}+T_{1}^{(r)}\mathbf{\tau}_{1}+T_{2}^{(r )}\mathbf{\tau}_{2},&\mathbf{x}^{(r)}\in\Gamma_{r},\\ \mathbf{x}^{(r)}=\sum_{j\neq r}\chi_{j}\mathbf{x}^{(j)},&\mathbf{x}^{(j)}\in\partial\Gamma _{r}.\end{cases} \tag{20}\] Once the hybrid formulation (20) is solved, the global solution \(\mathbf{x}\) can be reconstructed with a partition of unity, \[\mathbf{x}=\sum_{r=1}^{d}\chi_{r}\mathbf{x}^{(r)}. \tag{21}\] Unlike Ambrose's method [2], which is applicable only for a particular class of surfaces with doubly-periodic boundary conditions, our method can handle more general cases, including closed surfaces, with this hybrid formulation. ## 3 Numerical Methods In this section, the numerical methods for mean curvature flow are described, including the discrete representation of a moving hypersurface, numerical discretizations of the partial differential equations (13) and (17) as well as the matching condition (18). ### Hypersurface representation Let \(\Gamma\) be a smooth closed hypersurface. It is separately represented by the height functions of its overlapping subsets \(\Gamma_{r},r=1,\cdots,d\), which are approximated by nodal values at Cartesian grid nodes in the base domain \(\Omega_{r}\). Equivalently, the nodal values are, in fact, the intersection points of \(\Gamma_{r}\) and grid lines, which are aligned with \(\mathbf{e}_{r}\). At the implementation level, this is done by selecting from all intersection points \(\mathbf{p}\) of \(\Gamma\) and grid lines for certain ones which satisfy the decomposition rule: \[\mathbf{p}\in\Gamma_{r}\text{ and }\mathbf{n}(\mathbf{p})\cdot\mathbf{e}_{r}>\cos\alpha. \tag{22}\] where \(\mathbf{n}(\mathbf{p})\) is the unit outward normal at \(\mathbf{p}\) and \(\alpha\) is a given threshold angle. Those points which satisfy (22) are named as control points for \(\Gamma_{r}\) and the point set is denoted by \(\Gamma_{r}^{h}\). We also denote all control points on \(\Gamma\) by \(\Gamma^{h}=\cup_{r=1}^{d}\Gamma_{r}^{h}\). The point set \(\Gamma^{h}\) is used to represent \(\Gamma\) in terms of its overlapping subsets. We remark that, although points in \(\Gamma^{h}\) are not quasi-equidistant, local interpolation stencil only involves points in each subset \(\Gamma_{r}^{h}\), which are quasi-equidistant. Figure 1 and 2 show the distribution of control points in two and three space dimensions, respectively. Figure 1: Control points for the representation of an ellipse: (a) control points on \(\Gamma_{1}\) (red rectangle markers); (b) control points on \(\Gamma_{2}\) (blue circle markers). Figure 2: Control points for the representation of an ellipsoid on \(\Gamma_{1}\). The advantage of this representation of hypersurface is evident. One can easily find out that the projections of control points in \(\Gamma_{r}^{h}\) coincide with Cartesian grid nodes in \(\Omega_{r}\). 
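In code, selecting the control points amounts to a single componentwise test on the normals. The following Matlab sketch applies the threshold of rule (22), using the magnitude of the normal component as in the subset definition (5); the arrays `pts` and `normals`, holding the intersection points and their unit outward normals, as well as the subset index `r`, are hypothetical inputs not defined in the paper.

```matlab
% Sketch of the control-point selection for subset Gamma_r:
% keep an intersection point if the magnitude of the r-th component of its
% unit outward normal exceeds cos(alpha).
% pts: M-by-d intersection points; normals: M-by-d unit outward normals (assumed given).
r = 1;
alpha = pi/3;                            % threshold angle in (cos^-1(1/sqrt(d)), pi/2)
keep = abs(normals(:, r)) > cos(alpha);  % rule (22)
ctrl_r = pts(keep, :);                   % discrete point set Gamma_r^h
```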
Instead of tracking \(\Gamma\) by marker points whose velocities are in \(d\) dimensions, one needs to solve the evolution for the height functions, \(i.e.\) marker points moving along grid lines, which only change values in one dimension. #### 3.1.1 Solving PDEs on hypersurfaces Generally, for a closed smooth hypersurface \(\Gamma\), the subset \(\Gamma_{r}\) consists of several isolated components which are denoted by \(\Gamma_{r,l},l=1,2,\cdots\). For example, suppose \(\Gamma:\boldsymbol{x}=\boldsymbol{x}(\theta),\theta\in[0,2\pi)\) is a circle, then the curve segments \(\Gamma_{1,1}:\boldsymbol{x}=\boldsymbol{x}(\theta),\theta\in(2\pi-\alpha,2 \pi)\cup[0,\alpha)\) and \(\Gamma_{1,2}:\boldsymbol{x}=\boldsymbol{x}(\theta),\theta\in(\pi-\alpha,\pi+\alpha)\) are both subsets of \(\Gamma_{1}\). Let \(\Omega_{r,l}\) denote the projection of \(\Gamma_{r,l}\) on the base plane. If \(\Omega_{r,l}\) overlap with each other, then \(\Gamma_{r}\) is a multi-valued function on \(\Omega_{r}\), which induces ambiguity. The correct understanding is to separate \(\Gamma_{r,l}\) from each other, and on which PDEs are solved independently. Once the components are separated from each other, the ambiguity is removed, since \(\Gamma_{r,l}\) is a single-valued function on \(\Omega_{r,l}\) (see Figure 3). Only points on the same component are involved in a local stencil for solving PDEs. In the implementation, one can check their distance and normals to determine if two points are on the same component \(\Gamma_{r,l}\). We identify two points \(\boldsymbol{p}\) and \(\boldsymbol{q}\) on the same component if they satisfy \[\|\boldsymbol{p}-\boldsymbol{q}\|<D_{0}\text{ and }\boldsymbol{n}(\boldsymbol{p}) \cdot\boldsymbol{n}(\boldsymbol{q})<\cos(\theta_{0}), \tag{23}\] where \(\|\cdot\|\) is the Euclidean distance, \(D_{0}\) and \(\theta_{0}\) are threshold values given in advance. In this work, we set \(D_{0}=5h\) and \(\theta_{0}=\pi/6\). The separation procedure is meant to find the correct finite difference stencil point from possible intersection points that lie on the same grid line. The implementation based on the criteria (23) can naturally handle cases with multiple (more than 2) components in each \(\Gamma_{r}\), for example, oscillating curves or surfaces, as long as the grid is fine enough to resolve them. #### 3.1.2 Interpolation on hypersurface In the discrete representation \(\Gamma^{h}\), geometric quantities and functions on the hypersurface are evaluated by local interpolation. Given a point \(\boldsymbol{p}\in\Gamma\), to interpolate the function value at \(\boldsymbol{p}\) using function values on \(\Gamma_{r}^{h}\), one needs to find its projection point \(\boldsymbol{p}^{\star}\in\Omega\) in which the selection of \(r\) depends on the direction of \(\boldsymbol{n}(\boldsymbol{p})\). A quadratic polynomial is locally constructed for interpolation, \[P_{2}(u,v)=c_{1}+c_{2}u+c_{3}v+c_{4}u^{2}+c_{5}v^{2}+c_{6}uv, \tag{24}\] where \((u,v)\) is the local coordinate near \(\boldsymbol{p}^{\star}\) in the base plane. With the help of a Cartesian grid, finding interpolation stencils on \(\Gamma^{h}\) is very simple. One can attach control points to their closest grid nodes and find stencil points by searching nearby grid nodes with their indices in the Cartesian grid. #### 3.1.3 Evolving the hypersurface After the PDEs (13) and (17) are solved for a time step, control points are moved to different positions and form a new hypersurface. 
Old points can no longer represent the new hypersurface since they do not satisfy the decomposition rule mentioned before. To represent the new hypersurface, new control points must be added, and some old ones must be deleted. This is achieved by finding out all intersection points by local interpolation and select for new control points. Even in three space dimensions, intersection points can be computed using one-dimensional polynomial interpolations since both stencil points and new intersection points are on the same plane. Take the surface component \(\Gamma_{3,l}\) as an example. The component is discretized by intersection points with coordinates \((x_{i},y_{j},\eta_{i,j})\) where \((x_{i},y_{j})\in\Omega_{3,l}^{h}\). In order to find new intersection points of the component with the grid line \(\{(x,y,z)|y=y_{j},z=z_{k}\}\), one first needs to identify the intersection interval \((x_{i_{0}},x_{i_{1}})\) by check the side of grid nodes, and then choose three intersection points \((x_{i_{0}-1},y_{j},\eta_{i_{0}-1,j})\), \((x_{i_{0}},y_{j},\eta_{i_{0},j})\), and \((x_{i_{0}+1},y_{j},\eta_{i_{0}+1,j})\) to locally construct a 1D quadratic polynomial \(z=P_{2}(x)\). By solving the equation \(P_{2}(x^{*})=z_{k}\) with either the Newton method or the bisection method, one can obtain the new intersection point \((x^{*},y_{j},z_{k})\). After an intersection point is found, following (22), one can determine whether it is kept or deleted by checking the normal vector, which is also evaluated by locally constructing a parabola. It is worth mentioning that if a new component \(\Gamma_{r,l}\) is too small and does not have enough control points for interpolation, the whole component should be deleted. ### Discretization of PDEs #### 3.2.1 Temporal discretization The equation (13) and (17) are parabolic PDEs and have second-order spatial derivatives. Generally, explicit time integration methods, such as the forward Euler method, for PDEs involving high-order spatial derivatives suffer from high-order stability constraints, and one has to use small time steps to solve the equations. These problems are known as stiff problems. The stiffness for mean curvature flow is rather severe. Take two space dimensional cases as an example. If one discretizes \(\Gamma\) by uniformly partitioning the parameter \(\theta\). An explicit time integration suffers from a second order stability constraint \(\Delta t\leq C(\min_{\theta}s_{\theta}h)^{2}\) where \(C\) is a constant, \(s\) is arclength, and \(h\) is the grid spacing in \(\theta\). Since 2-D mean curvature flow is also known as the "curve shortening flow", \(s_{\theta}h\) decreases with time and results in an even worse situation. Although implicit methods are unconditionally stable for stiff problems, there is another difficulty in implicit time integration for (13) and (17) since it results in a nonlinear system in each time step, for which finding a solution is highly inefficient. Notice that equations (13) and (17) are quasi-linear equations. The source of stiffness comes from the highest-order terms, which are linear in equations (13) and (17). The stiffness can be removed by only treating the highest order terms implicitly with lower order terms treated explicitly. The resulting time integration scheme is semi-implicit in time. It only requires solving linear systems, which are much more acceptable than nonlinear systems. 
Suppose the time interval \([0,T]\) is uniformly partitioned into \(0=t^{0}<t^{1}<\cdots<t^{n}<\cdots<t^{N}=T\) with \(t^{n+1}-t^{n}=\Delta t\). The semi-implicit schemes for equations (13) and (17), respectively, are give by \[\frac{\eta^{n+1}-\eta^{n}}{\Delta t}=\frac{\eta_{\xi\xi}^{n+1}}{(\eta_{\xi}^{ n})^{2}+1}, \tag{25}\] and \[\frac{w^{n+1}-w^{n}}{\Delta t}=\frac{(1+(w_{u}^{n})^{2})w_{vv}^{n+1}-2w_{u}^{n }w_{v}^{n}w_{uv}^{n+1}+(1+(w_{v}^{n})^{2})w_{uu}^{n+1}}{2(1+(w_{u}^{n})^{2}+( w_{v}^{n})^{2})}. \tag{26}\] The semi-implicit schemes (25) and (26) are only in semi-discrete forms. #### 3.2.2 Spatial discretization Generally, physical properties should be encoded into the discretization of spatial derivatives. In the case of mean curvature flow, the evolution of a hypersurface is driven by surface tension, which is diffusive in nature. Since diffusion comes from all directions, it is preferred to adopt central differences for approximating the spatial derivatives in the mean curvature term. Though tangential velocities terms may introduce the convection effect, which commonly should be discretized with methods for hyperbolic PDEs, we remark that the convection effect is expected to be small compared to the diffusion effect. Hence, for simplicity, we approximated all the spatial derivatives in (13) and (17) with central differences. Suppose \(\Gamma\in\mathbb{R}^{d},d=2,3\) is embedded into a bounding box \(\mathcal{B}\) which is the tensor product of one dimensional intervals \(\mathcal{I}_{i},i=1,2,\cdots,d\). If \(\mathcal{I}_{i}\) is uniformly partitioned into a Cartesian grid \(\mathcal{G}_{i}\) for each \(i\), then the tensor product of \(\mathcal{G}_{i}\) forms a natural uniform partition of \(\mathcal{B}\), which is also a Cartesian grid and is denoted by \(\mathcal{G}\). Without loss of generality, we assume equal bounding intervals \(\mathcal{I}_{i}=[a,b],i=1,2,\cdots,d\), and they are uniformly partitioned into \(N\) intervals. Denote by \(h=(b-a)/N\) the mesh parameter. Let \(\Delta_{\xi},\delta_{\xi}^{2}\) denote central difference quotients, \[\Delta_{\xi}u_{i}=\frac{u_{i+1}-u_{i-1}}{2h},\quad\delta_{\xi}^{2}u_{i}=\frac{ u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}.\] The fully discrete form of equation (13) is given by \[\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}=\frac{\delta_{\xi}^{2}u_{i}^{n+1}}{( \Delta_{\xi}u_{i}^{n})^{2}+1}. \tag{27}\] Similarly, introduce the central difference quotients, \[\Delta_{u}w_{ij} = \frac{w_{i+1,j}-w_{i-1,j}}{2h},\quad\Delta_{v}w_{ij}=\frac{w_{i,j +1}-w_{i,j-1}}{2h},\] \[\delta_{uu}^{2}w_{ij} = \frac{w_{i+1,j}+w_{i-1,j}-2w_{ij}}{h^{2}},\quad\delta_{vv}^{2}w_{ ij}=\frac{w_{i,j+1}+w_{i,j-1}-2w_{ij}}{h^{2}}.\] The fully discrete form of equation (17) is given by \[\frac{w_{ij}^{n+1}-w_{ij}^{n}}{\Delta t}=C_{ij,1}^{n}\delta_{vv}^{2}w_{ij}^{n +1}+C_{ij,2}^{n}\Delta_{u}\Delta_{v}w_{ij}^{n+1}+C_{ij,3}^{n}\delta_{uu}^{2}w _{ij}^{n+1}, \tag{28}\] where the coefficients \(C_{ij,1}\), \(C_{ij,2}\) and \(C_{ij,3}\) are specified by \[C_{ij,1}^{n} = \frac{1+(\Delta_{u}w_{ij}^{n})^{2}}{2(1+(\Delta_{u}w_{ij}^{n})^{2 }+(\Delta_{u}w_{ij}^{n})^{2})},\] \[C_{ij,2}^{n} = \frac{-\Delta_{u}w_{ij}^{n}\Delta_{v}w_{ij}^{n}}{1+(\Delta_{u}w_ {ij}^{n})^{2}+(\Delta_{u}w_{ij}^{n})^{2}},\] \[C_{ij,3}^{n} = \frac{1+(\Delta_{v}w_{ij}^{n})^{2}}{2(1+(\Delta_{u}w_{ij}^{n})^{2 }+(\Delta_{u}w_{ij}^{n})^{2})}.\] ### Boundary condition Note that the finite difference schemes (27) and (28) are three-point and nine-point schemes, respectively. 
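Before discussing how the boundary values are obtained, the following Matlab sketch shows where they enter a single semi-implicit step of (27) on one curve component; the Dirichlet data `ub_left` and `ub_right` are hypothetical inputs standing in for the values supplied by the matching condition, and the dense backslash solve stands in for the Thomas algorithm.

```matlab
% One semi-implicit step of scheme (27) on a single curve component.
% u: column vector of height values at the component's grid nodes (previous time level);
% h: grid spacing; dt: time step; ub_left, ub_right: Dirichlet boundary data (assumed given).
function u_new = semi_implicit_step_2d(u, h, dt, ub_left, ub_right)
    M = numel(u);
    A = eye(M);                             % identity rows keep the boundary values
    rhs = u(:);
    for i = 2:M-1
        slope = (u(i+1) - u(i-1)) / (2*h);  % Delta_xi u_i^n
        a = dt / (h^2 * (slope^2 + 1));     % weight of delta_xi^2 u_i^{n+1}
        A(i, i-1) = -a;
        A(i, i)   = 1 + 2*a;
        A(i, i+1) = -a;
    end
    rhs(1) = ub_left;                       % Dirichlet rows from the matching condition
    rhs(M) = ub_right;
    u_new = A \ rhs;                        % tridiagonal system; the Thomas algorithm applies
end
```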
Denote by \(\Omega_{r,l}^{h}\) all grid nodes in \(\Omega_{r,l}\). A grid node is identified as a boundary node if all of its stencil nodes belong to \(\Omega_{r,l}^{h}\). Otherwise, it is identified as an interior node, see Figure 4 for an example. The set of boundary nodes is denoted as \(\partial\Omega_{r,l}^{h}\). Further, a control point is identified as a boundary control point if its projection on the base plane belongs to \(\partial\Omega_{r,l}^{h}\). Denote by \(\partial I_{r,l}^{h}\) the set of boundary control points. It is worthwhile to mention that \(\partial I_{r,l}^{h}\) works as the numerical boundary of \(\Gamma_{r,l}\) and is not necessarily a subset of \(\partial\Gamma_{r,l}\). At boundary nodes, the match condition (18) is utilized to pose Dirichlet boundary conditions for (13) and (17). Numerically, the partition of unity \(\chi_{r}\) is taken as \[\chi_{r}(\mathbf{x})=\begin{cases}1,&\text{if }|\mathbf{n}(\mathbf{x})\cdot\mathbf{e}_{r}|>| \mathbf{n}(\mathbf{x})\cdot\mathbf{e}_{j}|,\forall j\neq r,\\ 0,&\text{otherwise}.\end{cases} \tag{29}\] This simple choice of partition of unity is sufficient for accuracy, though it is not a smooth one. The partition of unity can be understood as evaluating values at boundary nodes on \(\Gamma_{r}\) by interpolation from interior nodes on other subsets \(\Gamma_{j},j\neq r\). Since subsets overlap with each other, interpolation stencils always exist. In the following, we introduce the approaches to discretize the matching condition (18). Figure 4: A schematic of node identification of the nine-point scheme (28) on a base domain: boundary nodes \(\partial\Omega_{r,l}^{h}\) (marked by red rectangles) and interior nodes \(\Omega_{r,l}^{h}\backslash\partial\Omega_{r,l}^{h}\)(marked by blue triangles). #### 3.3.1 Coupled matching condition The simplest way to discretize (18) is to enforce it at every time level in a discrete sense. \[\mathbf{x}^{(r),n+1}=\sum_{j\neq r}\chi_{j}\mathbf{x}^{(j),n+1},\quad\mathbf{x}^{(r),n+1}\in \partial\Gamma_{r}^{h,n+1}. \tag{30}\] This leads to a nonlinear system that couples the solutions on all subsets \(\Gamma_{r}\) in each time step. Let \(\mathbf{u}^{i}\) and \(\mathbf{u}^{b}\) denote the vectors of solutions at the interior and boundary nodes, respectively. The system, which needs to be solved in each time step, is written as \[\begin{split}\mathbf{A}\mathbf{u}^{i}+\mathbf{Q}\mathbf{u}^{b}& =\mathbf{f},\\ \mathbf{u}^{b}&=\mathbf{\Pi}\mathbf{u}^{i},\end{split} \tag{31}\] where \(\mathbf{A},\mathbf{Q}\) are matrices, \(\mathbf{\Pi}\) is the interpolation operator and \(\mathbf{f}\) is the vector containing solutions at previous time levels. Typically, in the system (31), the first equation approximates PDEs, and the second approximates the matching condition. Here, the operator \(\mathbf{\Pi}\) is essentially nonlinear since discretizing (30) involves the root-finding of polynomials. Note that the matrix \(\mathbf{A}\) is block-wise diagonal and is invertible. The nonlinear system (31) can be solved, in spirit, with the technique of Schur complement. One first needs to solve the lower dimensional system \[\mathbf{\Pi}\mathbf{A}^{-1}(\mathbf{f}-\mathbf{Q}\mathbf{u}^{b})-\mathbf{u}^{b}=0, \tag{32}\] for \(\mathbf{u}^{b}\) and then obtains \(\mathbf{u}^{i}\) by solving \[\mathbf{A}\mathbf{u}^{i}=\mathbf{f}-\mathbf{Q}\mathbf{u}^{b}. 
\tag{33}\] The system (32), which looks like a Schur complement system but is nonlinear, can be solved with the method widely used in domain decomposition methods, the Schwarz alternating method, which is a block-wise Gauss-Seidel type iteration method [21; 29]. The main idea of the method is to solve problems alternately on each subdomain and to provide boundary conditions for other subdomains. Generally, the Schwarz alternating method converges geometrically within a few iterations. The blocks in matrix \(\mathbf{A}\) are the approximations of elliptic differential operators, which can also be inverted by an iterative method, such as the successive over-relaxation (SOR) method. In particular, in two space dimensions, \(\mathbf{A}\) is block-wise tri-diagonal, and the Thomas algorithm is applicable. #### 3.3.2 ADI method Instead of directly enforcing the matching condition (18), we can also follow the idea of the alternating direction implicit (ADI) method, which is used to solve time-dependent PDEs in multiple space dimensions and discretize the matching condition with a time splitting technique. Note that there is no need to enforce (18) accurately since numerical discretization of the PDEs has already introduced numerical errors. One only needs to approximate it with an error on the order of \(\mathcal{O}(\tau^{p})\) where \(\tau\) is the time step, and \(p\) is the approximation order. We evolve \(\Gamma_{r}\) alternately and compute boundary conditions with the newest solutions, which is also an accurate approximation to the matching condition, by \[\mathbf{x}^{(r),n+1}=\sum_{j\neq r}\chi_{j}\mathbf{x}^{(j),n^{*}},\quad\mathbf{x}^{(r),n+1 }\in\partial\Gamma_{r}^{h,n+1}. \tag{34}\] where \(\mathbf{x}^{(j),n^{*}}\) is the newest solution on \(\Gamma_{j}\). For example, boundary conditions for \(\Gamma_{1}\) are interpolated from the newest solution on \(\Gamma_{2}\) and \(\Gamma_{3}\); then, update the solution on \(\Gamma_{1}\) and compute boundary conditions for \(\Gamma_{2}\) using the newest solution on \(\Gamma_{1}\) and \(\Gamma_{3}\), etc. This approach is non-iterative in the sense that the solutions on subsets \(\Gamma_{r}\) are not coupled, and no Schur complement system needs to be solved in each time step. This ADI method is also a time-splitting strategy with a formal splitting error on the order of \(\mathcal{O}(\tau)\). The two approaches only differ in the computation of boundary conditions. Figure 5 presents the numerical solutions obtained by these two approaches. One can see that the solutions obtained by these two approaches only have subtle differences at several nodes in the overlapping region. In fact, the ADI method and the Schwarz alternating method are closely related to this problem. The ADI method is only a strategy to provide Dirichlet boundary conditions for PDEs (13) and (17). One can also repeatedly use the ADI method to compute new boundary conditions and update the solution at \(t^{n+1}\), which exactly leads to the Schwarz alternating method. Therefore, the Schwarz alternating method reduces to the ADI method if only one iteration is performed in each time step. Figure 5: Zoom-in snapshots of numerical solutions. (a) initial solution; (b) solution after a time step by the ADI method; (c) solution after a time step by coupled matching condition. Control points on horizontal lines are marked as red rectangles, and those on vertical lines are marked as blue circles. 
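In code, the ADI strategy (34) amounts to a single sweep over the subsets per time step. The following Matlab sketch uses two hypothetical helper functions, `boundary_from_others` (interpolation of boundary values from the newest solutions on the other subsets, i.e., the discrete matching condition) and `advance_subset` (one semi-implicit solve of (27) or (28)); neither name comes from the paper.

```matlab
% One time step of the ADI-type sweep (34) over the d overlapping subsets.
for r = 1:d
    % Dirichlet data for Gamma_r interpolated from the newest solutions on Gamma_j, j ~= r
    ub = boundary_from_others(subsets, r);               % hypothetical helper
    % advance the height functions of Gamma_r by one semi-implicit step
    subsets{r} = advance_subset(subsets{r}, ub, dt, h);  % hypothetical helper
end
```

Repeating this sweep within the same time step, each time with freshly interpolated boundary data, recovers the Schwarz alternating method discussed above.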
## 4 Algorithm summary In this section, the algorithm for solving mean curvature flows (1) with the proposed ADI method is summarized as follows: Algorithm. The ADI method for mean curvature flows: Step 1. Given the initial hypersurface by its parametric form or a level set function, embed it into a bounding box that is uniformly partitioned into a Cartesian grid and find the control points with overlapping decomposition strategy described in Subsection 3.1. Step 2. In each time step, evolve the overlapping subsets alternately. For each subset, do the following three procedures: 1. identify all the nodes in each isolated component of the subset with the breadth-first search method. 2. compute the Dirichlet-type boundary condition for boundary nodes with the discrete matching condition (34); 3. evolve the subset to the next time level by solving (27) or (28); Step 4. Update control points such that they satisfy the decomposition strategy described in Subsection 3.1. Step 5. Repeat steps 2-4 until the final computational time. Remark 1: Procedure (a) in Step 2 is only for matrix assembly such that direct methods, such as the Thomas algorithm, are applicable. Suppose the finite difference equations (27) and (28) are solved with iterative methods which only require the matrix-vector product. In that case, one can find the local stencil points on-the-fly instead of finding all the nodes in the components in advance. ## 5 Numerical results This section presents numerical examples in two and three space dimensions to validate the proposed method. Initial hypersurfaces are given in parametric forms or the zero level set of level set functions, which will be prescribed in each example. In all the numerical examples, the bounding box \(\mathcal{B}\) is uniformly partitioned into a Cartesian grid \(\mathcal{G}\) with \(N\) intervals in each direction. For problems with exact solutions, we estimate the numerical error at a surface point by finding its projection on the hypersurface of the exact solution and computing the distance. We take the solution on a fine grid for problems without exact solutions as a reference "exact" solution. Then, we estimate the numerical error at a surface point by finding the closest surface point on the reference solution and computing the distance. The numerical errors in the maximum norm and \(l_{2}\) norm are computed by \[\|\mathbf{e}_{h}\|_{\infty}=\max_{\mathbf{x}_{i}\in\Gamma^{h}}\left\{\|\mathbf{x}_{i}- \mathbf{x}_{i}^{ref}\|\right\},\quad\|\mathbf{e}_{h}\|_{2}=\sqrt{\frac{1}{N_{I}} \sum_{\mathbf{x}_{i}\in\Gamma^{h}}\|\mathbf{x}_{i}-\mathbf{x}_{i}^{ref}\|^{2}}, \tag{35}\] where \(\mathbf{x}_{i}^{ref}\) is the exact solution or reference solution on a fine grid associated with \(\mathbf{x}_{i}\) and \(N_{\Gamma}\) is the total point number. The following numerical experiments are performed on a personal computer with a 3.80 GHz Intel Core i7 processor. The codes for conducting the numerical experiments are written in C++ computer language. ### Two space dimensional examples First, we solve the mean curvature flow for a simple case and compare the numerical solution with the exact solution to verify the convergence of the proposed method. The initial shape is chosen such that the curve is a circle whose radius \(r(t)\) satisfies \[r(t)=\sqrt{1-2t}. \tag{36}\] The bounding box is taken as \([-1.2,1.2]^{2}\). Note that the curve will eventually shrink to a point at \(T_{end}=0.5\). 
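For this benchmark the error norms (35) are particularly simple: taking the circle to be centered at the origin (as the symmetric bounding box suggests), the distance from a control point to the exact curve is just the deviation of its radius from (36). A Matlab sketch (the array `pts`, holding the control-point coordinates at time \(T\), is a hypothetical input):

```matlab
% Error norms (35) for the shrinking-circle benchmark at time T.
% pts: M-by-2 array of control-point coordinates at time T (assumed given).
rT = sqrt(1 - 2*T);                     % exact radius (36)
dist = abs(sqrt(sum(pts.^2, 2)) - rT);  % pointwise distance to the exact circle
err_max = max(dist);                    % maximum norm
err_l2  = sqrt(mean(dist.^2));          % l2 norm
```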
We chose to estimate the numerical errors at \(T=0.2\) to ensure that the coarsest grid \(N=64\) can fully resolve the curve during the computation. On finer grids, the computation can last longer than \(T\). Time step size is chosen to be \(\Delta t=0.1\Delta x\) where \(\Delta x\) is the spatial grid size. Numerical results are summarized in Table 1. Next, we change the initial shape to an ellipse which is given by \[\begin{cases}x=a\cos(\theta),\\ y=b\sin(\theta),\end{cases}\quad\theta\in[0,2\pi), \tag{37}\] with \(a=1.0,b=0.5\). The problem is solved in the bounding box \(\mathcal{B}=[-1.2,1.2]^{2}\). Time step size is taken as \(\Delta t=0.1\Delta x\). Since there is no exact solution for this configuration, the solution on a fine grid with \(N=2048\) is chosen as a reference solution to estimate numerical errors. The estimated error and the convergence order are summarized in Table 2. It can be observed that the convergence order is a bit larger than 1. This may be due to the inaccurate estimation of numerical error based on the distance between the surface point to its closest point in the reference solution. The evolution history of the curve is presented in Figure 6. The changes in curve length and enclosed area are computed on the grid with \(N=1024\) and shown in Figure \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline N & \(\|\mathbf{e}_{h}\|_{\infty}\) & order & \(\|\mathbf{e}_{h}\|_{2}\) & order \\ \hline 64 & 7.99e-03 & - & 3.99e-03 & - \\ \hline 128 & 3.80e-03 & 1.07 & 1.79e-04 & 1.16 \\ \hline 256 & 1.88e-03 & 1.02 & 8.42e-04 & 1.09 \\ \hline 512 & 9.45e-04 & 0.99 & 4.12e-05 & 1.03 \\ \hline 1024 & 4.72e-04 & 1.00 & 2.05e-05 & 1.01 \\ \hline \end{tabular} \end{table} Table 1: Numerical error and convergence order of the 2D MCF for a circle-shaped initial curve. 7. It can be observed that the enclosed area loss rate compares favorably with the theoretic result, whose slope is \(m_{ref}=-2\pi\). We also chose a five-fold star-shaped initial curve, which is given by \[\begin{cases}x=a(\kappa+\eta\sin(m\theta))\cos(\theta),\\ y=b(\kappa+\eta\sin(m\theta))\sin(\theta),\end{cases}\theta\in[0,2\pi), \tag{38}\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(\|\mathbf{e}_{h}\|_{\infty}\) & order & \(\|\mathbf{e}_{h}\|_{2}\) & order \\ \hline 128 & 2.45e-02 & - & 1.84e-02 & - \\ \hline 256 & 1.04e-02 & 1.24 & 7.95e-03 & 1.21 \\ \hline 512 & 4.31e-03 & 1.27 & 3.31e-03 & 1.26 \\ \hline 1024 & 1.55e-03 & 1.48 & 1.15e-03 & 1.52 \\ \hline \end{tabular} \end{table} Table 2: Numerical error and convergence order of the 2D MCF for an ellipse-shaped initial curve. Figure 6: Time evolution of the MCF for an ellipse-shaped curve. Figure 7: Time evolution of curve length and enclosed area of the ellipse-shaped curve. with \(a=1.0,b=1.0,\kappa=0.8,\eta=0.2\) and \(m=5\). The bounding box and time step size are the same as those in the last case. Estimated numerical errors and convergence orders are summarized in Table 3. The evolution history of the curve and changes in curve length and enclosed area are presented in Figure 8 and 7, respectively. The numerical results are also consistent with theoretical results. 
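The enclosed-area histories reported in these figures can be reproduced directly from the marker points with the shoelace formula; by (4), the computed area should decrease at the constant rate \(-2\pi\). A one-line Matlab sketch (the vectors `xs` and `ys`, holding the marker coordinates ordered once around the closed curve, are hypothetical inputs):

```matlab
% Enclosed area of the closed polygon through the ordered marker points.
A = 0.5 * abs(sum(xs .* circshift(ys, -1) - ys .* circshift(xs, -1)));
```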
For this case, we compare the present method with an explicit time-advancing scheme for the mean curvature flow, which discretizes the equation \[\frac{d}{dt}\begin{pmatrix}x\\ y\end{pmatrix}=-\frac{x_{\theta}y_{\theta\theta}-x_{\theta\theta}y_{\theta}}{(x _{\theta}^{2}+y_{\theta}^{2})^{2}}\begin{pmatrix}y_{\theta}\\ -x_{\theta}\end{pmatrix},\quad\theta\in[0,2\pi), \tag{39}\] with forward Euler scheme and central differences for temporal and spatial derivatives, respectively. The time step size for the explicit method is chosen \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(\|e_{h}\|_{\infty}\) & order & \(\|e_{h}\|_{2}\) & order \\ \hline 128 & 5.45e-03 & - & 3.21e-03 & - \\ \hline 256 & 2.60e-03 & 1.07 & 1.46e-03 & 1.14 \\ \hline 512 & 1.21e-03 & 1.10 & 6.60e-04 & 1.15 \\ \hline 1024 & 5.00e-04 & 1.28 & 2.26e-05 & 1.55 \\ \hline \end{tabular} \end{table} Table 3: Numerical error and convergence order of the 2D MCF for a five-fold star-shaped initial curve. Figure 8: Time evolution of the MCF for a five-fold star-shaped curve. to ensure numerical stability, using the adaptive time step \[\Delta t=0.8(\min_{\theta}\Delta s)^{2}, \tag{40}\] where \(\Delta s\) denotes the Euclidean distance between two adjacent grid nodes. We properly optimize the codes for both methods and collect the required CPU times for solving the mean curvature flow to the final time \(T=0.1\). The control point number at time \(t\) is denoted by \(M_{t}\). Numerical results are summarized in Table 4. It can be observed that while the forward Euler method is faster on coarse grids, the present method becomes more efficient than the forward Euler method as the point number increases. It can be explained by the complexities of the two methods. To compute the solution to a fixed final time, since the time step size can be chosen as linearly proportional to the spatial grid size for the present method, the computational complexity is \(\mathcal{O}(N^{2})\). However, the computational complexity of the forward Euler scheme is \(\mathcal{O}(N^{3})\) due to high-order constraints on time step size. In fact, the forward Euler method fails for long-time computation since the required time step size quickly decreases to \(10^{-6}\) due to the curve shortening phenomenon and poor mesh quality. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{Present method} & \multicolumn{2}{|c|}{Forward Euler method} \\ \hline \(N\) & \(M_{0}\) & \(M_{T}\) & CPU times(secs) & \(M_{0}=M_{T}\) & CPU times(secs) \\ \hline 128 & 392 & 224 & 9.66e-03 & 224 & 3.73e-03 \\ \hline 256 & 786 & 444 & 3.54e-02 & 448 & 1.73e-02 \\ \hline 512 & 1574 & 890 & 1.34e-01 & 896 & 1.34e-01 \\ \hline 1024 & 3150 & 1776 & 5.15e-01 & 1792 & 1.02e+00 \\ \hline 2048 & 6300 & 3548 & 2.08e+00 & 3584 & 8.30e+00 \\ \hline \end{tabular} \end{table} Table 4: CPU time comparison between the present method and an explicit time advancing scheme. Figure 9: Time evolution of curve length and enclosed area of the five-fold star-shaped curve. ### Three space dimensional examples For three space dimensional mean curvature flows, we test the convergence rate of the method by considering a simple case, a sphere-shaped initial surface. Similar to two space dimensional case, this configuration has an exact solution: the surface maintains a sphere, and the radius \(r(t)\) satisfies \[r(t)=\sqrt{1-2t}. \tag{41}\] Numerical errors are estimated at \(T=0.2\) and summarized in Table 5. 
We also solve the mean curvature flow in three space dimensions for more examples. In the following numerical examples, the bounding boxes partitioned into a Cartesian grid are all chosen as \(\mathcal{B}=[-1.2,1.2]^{3}\). The time step is chosen as \(\Delta t=0.05\Delta x\). In the first case, we set the initial shape as an ellipsoid which is given by \[\Gamma=\left\{(x,y,z)\Big{|}\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2} }{c^{2}}-1=0\right\}, \tag{42}\] with \(a=1.0,b=0.7,c=0.5\). Numerical error and convergence order estimated at \(t=0.2\) are summarized in Table 6. The time evolution of the surface and its area and enclosed volume are presented in Figure 10 and 11, respectively. One can observe that the major axis of the ellipsoid decreases faster compared with the other two axes and the ellipsoid becomes very close to a sphere. Surface area decreases with time, which is consistent with the theoretical result. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(\|\mathbf{e}_{h}\|_{\infty}\) & order & \(\|\mathbf{e}_{h}\|_{2}\) & order \\ \hline 128 & 1.58e-03 & - & 3.78e-04 & - \\ \hline 256 & 1.32e-03 & 0.26 & 1.74e-04 & 1.12 \\ \hline 512 & 3.70e-04 & 1.83 & 6.36e-05 & 1.45 \\ \hline \end{tabular} \end{table} Table 6: Numerical error and convergence order of the 3D MCF for an ellipsoid-shaped initial surface. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline N & \(\|\mathbf{e}_{h}\|_{\infty}\) & order & \(\|\mathbf{e}_{h}\|_{2}\) & order \\ \hline 64 & 4.72e-03 & - & 1.42e-03 & - \\ \hline 128 & 3.46e-03 & 0.45 & 7.46e-04 & 0.93 \\ \hline 256 & 1.45e-03 & 1.25 & 3.63e-04 & 1.04 \\ \hline 512 & 9.18e-04 & 0.66 & 1.82e-04 & 1.00 \\ \hline 1024 & 3.47e-04 & 1.40 & 8.81e-05 & 1.05 \\ \hline \end{tabular} \end{table} Table 5: Numerical error and convergence order of the 3D MCF for a sphere-shaped initial surface. In the second case, we chose a genus 1 torus-shaped initial surface. The surface is given by \[\Gamma=\left\{(x,y,z)\Big{|}\left(c-\sqrt{x^{2}+y^{2}}\right)^{2}+z^{2}-a^{2}=0 \right\}, \tag{43}\] with \(a=0.34,c=0.8\). Numerical error and convergence order are summarized in Table 7. The time evolution of the surface and its area and enclosed volume are presented in Figure 12 and 13, respectively. Driven by mean curvature, the torus-shaped surface becomes thinner with time. Figure 11: Time evolution of the surface area and enclosed volume of the ellipsoid-shaped surface. Figure 10: Time evolution of the MCF for an ellipsoid-shaped surface. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(\|\mathbf{e}_{h}\|_{\infty}\) & order & \(\|\mathbf{e}_{h}\|_{2}\) & order \\ \hline 128 & 1.65e-03 & - & 3.08e-04 & - \\ \hline 256 & 1.29e-03 & 0.36 & 1.64e-04 & 0.91 \\ \hline 512 & 8.26e-04 & 0.64 & 5.94e-05 & 1.47 \\ \hline \end{tabular} \end{table} Table 7: Numerical error and convergence order of the 3D MCF for a torus-shaped initial surface. Figure 12: Time evolution of the MCF for a torus-shaped surface. Figure 13: Time evolution of the surface area and enclosed volume of the torus-shaped surface. In the final case, the initial surface is a four-atom molecular-shaped surface which is given by \[\Gamma=\left\{(x,y,z)\Big{|}c-\sum_{k=1}^{4}\exp\left(-\frac{|\mathbf{x}-\mathbf{x}_{k}|^{ 2}}{r^{2}}\right)=0\right\}, \tag{44}\] with \(\mathbf{x}_{1}=(\sqrt{3}/3,0,-\sqrt{6}/12)\), \(\mathbf{x}_{2}=(-\sqrt{3}/6,0.5,-\sqrt{6}/12)\), \(\mathbf{x}_{3}=(-\sqrt{3}/6,-0.5,-\sqrt{6}/12)\), \(\mathbf{x}_{4}=(0,0,\sqrt{6}/4)\) and \(c=0.5\), \(r=0.5\). 
The numerical error and convergence order are summarized in Table 8. The time evolution of the surface and its area and enclosed volume are presented in Figure 14 and 15, respectively. ## 6 Discussion This work presents a Cartesian grid-based alternating direction implicit method for solving mean curvature flows in two and three space dimensions. The method decomposes a hypersurface into multiple overlapping subsets for which new evolution equations are derived by adding extra tangential velocities. The new formulations for the moving hypersurface only require solving a sequence of scalar quasi-linear parabolic PDEs on planar domains, which is one dimensional lower than the original formulation. The overlapping subsets of the hypersurface can be represented in terms of height functions of Monge patches which are discretized with Cartesian grids. With this representation of the hypersurface, an ADI-type semi-implicit time integration method is proposed such that the subsets can be evolved alternately. The convergence of the proposed method is validated by numerical experiments. The results show that the ADI method is efficient compared with an explicit scheme since it does not have high-order stability constraints on time step size. Mean curvature flows for various hypersurfaces in two and three space dimensions are also presented, including one whose initial configuration is a genus 1 surface. Although the method in this paper is designed for solving mean curvature flows, it is expected to be able to solve more moving interface problems described by geometric evolution laws, such as the anisotropic mean curvature flow, the surface diffusion flow, and the Willmore flow. Further, for problems that involve moving interfaces and bulk PDEs simultaneously, such as the Stefan problem and two-phase Stokes flow, the method can also be applicable if combined with a PDE solver such as the kernel-free boundary integral method [34]. **Funding** W. Y. is financially supported by the National Key R&D Program of China, Project Number 2020YFA0712000, the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDA25010405), the National Natural Science Foundation of China (Grant No. DMS-11771290) and the Science Challenge Project of China (Grant No. TZ2016002). S. L. Figure 15: Time evolution of the surface area and enclosed volume of the molecular-shaped surface. is partially supported by the U.S. National Science Foundation, Division of Mathematical Sciences grants DMS-1720420 and DMS-2309798. **Data availibility** Enquiries about data availability should be directed to the authors. ## Declarations **Conflict of interest** We declare that we have no financial and personal relationships with other people or organizations that can inappropriately influence our work. There is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled.
2309.17354
A Layered Architecture Enabling Metaverse Applications in Smart Manufacturing Environments
The steady rollout of Industrial IoT (IIoT) technology in the manufacturing domain embodies the potential to implement smarter and more resilient production processes. To this end, it is expected that there will be a strong reliance of manufacturing processes on cloud/edge services so as to act intelligently and flexibly. While automation is necessary to handle the environment's complexity, human-in-the-loop design approaches are paramount. In this context, Digital Twins play a crucial role by allowing human operators to inspect and monitor the environment to ensure stability and reliability. Integrating the IIoT with the Metaverse enhances the system's capabilities even further, offering new opportunities for efficiency and collaboration while enabling integrated management of assets and processes. This article presents a layered conceptual architecture as an enabler for smart manufacturing metaverse environments, targeting real-time data collection and representations from shopfloor assets and processes. At the bottom layer, our proposal relies on middleware technology, serving differentiated Quality of Service (QoS) needs of the Operation Technology (OT) monitoring processes. The latter contributes to feeding a virtual layer where data processes reside, creating representations of the monitored phenomena at different timescales. Metaverse applications can consume data by tapping into the metaverse engine, a microservice-oriented and accelerated Platform as a Service (PaaS) layer tasked with bringing data to life. Without loss of generality, we profile different facets of our proposal by relying on two different proof-of-concept inspection applications aimed at real-time monitoring of the network fabric activity and a visual asset monitoring one.
Armir Bujari, Alessandro Calvio, Andrea Garbugli, Paolo Bellavista
2023-09-29T16:01:08Z
http://arxiv.org/abs/2309.17354v1
# A Layered Architecture Enabling Metaverse Applications in Smart Manufacturing Environments ###### Abstract The steady rollout of Industrial IoT (IIoT) technology in the manufacturing domain embodies the potential to implement smarter and more resilient production processes. To this end, it is expected that there will be a strong reliance of manufacturing processes on cloud/edge services so as to act intelligently and flexibly. While automation is necessary to handle the environment's complexity, human-in-the-loop design approaches are paramount. In this context, Digital Twins play a crucial role by allowing human operators to inspect and monitor the environment to ensure stability and reliability. Integrating the IIoT with the Metaverse enhances the system's capabilities even further, offering new opportunities for efficiency and collaboration while enabling integrated management of assets and processes. This article presents a layered conceptual architecture as an enabler for smart manufacturing metaverse environments, targeting real-time data collection and representations from shopfloor assets and processes. At the bottom layer, our proposal relies on middleware technology, serving differentiated Quality of Service (QoS) needs of the Operation Technology (OT) monitoring processes. The latter contributes to feeding a virtual layer where data processes reside, creating representations of the monitored phenomena at different timescales. Metaverse applications can consume data by tapping into the metaverse engine, a microservice-oriented and accelerated Platform as a Service (PaaS) layer tasked with bringing data to life. Without loss of generality, we profile different facets of our proposal by relying on two different proof-of-concept inspection applications aimed at real-time monitoring of the network fabric activity and a visual asset monitoring one. Smart Manufacturing, Digital Twin, Metaverse, Industrial IoT, Middleware ## I Introduction We are in the middle of an industrial revolution, i.e., of Industry 4.0 (14.0)-based intelligent and cooperative Cyber-Physical Systems (CPSs), underpinned by a digital transformation that will affect all industries. At the core of this revolution is the convergence of flexible and multi-faceted communication technologies, novel computing techniques such as cloud/edge computing, and the application of the IoT vision to industrial manufacturing systems [1]. As a result, industrial devices and machines will rely on heterogeneous wireless/wired technologies to communicate with applications running on global cloud/local edge platforms, thus enabling new efficient solutions, e.g., for predictive maintenance and process optimization. The application of big data and AI has been successfully adopted to streamline and optimize non-manufacturing processes and is now being expanded into the industrial sector. Today's manufacturing environment involves machines that can communicate independently and generate data at a rapid pace. This information can be utilized proactively to enhance control, and business processes in manufacturing, engineering, supply chain, and product life cycle management [2]. A significant challenge in realizing this goal is the outdated and inflexible division between technology departments involved in product manufacturing and those focused on management tasks. Indeed, industrial automation has taken a traditional approach, opting for a strict separation between Operation and Information Technology domains (OT & IT) [3]. 
Until now, IT technologies such as cloud/edge computing, Service-Oriented Architectures (SOA), and virtualization have been used in the industrial domain in limited ways, only where stringent requirements were not necessary [4]. However, it is becoming clear that smart manufacturing environments will have a significant impact only with a complete convergence of OT/IT, allowing for full utilization of data intelligence and recent computing and communication technologies. While recognizing the OT/IT synergy as a major milestone in tackling the complexity of this evolving environment, human-in-the-loop design approaches are paramount. Despite the many benefits that data intelligence can provide to society and the economy, the technology behind it can also have negative or unpredictable impacts, bringing about new risks to individuals. A synergetic development of both AI-driven and human-centric approaches is key to driving a safe, resource-efficient, and sustainable manufacturing domain [5]. A central component toward this objective is the concept of the digital twin: a virtual replica of a physical asset that can be monitored and optimized in real-time. Data streams often connect this digital replica and the physical counterpart, feeding and continuously updating the digital model, which is used as a descriptive or predictive tool for planning and operational purposes [6]. Integrating the shopfloor with the Metaverse enhances the system's capabilities even further, offering new opportunities for efficiency and collaboration while enabling integrated management of assets and processes [7]. In a Metaverse, digital replicas of real-life objects and processes can be used for various purposes such as testing, monitoring, and collaboration through AR/VR technology. In this work, we present a layered conceptual architecture and accompanying software prototype conceived to enable a human-in-the-loop manufacturing environment enriched with real-time data sourced from shopfloor processes. Data are stored at different layers of the architecture, each with a different scope, and are brought to life via virtual representations of assets, depending on the layer and the application used to access it. In summary, the architecture consists of multiple layers, including a bottom layer that utilizes middleware technology to serve the different Quality of Service (QoS) requirements of the OT monitoring processes, implementing a beyond state-of-the-art OT/IT convergence mechanism. Moving up is the virtual layer, tasked with the creation of multi-faceted representations of the monitored phenomena at different timescales. Here reside the repositories containing the virtual representations of assets at different levels of granularity, e.g., individual machines or production cells within the shopfloor. Finally, at the application layer reside the immersive and collaborative applications. The applications leverage the metaverse engine, a microservice-oriented and accelerated Platform as a Service (PaaS) layer that fuses and renders synchronized information sources from the virtual layer. Without loss of generality, we profile different facets of our proposal by relying on two different proof-of-concept inspection applications aimed at real-time monitoring of the network fabric activity and a physical asset monitoring one.
## II Background In this section, we provide a concise survey on the paradigm shift advocated by Industry 4.0, motivating the need for a human-centric approach, as recently advocated by many nation-state initiatives on their next industrial vision. Next, we provide a concise survey on the technology behind some of the adopted design choices. ### _Towards a Human-in-the-loop Manufacturing Environment_ The automation pyramid serves as a reference model for dealing with the challenges posed by complex and heterogeneous manufacturing systems and consists of different levels, each representing a specific stage of automation [8]. A simplified specification consists of four different hierarchy levels (see left part of Fig. 1). The sensing and actuation units are located on Level 1. They commonly operate within milliseconds or seconds to control the physical manufacturing processes to reach the required quality demands. Monitoring, supervision, and control is the task of Level 2 systems. Production environments mainly use programmable logic controllers (PLCs). Depending on the individual production system, PLCs operate within hours down to less than periods of seconds. As Level 2 directly controls the actuators, PLCs embedded logic may implement short-term adaptions of the process(es). Level 3 of the automation pyramid encompasses activities related to the management and control of manufacturing operations. This includes dynamic scheduling of jobs, optimization of production processes, and data aggregation and distribution, all encompassed in a Manufacturing Execution System (MES), which operates within one day or shift and must be capable of reacting to unexpected events, e.g., machine breakdowns. The level also includes SCADA systems that collect real-time data, data-driven analysis, and decision support systems to optimize processes. This level bridges lower levels of control and monitoring and higher levels of strategic decision-making and planning. The top Level 4 consists of an Enterprise Resource Planning (ERP) tool, which executes the required tasks and manages inventory and resources. As the ERP defines the long-term production utilization, the long-term operations (optimizations) are also determined here. Cyber-physical systems, which merge the physical and virtual world through embedded hardware and software systems, present a new, non-hierarchical approach to production (see right part of Fig. 1). This change is due to the need to better highlight and distinguish the networks that connect the layers and to reflect that some technologies may no longer reside in the facility but in local/remote edge-cloud computing environments. This has been made possible thanks to the convergence of the so-called disaggregation trend in the IT industry and the market penetration of deterministic and high-throughput communication technologies such as Time Sensitive Networking (TSN) and 5G [9, 10]. In this _automation pillar_, processes that do not require hard, real-time control can now be run in the cloud with virtual PLCs, as shown in the graphic. The graphic also depicts how, in the IIoT, data can now be shared more easily through all levels rather than sequentially from one layer to the next. The hierarchical architecture of the automation pyramid is still present and very common in production systems. This is also indicated by the adoption of the automation pyramid within the more recent RAMI 4.0 reference model for Industry 4.0 [11]. 
However, the paradigm shift has already begun by recognizing the OT/IT convergence problem, paving the way for the use of descriptive and predictive Digital Twin technology in manufacturing systems. The availability of data and recent advances in AI technology could pave the way for smarter manufacturing environments, but this evolution should go in pace with a human-centric design approach. In this context, the notion of the Metaverse could enhance an operator's capabilities even further, offering new opportunities for efficient collaboration while enabling integrated Fig. 1: Transition to Industry 4.0. management of assets and processes in the manufacturing domain. Our proposal goes in this direction, providing the basic building blocks for enabling an immersive, human-in-the-loop manufacturing environment. ### _Data Homogenization and Transport from the Shopfloor_ The OPC Unified Architecture (OPC UA) is an industrial automation standard that aims to solve the issue of incompatibility among different Ethernet technologies (such as PROFINET, EtherCAT, and Modbus-TCP) by providing a unified and consolidated view of assets and processes [12, 13]. This platform-independent standard can facilitate interoperability among vendors. The initial OPC UA standard employed a Client/Server paradigm: an OPC UA server offers access to data and functions structured in an object-oriented information model, where clients interacted with the information model through standardized services. Based on the request-response method, this communication model does not satisfy the needs of some industrial (control) applications as it creates a strong coupling between different system components and does not meet the performance demands of (hard) real-time systems. To overcome these limitations, Part 14 of the OPC UA specification introduces an extension based on the Publish/Subscribe (Pub/Sub) communication paradigm [14]. In this model, an application can act as either a publisher or a subscriber (or both), where the publisher sources the data, and the subscriber consumes it. Communication between publishers and subscribers is message-based, with publishers sending messages to a message-oriented middleware without prior knowledge of the subscribers. The latter express interest in certain types of data without having specific knowledge of the publishers. This message-based communication is ideal for applications that value location independence and scalability. OPC UA is a significant IIoT protocol for addressing communication needs at the OT layer, but it does not fully meet the requirements of the IT layer. Here, there is a need for solutions and frameworks that can handle high-volume data transfers in a secure and reliable manner, which are not priorities of the OPC UA standard. To meet these requirements, we rely on Apache Kafka, an open-source Message Oriented Middleware that uses a Pub/Sub communication model enabling a many-to-many communication pattern [15]. Kafka is well-suited for scenarios where scalable and loosely-coupled systems must work together. In this setting, producers publish messages or batches of messages on a channel called topic, while consumers can read the messages by subscribing to a specific topic. Topics in Kafka make use of partitions, which can be thought of as an infinite log file that is not immediately flushed to disk, leading to highly efficient I/O messaging and having the capability to force strong ordering constraints on message delivery. 
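To make the Pub/Sub pattern above concrete, the fragment below sketches a Kafka producer/consumer pair using the kafka-python client; the broker address, topic name, and message key are illustrative placeholders, not values from our deployment.

```python
from kafka import KafkaProducer, KafkaConsumer

# Publisher side: messages with the same key land on the same partition,
# which is how Kafka enforces ordered delivery per asset/data stream.
producer = KafkaProducer(bootstrap_servers="broker:9092")
producer.send("shopfloor.telemetry",
              key=b"cell1/machine3",
              value=b'{"temperature": 74.2, "ts": 1693489201}')
producer.flush()

# Subscriber side: consumers in the same group share the topic's partitions.
consumer = KafkaConsumer("shopfloor.telemetry",
                         bootstrap_servers="broker:9092",
                         group_id="it-analytics",
                         auto_offset_reset="earliest")
for record in consumer:
    print(record.partition, record.offset, record.value)
```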
In the following, we present a detailed overview of the proposed layered architecture, discussing some design choices and technological building blocks. ## III A Layered Architectural Approach The conceptual architecture, depicted in Fig. 2, takes a structured approach to integrate the virtual and the physical realms of next-generation manufacturing systems. In the following, we provide an in-depth description of the various layers, discussing their functional components and some concrete technological choices. ### _Operational Technology Layer_ The _Physical layer_ encompasses Level 1 of the automation pyramid, including controllers, machinery, sensors, and processes that make up the industrial shop floor. In addition to these physical components, the physical layer includes users and human operators. Moreover, the layer includes various interfaces, such as haptic technology, VR/AR headsets, and touchscreens, enabling interaction between the physical and the virtual world. One of the key aspects of our proposal is the implementation of a scalable and reliable solution addressing OT/IT convergence. This convergence is made possible by relying on an OT middleware, which serves as a bridge to gather and elaborate data before they are sent to the virtual layer. One of the main problems when dealing with the convergence of OT and IT is that OT systems often employ a variety of communication protocols, as previously discussed. Therefore, the role of the OT middleware is to standardize the data collection process, abstracting from the details of the specific protocol. Another common challenge is data format heterogeneity and resource configuration modeling. We address these challenges by using a standard and common information model (object model) available in the OPC UA dictionary. We anticipate our testbed uses a deterministic network fabric (TSN), and communication on the shopfloor relies on the OPC UA PubSub profile. As part of the OT middleware, one or more Gateway components are responsible for listening to OPC UA PubSub endpoints and managing data flows between the physical and virtual layers. The Gateway uses different topics and partition levels to prioritize specific traffic flows, such as monitoring data or controlling data traffic in the network. To increase the reliability of the data collection process, the Gateway implements a replication mechanism. For example, monitoring and control topics are assigned a single partition with a high degree of replication. In contrast, data topics from raw sensor elements are assigned multiple partitions with a lower degree of replication. The OT middleware is also responsible for forwarding the data to the IT level, which logically spans across the upper layers of the architecture. To achieve this, our solution relies on Apache Kafka, a message-oriented middleware, well-suited for scenarios where scalable and loosely coupled systems must interoperate. ### _Virtual Layer_ The virtual layer is conceptually located above the physical layer and serves as the metaverse engine's primary data access layer and support component. It integrates essential functions utilized by the upper layers in the construction of virtual worlds. Going into more detail, the virtual layer is composed of three main sub-blocks: _digital twin_, _digital assets_, and _metadata repository_. #### Iii-B1 Digital Twin The digital twin subcomponent is a crucial part of the virtual layer in the factory environment. 
It creates and maintains digital twins of all the physical machines, products, and facility assets. This allows for a comprehensive representation of the factory's operations. In addition to creating digital twins of individual assets, this subcomponent maintains a global view of the facility, including both the plant and the network. This allows for a holistic understanding of the entire factory's operations and enables the identification of interdependencies and bottlenecks in the production process. Here, the processing module collects data from the lower layers and transforms it with varying spatial and temporal resolution levels, e.g., allowing for a more detailed and accurate representation of the physical assets and processes. For example, the processing module can collect and aggregate data at a high pace, providing real-time monitoring of machine performance. On the other hand, it can be dynamically reconfigured to adjust the scope of the data forwarding processes, feeding data from single production units to an entire production cell, and providing a detailed representation of current processes in the factory. Combining the various data perspectives generated by the processing layer to create a unified and multi-faceted view of the operations within the factory can represent a challenge. To overcome this, the data fusion layer utilizes various techniques, such as statistical, rule-based, and ontology-aided methods, to synthesize the data into a single, consistent representation of a particular phenomenon. For example, data from different parts of a machine or the machine and its products can be combined to identify the root cause of process issues or inefficiencies. Furthermore, simulation engines are used in this layer to perform several kinds of analysis to gain deeper insights into the potential future evolution of the (sub)systems. #### Iii-B2 Digital Asset This part consists of an indexed repository, which includes digital representations of physical assets. This data encompasses a wide range of information, including geometric structures, physical attributes, and technical specifications, and may consist of both structured and unstructured information. The current prototype employs various storage technologies to enable data injection into the data lake to promote efficient data management. For instance, we rely on a NoSQL database technology such as MongoDB to store device configurations or other static information. In contrast, data sourced from the shopfloor are stored in time-series databases such as Prometheus. Additionally, object storage solutions such as MinIO or Ceph are deployed for storing digital resources such as CAD/CAE and 3D models and multimedia data (i.e., audio/video files). #### Iii-B3 Metadata Repository An important data repository that allows the association of different sources of information, at different levels of spatial and temporal granularity, between objects. Specifically, the _Static Binding_ stores the binding between the data sources (Kafka topics) and the actual assets generating them. To uniquely identify assets and their local in the RAMI 4.0 hierarchy model, we adopt IEC 62264 identifiers. This allows us to keep track of assets and their deployment context, e.g., the production cell where they are located. Another essential piece of information in this hierarchical binding is the Kafka topic, a human-readable string associated with the asset identifier. 
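As an illustration only, a Static Binding entry could be stored and resolved as follows with pymongo; the database, collection, field names, and the IEC 62264-style identifier shown here are hypothetical and do not reflect the actual schema of our prototype.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://edge-node:27017")
bindings = client["metadata_repository"]["static_bindings"]

# Hypothetical binding: IEC 62264-style equipment hierarchy identifier -> Kafka topic.
bindings.insert_one({
    "asset_id": "Enterprise1.Site2.Area1.Cell3.Machine7",   # illustrative identifier
    "kafka_topic": "shopfloor.cell3.machine7.telemetry",    # human-readable topic bound to the asset
    "deployment_context": {"area": "Area1", "production_cell": "Cell3"},
})

# Resolve the data stream for a given asset, e.g., when an application zooms in on it.
doc = bindings.find_one({"asset_id": "Enterprise1.Site2.Area1.Cell3.Machine7"})
print(doc["kafka_topic"])
```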
Last is the _App Bindings_ module, which stores information about the resources that an application can access, thus allowing the retrieval of data streams related to a particular resource and its model from the _Digital Asset_ repository and to overlay this information on the application itself. This wiring presents a flexible structure, allowing Metaverse applications to interact with objects in space, performing zoom-in and out operations while retaining context and overlaying real-time data of interest. ### _The Metaverse Engine_ This component acts as a bridge between the virtual world and the application layer. It comprises a series of subsystems typically found in engines for augmented or virtual reality experiences and interactive applications. These subsystems, depicted in Fig. 3 include physics simulators, rendering engines, streaming services, and other similar components deployed as a series of microservices that can be combined to form more complex services or entire applications. To ensure that each service can meet its QoS/E requirements, the engine can be replicated and distributed across multiple infrastructure nodes. The second part of the Metaverse Engine is a resource-aware orchestrator that enables the deployment of the microservices Fig. 2: Proposed conceptual architecture. and infrastructure components based on the requirements of individual microservices [16]. The orchestrator leverages a deployment strategy that maps service requirements to the underlying resources and guarantees end-to-end QoS/E specifications of various applications. For instance, in the case of shared applications, where multiple users can interact with the virtual world simultaneously, the services can be associated with groups of players to ensure a better overall experience. The engine has synchronization and state management mechanisms to support various deployment options. The synchronization mechanism manages interactions between the metaverse engine services and the virtual world services, as well as between platform users and these services. The state manager ensures that the consistency of the state is maintained, both for individual applications and for the various microservices, especially in the case of their replication across multiple infrastructure nodes. The engine is underpinned by a communication bus that enables the exchange of data between the engine's microservices and the applications running within the platform through an event or message system. For instance, a rendering service may be notified of modifications to a scene following a simulation executed by a physics service. The communication bus can utilize different communication and acceleration or memory/storage access technologies, based on node availability and the requirements of individual services, in a manner that is transparent to them. On the network communication side, the bus can utilize technologies such as DPDK, and RDMA, rely on the classic TCP/IP network stack, or IPC mechanisms such as shared memory [17]. Concerning memory/storage access, the engine can employ acceleration technologies such as SPDK or GPUDirect to enhance access and transfer of large data and models. These elements and mechanisms, such as orchestration, synchronization, and the communication bus, are crucial to support the application hub model (Sec. III-D), where different metaverse applications can coexist and be utilized seamlessly by users and operators of the smart factory. 
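The decoupling provided by the communication bus can be pictured with the toy in-process sketch below; it is not the engine's implementation, and a real deployment would swap the dictionary of callbacks for a DPDK-, RDMA-, or shared-memory-backed transport behind the same publish/subscribe interface.

```python
from typing import Callable, Dict, List

class EventBus:
    """Minimal in-process stand-in for the engine's communication bus."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(event, []).append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._subscribers.get(event, []):
            handler(payload)

bus = EventBus()

# Rendering service reacts to scene changes produced by the physics service.
bus.subscribe("scene_updated", lambda p: print("re-render objects:", p["objects"]))

# Physics service publishes the outcome of a simulation step.
bus.publish("scene_updated", {"objects": ["robot_arm_3", "conveyor_1"]})
```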
### _Application Layer and Use-cases_ The central component of the application layer is the application hub, a logically centralized repository of applications that can exploit the metaverse engine. The hub can be accessed by industrial plant operators, offering a user-friendly interface that facilitates interaction with the metaverse engine and the ability to switch between applications seamlessly. A notable feature of the hub and its applications is the capability to display information selectively in the user interface, such as runtime data streams and network performance. The application context is established at application design time, and the state is stored and retrieved from the _App Bindings_ available at the Virtual Layer. Furthermore, the hub grants access to individual machines and controllers, enabling real-time monitoring of ongoing production processes. In essence, the hub and its applications serve as an _enriched_ access point to the virtual layer and its subcomponents outlined in Sec. III-B. Fig. 4 shows a simplified instance of the proposed architecture for two use cases to understand better how applications interact with the metaverse engine. The first proof-of-concept application is an AR solution for visualizing the real-time network fabric servicing control and best-effort flows on the shopfloor network. The app. provides the means to monitor the behavior of the network and/or appliances, overlaying and updating the rendered image via real-time data from the control plane. The pictorial representation showcases a graph-like structure, displaying the different data flows, such as control and sensor data. Visualizing the network in an AR environment allows operators and maintenance personnel to quickly identify and understand the relationships between various devices and data flows. This can lead to improved decision-making and increased operational efficiency, reducing downtime and improving the overall performance of the critical industrial network. In addition to visualizing the network fabric and real activities therein, the application provides the means for operators to manage the network in real-time. The second proof-of-concept application aims to enhance the efficiency and accuracy of maintenance tasks in industrial settings through video streaming rendered through an augmented reality (AR) appliance. The solution involves streaming high-resolution video data from industrial cameras to an edge node for further processing. The processed video is then transmitted to an AR headset worn by an operator performing maintenance on industrial shopfloor appliances. AR technology enhances the operator's situational awareness by providing real-time, hands-free access to machine information and data. Operators can visualize machine components and systems in real-time, reducing the risk of human error. As an example, in Fig. 4 is shown the visual inspection application which is deployed at the edge, receiving and processing the video frames acquired by the industrial camera(s) deployed on the shop floor. The metaverse engine and optimized middleware support ensure that the video is of sufficient quality for display via the AR headset. In this scenario, both video and metric Fig. 3: An example of a service chain that utilizes the metaverse engine, where the rendering service acquires data inputs in the form of a time-series data which are superimposed on a video stream (Flow #1), spatial contextual information (Flow #2), and user inputs. 
The service either forwards the raw frame directly to the client so as to visualize the scene through an AR headset or employs a supplementary service for encoding and transmitting the frame using widely adopted streaming protocols (e.g., RTP, WebRTC). data sourced by the digital twin of the asset are combined so as to show a more representative view of the phenomenon. ## IV Preliminary Evaluation This section presents a preliminary evaluation of our proposal in a real testbed. We first discuss the experimental settings and then present the obtained results. ### _Settings_ The deployment environment consists of five nodes, each serving different functionalities within the OT and IT layers. The IT layer includes the virtual and application layers. The OT layer comprises three interconnected nodes, linked by a mesh topology using 1Gbit links. Each OT node is equipped with an Ubuntu 22.04.1 LTS operating system, an Intel Core i5-2400 CPU @ 3.10GHz processor, and 8GB of RAM. Nodes 1 and 2 are dedicated to traffic simulation, utilizing software packages to emulate realistic industrial machine traffic, as described in a previous work [18]. On the other hand, Node 3 hosts a Gateway entity, acting as a bridge between the OT and IT layers. The edge nodes, Node 4 and Node 5, share the same operating system and have an 18-core Intel i9-10980XE CPU @ 3.00GHz, along with 64GB of memory. They are directly connected via two 100Gbps Mellanox DX-6 NICs, ensuring minimal network overhead. Additionally, Nodes 4 and 5 serve as the deployment location for the IT subsystems. The first scenario involves transmitting operational data from a simulated industrial asset at Node 1 via the OPC UA PubSub protocol to the OPC UA Subscriber at Node 2. This simulates a typical sensor-to-controller exchange in an industrial environment, where the internal operational status of the industrial asset is extracted and transmitted using the OPC UA protocol. The Gateway component at Node 3 subscribes to these messages (OPC UA PubSub) and then transmits them to the Kafka deployment at Node 4; a custom Kafka consumer at Node 5 eventually receives them. The second scenario focuses on simulating the streaming of a dedicated video surveillance system designed to monitor a workshop. Within this scenario, the rendering and streaming services of the Metaverse Engine are deployed on Node 4, while a client application for receiving the video stream is located on Node 5. The objective of this scenario is to evaluate the engine's ability to provide immersive user experiences. Notably, the direct 100Gbps link connecting the two nodes allows for an assessment of the performance metrics influenced by the engine's acceleration technology. The outcome of these two scenarios provides crucial insights into the performance and efficiency of the proposed architecture in managing operational data and its suitability for delivering immersive experiences. In both scenarios, to accurately measure the time taken for the transmission of the messages, the nodes are synchronized using the Precision Time Protocol (PTP). This allows us to extract fine-grained metrics and ensure that the results are accurate and reliable. ### _Results_ The proposed system's efficacy is assessed by measuring the message latency between the OT and the application layer, under varying traffic loads. The results, corresponding to the network observability application, are presented in Fig. 5 and Fig. 6. Fig.
5 illustrates the latency between two simulated machines, Node 1 and Node 2, in the OT layer, as the number Fig. 4: The image represents a smart factory where data sourced from various machines and processes is collected to feed the Virtual Plane. The digital twins are local instances deployed at edge/cloud nodes and can be transferred across them on demand. The upper layer shows a representation of two virtual applications: (i) a network (observability) and (ii) a mixed reality app. used to inspect potential problems on a shopfloor machine. of messages/second (real-time diagnostic information) is increased from \(400\) to \(1200\). The latency is calculated as the time difference between receiving and sending messages at the application level. Our results show that the latency between the two machines remains stable and in the sub-millisecond range, which is the required latency for communication between machines or PLCs in the OT layer. Fig. 6 displays the end-to-end latency between the OT layer and the Kafka consumer at the IT layer (Node 5) for the same message rate. The results show a latency that is an order of magnitude higher than the latency in the OT layer. This increase in latency is expected due to the number of software components the message must traverse, especially the latency introduced by the Kafka MOM topics, which have been configured to ensure a reliable and ordered delivery of messages conveyed from the OT. This configuration is particularly important for safety-critical data. The effects of this feature are more pronounced when increasing the number of messages/second, leading to an ever-increasing number of queued messages. To address this rate imbalance, the OT layer can be equipped with selective pre-processing capabilities, such as filtering and aggregation, reducing the burden at the OT/IT interface [19]. It is worth noting that although the latency at the IT layer is higher than in the OT layer, it is still less than \(10\,\mathrm{ms}\), an acceptable threshold for many interactive applications, and sufficient to meet most use case requirements. Nevertheless, the fact that the latency grows significantly when the message rate increases highlights the need for further optimization and tuning, particularly in critical applications where low latency and data integrity are essential. In the second experiment, we evaluate the performance of the streaming component (Metaverse engine) of the testbed by comparing two communication mechanisms for transmitting images of varying resolutions (from HD to 8K). One mechanism utilizes DPDK acceleration technology, while the other utilizes UDP traffic to simulate RTP-based communication [17]. We then analyze two performance metrics: (i) the number of frames per second (FPS) and (ii) the average end-to-end latency per frame transmission. The obtained results indicated an excellent performance of the streaming service in terms of both latency (Fig. 7) and supported FPS (Fig. 8), particularly in the case when relying on DPDK. The system can support frame rates above 100 FPS for images up to 4K resolution and exceed 1000 FPS for lower-quality images. Moreover, the observed end-to-end latency never exceeds \(10\,\mathrm{ms}\) for images up to 4K resolution. These findings demonstrate that the system satisfies the requirements of next-generation interactive applications. 
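For completeness, the latency figures above follow the usual recipe of differencing application-level send/receive timestamps on PTP-synchronized nodes; a sketch of the aggregation step is shown below, with purely illustrative sample values rather than measurements from the testbed.

```python
import statistics

def latency_stats(samples_ms):
    """samples_ms: one-way latencies (receive_ts - send_ts) in milliseconds,
    with send/receive timestamps taken at the application level on
    PTP-synchronized nodes, as described in the Settings subsection."""
    samples_ms = sorted(samples_ms)
    p99 = samples_ms[int(0.99 * (len(samples_ms) - 1))]
    return {
        "mean_ms": statistics.mean(samples_ms),
        "median_ms": statistics.median(samples_ms),
        "p99_ms": p99,
        "max_ms": samples_ms[-1],
    }

# Illustrative values only (not measurements from the testbed).
print(latency_stats([0.42, 0.47, 0.51, 0.55, 0.61, 0.73, 0.88, 1.02]))
```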
## V Related Work To the best of our knowledge, the design and implementation of a comprehensive solution targeting Metaverse applications in the smart manufacturing domain are entirely novel in the existing literature. This section summarizes the prior research that influenced the authors' proposal, inspiring some architectural and technological choices. The European Reference Architectural Model Industry 4.0 (RAMI 4.0) [11] is a widely recognized standard that emphasizes the need for a close alignment between IT and OT. RAMI 4.0 provides a high-level reference architecture encompassing various Industry 4.0 scenarios. The reference communication layer of RAMI 4.0 leverages the OPC UA standard as the sole solution to ensure interoperability in the realm of OT [13]. Fig. 5: Machine-to-consumer communication latency under varying message load of the IT layer. Fig. 6: Machine-to-machine communication latency under varying message load of the OT layer. Fig. 7: End-to-end latency for the streaming service as the resolution of the transmitted image increases (from HD to 8K). In [20], the authors conducted a systematic literature review to assess the impact and effectiveness of using Augmented Reality (AR) in real industrial processes, investigating the applicability and usefulness of AR technology in the industry. The study found that AR is a growing trend and is being employed so to improve process flexibility, monitoring, and inspection capabilities, streamlining operations. In [18], the authors propose a multi-layer architecture for monitoring industrial equipment in customer plants. The architecture uses two Apache Kafka installations, one in OT and one in IT, to gather near-real-time data. Dedicated software components called _hmi-forwarders_ interface with Modbus-TCP machinery, exporting data to the OT Kafka instance, which then forwards it to the appropriate Kafka topic in the IT layer. The CODEG framework for cloud-oriented gaming, proposed by DeGiovanni et al. [21], addresses the limitations of conventional monolithic game engines, advocating for a distributed framework that can exploit the full potential of the heterogeneous resources in the cloud continuum. The implementation of the CODEG framework involves the integration of multiple game engine modules into the available network infrastructure, ranging from core to edge resources. ## VI Conclusion and Future Work In this article, we presented a conceptual architecture and discussed the accompanying software prototype and functional components. The proposal aims at enabling an immersive smart manufacturing environment, embodying a human-in-the-loop approach. A preliminary evaluation was presented, validating the framework components as a whole by relying on two proof-of-concept immersive application scenarios. Our ongoing efforts are directed toward expanding the Metaverse engine and the applications described in this study to enable the creation of shared virtual environments where multiple users can engage in synchronous sessions.
2309.11725
FluentEditor: Text-based Speech Editing by Considering Acoustic and Prosody Consistency
Text-based speech editing (TSE) techniques are designed to enable users to edit the output audio by modifying the input text transcript instead of the audio itself. Despite much progress in neural network-based TSE techniques, the current techniques have focused on reducing the difference between the generated speech segment and the reference target in the editing region, ignoring its local and global fluency in the context and original utterance. To maintain the speech fluency, we propose a fluency speech editing model, termed \textit{FluentEditor}, by considering fluency-aware training criterion in the TSE training. Specifically, the \textit{acoustic consistency constraint} aims to smooth the transition between the edited region and its neighboring acoustic segments consistent with the ground truth, while the \textit{prosody consistency constraint} seeks to ensure that the prosody attributes within the edited regions remain consistent with the overall style of the original utterance. The subjective and objective experimental results on VCTK demonstrate that our \textit{FluentEditor} outperforms all advanced baselines in terms of naturalness and fluency. The audio samples and code are available at \url{https://github.com/Ai-S2-Lab/FluentEditor}.
Rui Liu, Jiatian Xi, Ziyue Jiang, Haizhou Li
2023-09-21T01:58:01Z
http://arxiv.org/abs/2309.11725v2
# FluentEditor: Text-Based Speech Editing by Considering Acoustic and Prosody Consistency ###### Abstract Text-based speech editing (TSE) techniques are designed to enable users to edit the output audio by modifying the input text transcript instead of the audio itself. Despite much progress in neural network-based TSE techniques, the current techniques have focused on reducing the difference between the generated speech segment and the reference target in the editing region, ignoring its local and global fluency in the context and original utterance. To maintain the speech fluency, we propose a fluency speech editing model, termed _FluentEditor_, by considering fluency-aware training criterion in the TSE training. Specifically, the _acoustic consistency constraint_ aims to smooth the transition between the edited region and its neighboring acoustic segments consistent with the ground truth, while the _prosody consistency constraint_ seeks to ensure that the prosody attributes within the edited regions remain consistent with the overall style of the original utterance. The subjective and objective experimental results on VCTK demonstrate that our _FluentEditor_ outperforms all advanced baselines in terms of naturalness and fluency. The audio samples and code are available at [https://github.com/Ai-S2-Lab/FluentEditor](https://github.com/Ai-S2-Lab/FluentEditor). Rui Liu\({}^{1}\), Jiatian Xi\({}^{1}\), Ziyue Jiang\({}^{2}\), Haizhou Li\({}^{3,4}\)\({}^{1}\) Inner Mongolia University, Hohhot, China \({}^{2}\) Zhejiang University, China \({}^{3}\) Shenzhen Research Institute of Big Data, School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China \({}^{4}\) National University of Singapore, Singapore [email protected], [email protected], [email protected], [email protected] Speech Editing, Fluency Modeling, Acoustic Consistency, Prosody Consistency ## 1 Introduction Text-based speech editing (TSE) [1] allows for modification of the output audio by editing the transcript rather than the audio itself. With the rapid development of the internet, audio-related media sharing has become a prevalent activity in our daily lives. Note that TSE can bring great convenience to the audio generation process and be applied to a variety of areas with personalized voice needs, including video creation for social media, games, and movie dubbing. Over the past few years, many attempts adopted text-to-speech (TTS) systems to build neural network-based TSE models. For example, the CampNet [2] conducts mask training on a context-aware neural network based on Transformer to improve the quality of the edited voice. \(A^{3}T\)[3] suggests an alignment-aware acoustic and text pretraining method, which can be directly applied to speech editing by reconstructing masked acoustic signals through text input and acoustic text alignment. More recently, the diffusion model has gradually become the backbone of the NN-based TSE with remarkable results. For example, EdiTTS [4] takes the diffusion-based TTS model as the backbone and proposes a score-based TSE methodology for fine-grained pitch and content editing. FluentSpeech [5] proposes a context-aware diffusion model that iteratively refines the modified mel-spectrogram with the guidance of context features. However, during training, the existing approaches just constrain the Euclidean Distance [6] between the mel-spectrum to be predicted and the ground truth to ensure the naturalness of TSE. 
Although they consider the use of contextual information to mitigate the over-smoothing problem of edited speech, their objective functions are not designed to ensure fluent output speech [7, 8]. We consider two challenges to be tackled for effective speech fluency modeling. 1) _Acoustic Consistency_: the smoothness of the concatenation between the region to be edited and its neighboring regions should be close to the real concatenation point [9]. 2) _Prosody Consistency_: the prosody style of the synthesized audio in the region to be edited needs to be consistent with the prosody style of the original utterance [10, 11]. To address the above issues, we propose a novel fluency speech editing scheme, termed FluentEditor, by introducing the acoustic and prosody consistency training criterion to achieve natural and fluent speech editing. Specifically, 1) To achieve the acoustic consistency, we design the _Acoustic _Consistency Loss_\(\mathcal{L}_{AC}\) to calculate whether the variance at the boundaries is close to the variance at the real concatenation points. 2) To achieve the prosody consistency, we introduce the _Prosody Consistency Loss_\(\mathcal{L}_{PC}\) to let the high-level prosody features of the synthesized audio in the region to be edited be close to that of the original utterance. The high-level prosody features are extracted by the pre-trained GST-based prosody extractor [11]. The subjective and objective results on the VCTK [12] dataset show that the acoustic and prosody consistency of the FluentEditor is significantly better than the advanced TSE baselines, while the proposed FluentEditor can ensure a high degree of fluency like real speech. The main contributions of this work can be summarized as follows: 1) We propose a novel fluency speech editing scheme, termed FluentEditor; 2) We adopt the diffusion model as the backbone and introduce _Acoustic and Prosody Consistency Losses_ to conduct the fluency modeling for TSE; 3) The proposed model outperforms all advanced TSE baselines in terms of naturalness and fluency. ## 2 FluentEditor: Methodology We formulate the proposed FluentEditor, a TSE model that ensures speech fluency by considering acoustic and prosody consistency. We first introduce the overall workflow, then further elaborate the fluency-aware training criterion and the run-time inference. ### Overall Workflow As shown in Fig.1, our FluentEditor adopts the mask prediction-based diffusion network as the backbone, which consists of a text encoder, and a spectrogram denoiser. The spectrogram denoiser seeks to adopt the Denoising diffusion probabilistic model (DDPM) to learn a data distribution \(p(\cdot)\) by gradually denoising a normally distributed variable through the reverse process of a fixed Markov Chain of length \(T\). Assume that the phoneme embedding of the input phoneme sequence is \(X=(X_{1},\dots,X_{|X|})\) and the acoustic feature sequence for \(X\) is \(\hat{Y}=(\hat{Y}_{1},\dots,\hat{Y}_{|\hat{Y}|})\). The masked acoustic feature sequence \(\hat{Y}_{mask}=Mask(\hat{Y},\lambda)\) is obtained by replacing the random spans of \(\hat{Y}\) with the random vector according to a \(\lambda\) probability. Specifically, the text encoder aims to extract the high-level linguistic feature \(\mathcal{H}_{X}\) for \(X\). 
The spectrogram denoiser then aggregates the \(\mathcal{H}_{X}\) and the condition input \(C\) to guide the reverse process of the diffusion model \(\Theta(Y_{t}|t,C)\) (\(t\in T\)), where \(Y_{t}\) is a noisy version of the clean input \(\hat{Y}_{0}\). Similar to [5], the condition input \(C\) consists of the frame-level linguistic feature \(\mathcal{H}_{X}^{f}\), acoustic feature sequence \(\hat{Y}\), masked acoustic feature sequence \(\hat{Y}_{mask}\), speaker embedding \(e_{spk}\) and the pitch embedding \(e_{pitch}\). In the generator-based diffusion models, \(p_{\theta}(Y_{0}|Y_{t})\) is the implicit distribution imposed by the neural network \(f_{\theta}(Y_{t},t)\) that outputs \(Y_{0}\) given \(Y_{t}\). And then \(Y_{t-1}\) is sampled using the posterior distribution \(q(Y_{t-1}|Y_{t},Y_{0})\) given \(Y_{t}\) and the predicted \(Y_{0}\). To model speech fluency, we design _acoustic consistency loss_\(\mathcal{L}_{AC}\) and _prosody consistency loss_\(\mathcal{L}_{PC}\) on the basis of the original _reconstruction loss_, to ensure that the acoustic and prosody performance of speech generated in the editing area is consistent with the context and the original utterance. For reconstruction loss, we follow [5] and employ Mean Absolute Error (MAE) and the Structural Similarity Index (SSIM) [13] losses to calculate the difference between \(Y_{0}\) and the corresponding ground truth segment \(\hat{Y}_{0}\). In the following subsection, we will introduce \(\mathcal{L}_{AC}\) and \(\mathcal{L}_{PC}\) in detail. ### Fluency-Aware Training criterion #### 2.2.1 Acoustic Consistency Loss The acoustic consistency loss \(\mathcal{L}_{AC}\) employs smoothness constraints at both the left and right boundaries for the predicted acoustic feature \(Y_{0}\). We compare the variance, \(\Delta_{\mathcal{G}_{Y_{0}}^{L}}\) and \(\Delta_{\mathcal{G}_{Y_{0}}^{R}}\), for the left and right boundaries with \(\Delta_{\mathcal{G}_{Y_{0}}^{L/R}}\) of ground truth speech at the corresponding boundaries to serve as the proxy of the overall smoothness \(\mathcal{L}_{AC}\). Specifically, \(\mathcal{L}_{AC}\) consists of \(\mathcal{L}_{AC}^{L}\) and \(\mathcal{L}_{AC}^{R}\), and we use Mean Squared Error (MSE) [14] to measure the proximity between the target segment and the ground truth: \[\begin{split}\mathcal{L}_{AC}&=\mathcal{L}_{AC}^{L} +\mathcal{L}_{AC}^{R}\\ &=\mathrm{MSE}(\Delta_{\mathcal{G}_{Y_{0}}}^{L},\Delta_{\mathcal{ G}_{Y_{0}}^{L}})+\mathrm{MSE}(\Delta_{\mathcal{G}_{Y_{0}}}^{R},\Delta_{ \mathcal{G}_{Y_{0}}^{R}})\end{split} \tag{1}\] Note that the Euclidean distance between two adjacent frames is obtained by the smoothness extractor. Take \(\Delta_{\mathcal{G}_{Y_{0}}^{L}}\) as an example, \[\Delta_{\mathcal{G}_{Y_{0}}^{L}}^{L}=\varrho_{Y_{0}^{L}}-\varrho_{Y_{0}^{Lpre}} \tag{2}\] where \(Y_{0}^{Lpre}\) denotes the speech frame preceding the left boundary of the masked region. In other words, the ending frame of the adjacent non-masked region is on the left side. To comprehensively capture the statistical properties of the audio signal, we utilize variance to describe the feature information of each Mel spectrogram frame, denoted as \(\varrho_{Y_{0}}{}^{L}\) and \(\varrho_{Y_{0}^{Lpre}}\). Similarly, we compute the smoothness constraint for the right boundary\(\mathcal{L}_{AC}^{R}\), where \(Y_{0}^{R^{new}}\) denotes the speech frame succeeding the right boundary of the masked region, in other words, the starting frame of the adjacent non-masked region on the right side. 
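A schematic, single-segment PyTorch reading of Eqs. (1)–(2) is given below; the per-frame variance interpretation of \(\varrho\) and the tensor shapes follow the description above, but the sketch omits batching and multiple non-contiguous masked regions and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def boundary_delta(frame: torch.Tensor, neighbor: torch.Tensor) -> torch.Tensor:
    # Eq. (2): difference of per-frame variances across a boundary.
    return frame.var() - neighbor.var()

def acoustic_consistency_loss(y_pred, y_gt, left_ctx, right_ctx):
    """y_pred, y_gt: predicted / ground-truth mel frames of the masked span, shape (T, n_mels).
    left_ctx, right_ctx: last unmasked frame before / first unmasked frame after the span, shape (n_mels,)."""
    loss_left = F.mse_loss(boundary_delta(y_pred[0], left_ctx),
                           boundary_delta(y_gt[0], left_ctx))
    loss_right = F.mse_loss(boundary_delta(y_pred[-1], right_ctx),
                            boundary_delta(y_gt[-1], right_ctx))
    return loss_left + loss_right  # Eq. (1)
```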
#### 2.2.2 Prosody Consistency Loss The prosody consistency loss \(\mathcal{L}_{PC}\) is responsible for capturing the prosody feature \(\mathcal{H}_{Y_{0}}^{P}\) from the predicted region \(Y_{0}\) while also analyzing the overall prosody characteristics \(\mathcal{\hat{H}}_{\hat{Y}}^{P}\) present in the original speech, then employ the MSE loss to conduct the prosody consistency constraints. \[\mathcal{L}_{PC}=\text{MSE}(\mathcal{H}_{Y_{0}}^{P},\mathcal{\hat{H}}_{\hat{Y}} ^{P}) \tag{3}\] Note that the prosody features \(\mathcal{H}_{Y_{0}}^{P}\) and \(\mathcal{\hat{H}}_{\hat{Y}}^{P}\) are obtained by the pre-trained prosody extractor. Specifically, the prosody extractor utilizes the reference encoder [11] of the Global Style Token (GST) [11] model to convert \(Y_{0}\) and \(\hat{Y}\) into high-level prosody features with fixed length for easy comparison. \[\mathcal{H}_{Y_{0}}^{P}=\text{GST}(Y_{0}),\;\;\;\hat{\mathcal{H}}_{\hat{Y}}^{P}= \text{GST}(\hat{Y}) \tag{4}\] Lastly, following [5], the total loss function is the sum of reconstruction loss and two new loss functions, \(\mathcal{L}_{AC}\) and \(\mathcal{L}_{PC}\), across all non-contiguous masked regions, since the mask region in a sentence may include multiple non-contiguous segments [5]. In a nutshell, \(\mathcal{L}_{AC}\) and \(\mathcal{L}_{PC}\) of the FluentEditor are introduced to ensure fluent speech with consistent prosody. ### Run-time Inference In run-time, given the original text and its speech, the user can edit the speech by editing the text. Note that we can manually define modification operations (i.e., insertion, replacement, and deletion). The corresponding speech segment of the edited word in the given text is treated as the masked regions in Fig. 1. Similar to [5], our FluentEditor reads the edited text and the remaining acoustic feature \(\hat{Y}-\hat{Y}_{mask}\) of the original speech to predict the \(Y_{0}\) for the edited word. At last, the \(Y_{0}\) and its context \(\hat{Y}-\hat{Y}_{mask}\) are concatenated as the final fluent output speech. ## 3 Experiments and Results ### Dataset We validate the FluentEditor on the VCTK [12] dataset, which is an English speech corpus uttered by 110 English speakers with various accents. Each recording is sampled at 22050 Hz with 16-bit quantization. The precise forced alignment is achieved through Montreal Forced Aligner (MFA) [15]. We partition the dataset into training, validation, and testing sets, randomly with 98%, 1%, and 1%, respectively. ### Experimental Setup The configurations of text encoder and spectrogram denoiser are referred to [5]. The diffusion steps \(T\) of the FluentEditor system is set to 8. Following GST [11], the prosody extractor comprises a convolutional stack and an RNN. The dimension of the output prosody feature of the GST-based prosody extractor is 256. Following [3], we adopt a random selection strategy, with a fixed masking rate of 80%, for masking specific phoneme spans along with their corresponding speech frames. The pre-trained HiFiGAN [16] vocoder is used to synthesize the speech waveform. We set the batch size is 16. The initial learning rate is set at \(2\times 10^{-4}\), and the Adam optimizer [17] is utilized to optimize the network. The FluentEditor model is trained with 2 million training steps on one A100 GPU. ### Evaluation Metric For subjective evaluation, We conduct a Mean Opinion Score (MOS) [18] listening evaluation in terms of speech fluency, termed _FMOS_. 
Note that FMOS allows the listener to feel whether the edited segments of the edited speech are fluent compared to the context. We keep the text content and text modifications consistent among different models to exclude other interference factors, only examining speech fluency. Furthermore, Comparative FMOS (C-FMOS) [18] is also used to conduct the ablation study. For objective evaluation, we utilize MCD [19], STOI [20], and PESQ [21] to measure the overall quality of the edited speech. ### Comparative Study We develop four neural TSE systems for a comparative study, that includes: 1) **CampNet**[2] propose a context-aware mask prediction network to simulate the process of text-based speech editing; 2) \(\mathbf{A^{3}T}\)[3] propose the alignment-aware acoustic-text pre-training that takes both phonemes and partially-masked spectrograms as inputs; 3) **FluentSpeech**[5] takes the diffusion model as backbone and predict the masked feature with the help of context speech; and 4) **FluentEditor (Ours)** designs the acoustic and prosody consistency losses. We also add the **Ground Truth** speech for comparison. Note that two ablation systems, that are "\(\mathbf{w}/\mathbf{o}\)\(\mathcal{L}_{AC}\)" and "\(\mathbf{w}/\mathbf{o}\)\(\mathcal{L}_{PC}\)", are built to validate the two new losses. Figure 1: The overall workflow of FluentEditor. The total loss function includes Reconstruction Loss, and Acoustic and Prosody Consistency Losses. ### Main Results **Objective results:** We select 400 test samples from the test set randomly and report the objective results in the second to fourth columns of Table 1. Note that we follow [5] and just measure the objective metrics of the masked region using the reconstructed speech. We observe that our FluentEditor achieves the best performance in terms of overall speech quality. For example, the MCD and STOI values of FluentEditor obtain optimal results and PESQ achieves suboptimal results among all systems. It suggests that the FluentEditor performs proper acoustic feature prediction for the speech region to be edited. Note that objective metrics do not fully reflect the human perception [22], we further conduct subjective listening experiments. **Subjective results:** For FMOS evaluation, we selected 50 audio samples from the test set and invited 20 listeners to evaluate speech fluency. Following [23], we test the insertion and replacement operations and present the FMOS results in the last two columns of Table 1. We find that FluentEditor consistently achieves superior fluency-related perceptual scores. For example, FluentEditor obtains the top FMOS value of 4.25 for insertion and 4.26 for replacement, that very close to that of ground truth. This demonstrates the effectiveness of the fluency-aware training criterion. By considering the acoustic and prosody consistency constraints, our FluentEditor allows for weakening the editing traces and improving the prosody performance of the edited speech. ### Ablation Study To further validate the contribution of our \(\mathcal{L}_{AC}\) and \(\mathcal{L}_{PC}\) respectively, the subjective and objective ablation results, of insertion and replacement, are reported in Table 2. We follow the previous section to prepare the samples and listeners. 
It is observed that the C-FMOS and MCD values of the two ablation systems both degrade when \(\mathcal{L}_{AC}\) and \(\mathcal{L}_{PC}\) are removed, respectively, indicating that the acoustic and prosody consistency constraints play a vital role in enhancing both the naturalness and fluency of the edited speech.

### Visualization Analysis

As illustrated in Fig. 2, we visualize the mel-spectrograms produced by FluentEditor and the FluentSpeech baseline1. The red boxes indicate the randomly masked segment and its reconstructed counterpart for the utterance "Scottish Women appear at Eden Court, Inverness, tonight.". We can see that FluentEditor generates mel-spectrograms with richer frequency details than the baseline, resulting in natural and expressive sounds, which further demonstrates the effectiveness of the acoustic and prosody consistency losses. Nevertheless, we recommend that the reader listen to our speech samples1 to appreciate the advantages.

Footnote 1: Due to space limits, we only report the FluentSpeech baseline. More visualization results and speech samples are available on our website: [https://github.com/Ai-S2-Lab/FluentEditor](https://github.com/Ai-S2-Lab/FluentEditor).

## 4 Conclusion

In this paper, we introduce a novel text-based speech editing (TSE) model, termed FluentEditor, that incorporates two novel fluency-aware training criteria to improve the acoustic and prosody consistency of edited speech. The acoustic consistency loss \(\mathcal{L}_{AC}\) encourages the variance at the editing boundaries to be close to the variance at real concatenation points, while the prosody consistency loss \(\mathcal{L}_{PC}\) constrains the high-level prosody features of the synthesized audio in the edited region to be close to those of the original utterance. The objective and subjective experiments on VCTK demonstrate that incorporating \(\mathcal{L}_{AC}\) and \(\mathcal{L}_{PC}\) yields superior results and ensures fluent speech with consistent prosody. In future work, we will consider multi-scale consistency and further improve the FluentEditor architecture.

\begin{table} \begin{tabular}{c|c c c|c c} \hline \hline **Method** & \multicolumn{3}{c|}{**Objective Evaluation**} & \multicolumn{2}{c}{**Subjective Evaluation (FMOS)**} \\ & **MCD \((\downarrow)\)** & **STOI \((\uparrow)\)** & **PESQ \((\uparrow)\)** & **Insertion** & **Replacement** \\ \hline Ground Truth & NA & NA & NA & 4.37 \(\pm\) 0.05 & 4.42 \(\pm\) 0.01 \\ \hline CampNet & 3.85 & 0.53 & 1.38 & 3.89 \(\pm\) 0.01 & 3.94 \(\pm\) 0.03 \\ \(A^{3}T\) & 3.79 & 0.76 & 1.59 & 3.82 \(\pm\) 0.03 & 3.83 \(\pm\) 0.02 \\ FluentSpeech & 3.50 & 0.79 & 1.93 & 4.02 \(\pm\) 0.04 & 4.04 \(\pm\) 0.01 \\ \hline **FluentEditor (Ours)** & **3.47** & **0.81** & **1.85*** & **4.25 \(\pm\) 0.03** & **4.26 \(\pm\) 0.01** \\ \hline \hline \end{tabular} \end{table} Table 1: Objective and subjective evaluation results of the comparative study. * indicates the second-best (suboptimal) value.

\begin{table} \begin{tabular}{c|c c} \hline \hline **Method** & **C-FMOS** & **MCD \((\downarrow)\)** \\ \hline FluentEditor & **0.00** & **3.47** \\ w/o \(\mathcal{L}_{AC}\) & -0.16 & 3.48 \\ w/o \(\mathcal{L}_{PC}\) & -0.21 & 3.51 \\ \hline \hline \end{tabular} \end{table} Table 2: Objective and subjective results of the ablation study.

Figure 2: Visualizations of the mel-spectrograms generated by FluentEditor and the FluentSpeech baseline.
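For illustration only, a minimal PyTorch-style sketch of the prosody consistency loss in Eqs. (3) and (4) is given below. The GST-based prosody extractor is represented by a placeholder module, and its interface (mel-spectrogram input, 256-dimensional embedding output) is an assumption for this example rather than the released FluentEditor implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProsodyConsistencyLoss(nn.Module):
    """L_PC of Eq. (3): MSE between fixed-length prosody embeddings of the
    predicted region Y0 and of the original utterance Y_hat (Eq. (4))."""

    def __init__(self, prosody_extractor: nn.Module):
        super().__init__()
        # `prosody_extractor` is assumed to be a pre-trained, frozen GST
        # reference encoder mapping a mel-spectrogram (B, T, n_mels) to a
        # 256-dimensional embedding (B, 256).
        self.prosody_extractor = prosody_extractor
        for p in self.prosody_extractor.parameters():
            p.requires_grad_(False)

    def forward(self, y0_mel: torch.Tensor, y_hat_mel: torch.Tensor) -> torch.Tensor:
        h_y0 = self.prosody_extractor(y0_mel)          # H^P_{Y0}
        with torch.no_grad():
            h_ref = self.prosody_extractor(y_hat_mel)  # H^P_{Y_hat}
        return F.mse_loss(h_y0, h_ref)
```

The acoustic consistency loss \(\mathcal{L}_{AC}\) can be written in the same pattern by replacing the prosody embedding with the boundary acoustic feature described earlier, and the total objective sums the reconstruction loss and both consistency terms over all non-contiguous masked regions.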
2308.16405
On the departure from Monin-Obukhov surface similarity and transition to the convective mixed layer
Large-eddy simulations are used to evaluate mean profile similarity in the convective boundary layer (CBL). Particular care is taken regarding the grid sensitivity of the profiles and the mitigation of inertial oscillations in the simulation spin-up. The nondimensional gradients $\phi$ for wind speed and air temperature generally align with Monin-Obukhov similarity across cases but have a steeper slope than predicted within each profile. The same trend has been noted in several other recent studies. The Businger-Dyer relations are modified here with an exponential cutoff term to account for the decay in $\phi$ to first-order approximation, yielding improved similarity from approximately 0.05$z_i$ to above 0.3$z_i$, where $z_i$ is the CBL depth. The necessity for the exponential correction is attributed to an extended transition from surface scaling to zero gradient in the mixed layer, where the departure from Monin-Obukhov similarity may be negligible at the surface but becomes substantial well below the conventional surface layer height of 0.1$z_i$.
Michael Heisel, Marcelo Chamecki
2023-08-31T02:14:59Z
http://arxiv.org/abs/2308.16405v2
# On the departure from Monin-Obukhov surface similarity and transition to the convective mixed layer ###### Abstract Large-eddy simulations are used to evaluate mean profile similarity in the convective boundary layer (CBL). Particular care is taken regarding the grid sensitivity of the profiles and the mitigation of inertial oscillations in the simulation spin-up. The nondimensional gradients \(\phi\) for wind speed and air temperature generally align with Monin-Obukhov similarity across cases but have a steeper slope than predicted within each profile. The same trend has been noted in several other recent studies. The Businger-Dyer relations are modified here with an exponential cutoff term to account for the decay in \(\phi\) to first-order approximation, yielding improved similarity from approximately \(0.05z_{i}\) to above \(0.3z_{i}\), where \(z_{i}\) is the CBL depth. The necessity for the exponential correction is attributed to an extended transition from surface scaling to zero gradient in the mixed layer, where the departure from Monin-Obukhov similarity is negligible at the surface but becomes substantial well below the conventional surface layer height of \(0.1z_{i}\). Keywords:Surface layer Convective boundary layer Monin-Obukhov similarity Large-eddy simulation ## 1 Introduction Within the atmospheric boundary layer (ABL), the surface layer is unsurprisingly the region directly above the Earth's surface. This layer is often described in terms of its properties - approximately constant flux, negligible rotation effects, and adherence to surface scaling - rather than formally defined (Sutton 1953; Kaimal and Finnigan 1994). One common convention is to assume the surface layer extends to the lowest 10% or so of the ABL (Stull 1988; Garratt 1994), consistent with the depth of the logarithmic (log) region in more general wall-bounded flows (Pope 2000). With respect to the surface scaling property, the scaling of mean flow statistics within the surface layer is given by Monin-Obukhov similarity theory (MOST, Monin and Obukhov 1954; Foken 2006) based on the log law of the wall (von Karman 1930; Prandtl 1932; Millikan 1938). MOST predicts universal similarity for the nondimensional mean gradients: \[\frac{\partial U}{\partial z}\left(\frac{\kappa z}{u_{*}}\right) =\phi_{m}\] \[\frac{\partial\theta}{\partial z}\left(\frac{\kappa z}{\theta_{* }}\right) =\phi_{h}, \tag{1}\] where the functions for \(\phi\) must be determined empirically. Here, \(z\) is height above the surface, \(U(z)\) is the mean horizontal wind speed, \(\theta(z)\) is the mean virtual potential temperature, \(u_{*}\) is the surface friction velocity scale, \(\theta_{*}\) is the surface temperature scale, and \(\kappa\) is the von Karman constant. The velocity and temperature scales are related to the surface momentum flux as \(\overline{u^{\prime}w^{\prime}}_{s}=-u_{*}^{2}\) and surface heat flux as \(\overline{w^{\prime}\theta^{\prime}}_{s}=-u_{*}\theta_{*}\) such that Eq. (1) is often referred to as the flux-gradient relations. Owing to the assumed absence of other length, velocity, or temperature scales in the theory, \(\phi_{m}\) and \(\phi_{h}\) are considered functions only of \(\zeta=z/L\) defined using the Obukhov (1946) length \[L=\frac{u_{*}^{2}\theta_{s}}{\kappa g\theta_{*}}, \tag{2}\] where \(\theta_{s}\) is the mean surface temperature and \(g\) is the gravitational constant. 
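To make these definitions concrete, the sketch below evaluates Eqs. (1) and (2) from discrete mean profiles. It is a minimal NumPy illustration; the centered-difference gradient and the example surface values are choices made for the example rather than part of any particular dataset.

```python
import numpy as np

KAPPA = 0.39   # von Karman constant (value used later in this study)
G = 9.81       # gravitational acceleration (m s^-2)

def obukhov_length(u_star, theta_star, theta_s):
    """Eq. (2): L = u_*^2 theta_s / (kappa g theta_*), negative for convection."""
    return u_star**2 * theta_s / (KAPPA * G * theta_star)

def phi(z, mean_profile, scale):
    """Eq. (1): phi = (kappa z / scale) d<profile>/dz, with scale = u_* or theta_*."""
    gradient = np.gradient(mean_profile, z)  # centered differences on the grid z
    return KAPPA * z * gradient / scale

# Example: surface scales similar to case A of Table 1 below, with an assumed
# surface temperature theta_s = 290 K, give L of roughly -415 m.
L = obukhov_length(u_star=0.81, theta_star=-0.12, theta_s=290.0)
```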
Following the introduction of MOST, evaluations of field measurements from meteorological towers have largely corroborated the surface layer theory and universality of \(\phi_{m,h}(\zeta)\). For a convective ABL (CBL) with \(L<0\) typical of daytime conditions, several experimental campaigns and reevaluations proposed power-law relations for \(\phi_{m}\) and \(\phi_{h}\) with some variability in the fitted parameters but a consistent general form for the functions (see, e.g., Dyer and Hicks 1970; Businger et al. 1971; Carl et al. 1973; Yaglom 1977; Hogstrom 1988; Wilson 2001; Katul et al. 2011). The most common of these empirical relations are the Businger-Dyer profiles for convective conditions (Businger et al. 1971; Dyer 1974) \[\phi_{m} =(1-16\zeta)^{-0.25}\] \[\phi_{h} =(1-16\zeta)^{-0.5}, \tag{3}\] where the values of the constants depend modestly on the analysis. Agreement of field measurements with MOST and the empirical relations can be further improved by accounting for additional effects not considered in the idealized theory, e.g. time-dependent variability from large-scale turbulence (Salesky and Anderson 2020) and anisotropy due to complex conditions (Stiperski and Calaf 2023). More recently, direct numerical simulations and large-eddy simulations (LES) of the CBL have produced consistent trends that support MOST to first-order approximation but also reveal possible shortcomings: gradient statistics align with the Businger-Dyer profiles when comparing results across different simulated conditions (Maronga and Reuder, 2017), but the decay in \(\phi(\zeta)\) within each individual profile is steeper than predicted by Eq. (3) (Khanna and Brasseur, 1997; Pirozzoli et al., 2017; Li et al., 2018). Accordingly, it has been proposed that surface layer gradients - particularly for velocity - may additionally depend on the boundary layer depth due to the influence of large-scale motions from the well-mixed layer that forms the bulk of the CBL (Khanna and Brasseur, 1997; Johansson et al., 2001). This idea is indirectly supported by observed trends in field measurements that suggest a parameter space beyond \(\zeta\) is required to account for variability in gradient statistics (Salesky and Chamecki, 2012). The connection between turbulent eddies in the mixed layer and surface layer statistics has been substantiated by several analyses of turbulent structure in the CBL. Conditional statistics show that the steep decay in \(\phi(\zeta)\) and deviations from MOST noted above are predominately associated with large-scale turbulent events such as downdrafts (Li et al., 2018; Fodor et al., 2019). In more general terms, these deviations are related to the modulation of near-surface turbulence by buoyancy-driven eddies from aloft (Smedman et al., 2007; Gao et al., 2016; Salesky and Anderson, 2018; Liu et al., 2019; Dupont and Patton, 2022). A signature of the boundary layer depth also appears in the velocity and temperature spectra within and above the surface layer (McNaughton et al., 2007; Chowdhuri et al., 2019). There have been relatively few attempts to model the modulation of surface layer gradients. Gryning et al. (2007) combined surface scaling with a constant mixed layer length scale, but the foremost goal was to extend similarity to higher positions above the surface layer. Salesky and Anderson (2020) corrected Eq. (3) for local-in-time deviations due to large-scale fluctuations. Li et al. 
(2021) quantified the deviation as a nonlocal transport through the framework of eddy diffusivity models. Cheng et al. (2021) and Liu et al. (2022) both introduced a correction to \(\phi\) prescribed as a function of \(z_{i}/L\), where \(z_{i}\) is the base height of the stable capping inversion and is typically used to define the CBL depth. One cautionary note regarding the previous findings and models is that several of the studies used simulations confined to relatively low Reynolds number compared to the ABL (Pirozzoli et al., 2017; Li et al., 2018; Fodor et al., 2019; Cheng et al., 2021). Considering the log law only emerges for high Reynolds numbers (Marusic et al., 2013; Sillero et al., 2013; Lee and Moser, 2015), the results may reflect a combination of buoyancy effects and finite Reynolds number corrections to the log law. Additionally, for wall-modeled LES studies the grid convergence of surface layer statistics is often not scrutinized (Maronga and Reuder, 2017). The presence of these limitations precludes a careful quantitative comparison of deviations from MOST observed across the literature. In the present work, recurring trends in \(\phi(\zeta)\) observed from simulations are further evaluated using new LES of the idealized dry CBL. In consideration of the effects noted above, the LES is for the inviscid limit and includes a detailed test of grid sensitivity. To account for the observed trends, the dependencies of \(\phi_{m}\) and \(\phi_{h}\) in Eq. (1) are expanded to include \(z_{i}\) following the suggestion of Khanna and Brasseur (1997) and recent evidence of revised similarity for stably stratified conditions (Heisel and Chamecki, 2023). The goal is to empirically explain the behavior of the simulated mean profile statistics in the broader context of the transition from the surface layer to local free convection, while also reconciling the simulation results with the widespread support for MOST from field experiments discussed above. The proposed explanation - an accelerated decay in \(\phi\) as a function of \(z/z_{i}\) - has a limited effect on statistics very close to the surface where many field measurements are acquired, is qualitatively consistent with the profile shapes seen in recent simulation studies, and accounts for the profile transition between the surface and mixed layers with reasonable accuracy. The remainder of the article is organized as follows: the new LES cases are described in Sect. 2; mean profile similarity is assessed in Sect. 3; implications for resistance laws in the mixed layer and for the definition of the surface layer are discussed in Sect. 4; finally, a summary is given in Sect. 5.

## 2 Large-eddy simulations

The present simulations were conducted using standard practices for representing an idealized dry convective ABL (see, e.g., Deardorff 1972; Moeng and Sullivan 1994; Sullivan et al. 1994; Noh et al. 2003; Salesky et al. 2017): a range of unstable conditions was achieved by imposing different combinations of fixed surface heat flux \(Q_{*}=\overline{w^{\prime}\theta^{\prime}}_{s}\) and geostrophic wind speed \(U_{g}\), and the boundary layer was confined by a stable capping inversion. The inversion was introduced through the initial temperature profile as defined in Sullivan and Patton (2011) using the same lapse rate \(\Gamma=0.08\) K m\({}^{-1}\). Additional imposed parameters include the aerodynamic roughness length \(z_{o}=0.1\) m and Coriolis frequency \(f=1\times 10^{-4}\) s\({}^{-1}\).
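The spin-up durations quoted below are expressed in convective turnover units \(z_{i}/w_{*}\), where \(w_{*}=(g\,z_{i}\,Q_{*}/\theta_{0})^{1/3}\) is the Deardorff (1970) convective velocity. As a rough orientation, and assuming a reference temperature \(\theta_{0}\approx 300\) K (a value not specified here), the imposed heat fluxes give \(w_{*}\approx 1.5\) to \(2\) m s\({}^{-1}\) and turnover times of roughly 8 to 11 min; the sketch below makes that arithmetic explicit.

```python
import numpy as np

G = 9.81          # gravitational acceleration (m s^-2)
THETA_0 = 300.0   # assumed reference virtual potential temperature (K)

def deardorff_velocity(q_s, zi):
    """Convective velocity scale w_* = (g z_i Q_s / theta_0)^(1/3) (Deardorff 1970)."""
    return (G * zi * q_s / THETA_0) ** (1.0 / 3.0)

# Imposed surface heat fluxes of the simulated cases (K m s^-1), nominal z_i ~ 1 km
for q_s in (0.1, 0.17, 0.24):
    w_star = deardorff_velocity(q_s, zi=1000.0)
    print(f"Q_* = {q_s:.2f}: w_* = {w_star:.2f} m/s, turnover = {1000.0 / w_star / 60:.1f} min")
```

Fifteen turnovers then correspond to roughly two to three hours, broadly consistent with the 140 to 180 min spin-up durations reported in Sect. 2.1.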
Six primary cases with varying \(Q_{*}\) and \(U_{g}\) and the resulting scaling parameters are summarized in Table 1. A seventh case (E1) is the same as case E, but with the initial capping inversion positioned 200 m lower as seen in the \(z_{i}\) values that were determined from the height of the minimum heat flux. The range of simulated conditions span from relatively weak (\(-z_{i}/L=2.5\)) to moderately strong (\(-z_{i}/L=39\)) convection. All cases employed a numerical grid with 1024\(\times\)1024\(\times\)512 points and corresponding domain dimensions of 12\(\times\)12\(\times\)2 km. Dimensional profiles of the mean horizontal wind speed and potential temperature are respectively shown in Figs. 1a and 1b for the seven simulated cases. An extensive mixed layer with approximately uniform wind speed and air tempera \begin{table} \begin{tabular}{c c c c c c c c} \hline Case & \(U_{g}\) & \(Q_{*}\) & \(u_{*}\) & \(-\theta_{*}\) & \(-L\) & \(z_{i}\) & \(-z_{i}/L\) \\ & (m s\({}^{-1}\)) & (K m s\({}^{-1}\)) & (m s\({}^{-1}\)) & (K) & (m) & (m) & (–) \\ \hline A & 15 & 0.1 & 0.81 & 0.12 & 415 & 1040 & 2.5 \\ B & 15 & 0.17 & 0.82 & 0.21 & 256 & 1080 & 4.2 \\ C & 15 & 0.24 & 0.83 & 0.29 & 188 & 1110 & 5.9 \\ D & 12 & 0.24 & 0.71 & 0.34 & 119 & 1110 & 9.3 \\ E1 & 9 & 0.24 & 0.56 & 0.43 & 60 & 926 & 15.5 \\ E & 9 & 0.24 & 0.58 & 0.41 & 65 & 1110 & 17.0 \\ F & 6 & 0.24 & 0.44 & 0.54 & 29 & 1110 & 38.8 \\ \hline \end{tabular} \end{table} Table 1: Key scaling parameters for large-eddy simulations (LES) of the convective atmospheric boundary layer (CBL) on a 1024\(\times\)1024\(\times\)512 numerical grid: imposed geostrophic wind speed \(U_{g}\), imposed surface heat flux \(Q_{*}\), friction velocity \(u_{*}\), surface temperature scaling \(\theta_{*}\), Obukhov length \(L\), boundary layer depth based on the capping inversion height \(z_{i}\), and bulk instability parameter \(z_{i}/L\). ture is present for all cases. The temperature in the mixed layer increases with \(Q_{*}\) and is highest for case E1 with a shallower CBL that can be heated more quickly. Hereafter, each case is referred to using its alphabetical label A-F indicated in the figure legends alongside the \(z_{i}/L\) values. Unless otherwise noted, "velocity" refers to the magnitude of the horizontal components \(U_{x}\) and \(U_{y}\), and "temperature" refers to the virtual potential temperature. Further details on the numerical code and procedure used to generate the Fig. 1 profiles are given below. The flow solver and numerics are a modified version of the LESGO code (Albertson and Parlange, 1999). The code features include a staggered vertical grid, time integration using the Adams-Bashforth method, vertical differentiation using second-order-accurate finite differencing, pseudospectral horizontal differentiation with full de-aliasing, fluctuation damping in the top 25% of the domain (Nieuwstadt et al., 1993), Lagrangian averaged scale-dependent modeling of subgrid-scale (SGS) stresses (Bou-Zeid et al., 2005), and SGS heat flux modeling using a constant Prandtl number \(\mathrm{Pr}_{sgs}=0.4\). A more detailed description is given elsewhere (Kumar et al., 2006; Salesky et al., 2017). The surface stress in the wall-modeled LES is estimated locally in space and time using MOST and Eq. (3) for momentum (Moeng, 1984). The wall model for \(u_{*}\) is subject to so-called overshoot and log-law mismatch (Mason and Thomson, 1992; Brasseur and Wei, 2010; Larsson et al., 2015). 
To mitigate this effect, the wall model was evaluated using velocity values at \(0.05z_{i}\) rather than the first grid point (Kawai and Larsson, 2012), where \(z_{i}\) in this case is based on the initial temperature profile and for simplicity does not change with time. Based on a comparative test, it was found that using values farther from the surface reduces the wall model overshoot but does not noticeably improve the convergence discussed below. The final methodology considerations are the procedure to spin up the simulations from initial conditions and the sensitivity of the results to grid resolution. Both of these aspects are crucial to the repeatability and validity of the study and an extended discussion is given for each in the two subsections below. Figure 1: Mean profiles of horizontal wind speed \(U=(U_{x}^{2}+U_{y}^{2})^{1/2}\) and virtual potential temperature \(\theta\). In this and later figures, the legend corresponds to the \(z_{i}/L\) stability parameter and alphabetical label for each convective LES case in Table 1. ### Simulation spin-up and inertial oscillations A common approach for simulating the CBL is to impose an initially geostrophic velocity field with no boundary layer, i.e. with \(U_{x}(z)=U_{g}\) and \(U_{y}(z)=0\). The simulations are then spun up for a duration in the range of \(tw_{*}/z_{i}\approx 5\) to 30 turnover times until statistics appear stationary (see, e.g., Moeng and Sullivan, 1994; Noh et al., 2003; Salesky et al., 2017; Maronga and Reuder, 2017; Li et al., 2018; Liu et al., 2023, among others), where \(z_{i}\) and the convective velocity \(w_{*}\)(Deardorff, 1970) represent properties of the large-scale eddies that govern dynamics in the bulk of the CBL. While the reasoning in this approach is sound, it neglects the presence of inertial oscillations resulting from the initial velocity field (e.g., Shibuya et al., 2014; Momen and Bou-Zeid, 2017), noting the oscillations are not relevant to free convection cases with no geostrophic wind. When the \(U_{x}\) and \(U_{y}\) fields differ significantly from the quasi-equilibrium condition, an oscillatory response arises from the Coriolis force and the oscillations dampen in time due to the surface stress (Schroter et al., 2013). An example is shown in Fig. 2, where the trivial initial profile in 2a (spin-up 1, yellow) results in an oscillating time evolution of average surface shear stress \(\tau_{s}\) (2b), mixed layer wind speed \(U_{m}\) (2c), and stability \(z_{i}/L\) (2d). The same time evolution occurs during spin-up of the conventionally neutral ABL (Pedersen et al., 2014; Liu et al., 2021). For the present example, case A in Table 1 is employed on a coarser \(200\times 200\times 100\) numerical grid with \(f=1.4\times 10^{-4}\) s\({}^{-1}\) increased to a typical polar value and the initial inversion height reduced by 200 m in the same manner as case E1. The latter two changes were necessary to prevent the CBL from reaching the damping layer before completing an inertial period \(2\pi/f\). Parameters with overbars (\(\overline{\cdot}\)) indicate the long-term average value over the full period. Ending the simulations near \(tw_{*}/z_{i}\approx 20\) when the statistics are momentarily stationary would lead to shear stress and wind speed values that are significantly out of equilibrium with the conditions given by \(Q_{*}\), \(z_{o}\), \(U_{g}\), \(f\), and the initial \(z_{i}\). 
Of particular importance to the present analysis is the Obukhov length \(L\) which varies by approximately 50% within the first half oscillation for the example in Fig. 2d. The amplitude of inertial oscillations can be reduced by using initial \(U_{x}\) and \(U_{y}\) velocity fields that are closer to the quasi-equilibrium. To this end, an ad hoc procedure using multiple spin-up trials was developed to determine appropriate initial conditions for the final simulations. For the initial velocity profiles of the second spin-up, the mean conditions at \(tf=\pi\) in the first spin-up were rescaled along \(z\) to match the original \(z_{i}\) as shown in the inset of Fig. 2a. Additionally, the original initial temperature profile (Sullivan and Patton, 2011) was used rather than a rescaled temperature profile in order to restore the two initial inversion layers. The profile re-scaling and new spin-ups can be repeated as necessary until the initial condition is close to the equilibrium. In Fig. 2, the oscillations are significantly reduced for spin-up 2, and there are limited changes in the initial profiles and time evolution between spin-ups 3 and 4, suggesting the proper initial velocity field has been reached. It is also important to note that the wind speed and scaling parameters slowly increase over time in Figs. 2b-d in response to the growth of the CBL through entrainment. One consequence of the long spin-up trials is that a low-level jet can develop within the entrainment layer directly above the heat flux minimum, particularly for weaker convection. This feature is not apparent from \(U_{x}\) and \(U_{y}\) for the profiles at \(tf=\pi\) in the inset of Fig. 2a (yellow lines), and is more evident from the horizontal wind magnitude \(U\). The jet is excluded from the rescaled profiles by assuming a linear trend in the top 100 m of \(U_{x}\) and \(U_{y}\) as seen in Fig. 2a. The jet is far from the region of interest below the mixed layer and understanding its emergence is outside the scope of the present study. While Fig. 2 is predominately for demonstration purposes, the same spin-up procedure was employed for the cases in Table 1 to approximately determine the quasi-equilibrium wind profiles. For each simulated condition, four spin-up trials were completed on a grid with \(200\times 200\times 100\) points: the first with initial conditions \(U_{x}=U_{g}\) and \(U_{y}=0\), and the subsequent trials using the rescaled profiles as described above. The rescaled profiles resulting from the fourth trial were then used as initial conditions for the final simulations. The final simulations were spun-up on a larger grid with 400\(\times\)400\(\times\)200 points for approximately \(tw_{*}/z_{i}\approx\) 15 turnover times. This duration ranges from 140 to 180 physical minutes for the different cases and is long enough for the flux profiles to develop, but short enough to avoid interference of the damping layer on the Figure 2: Demonstration of the employed spin-up procedure and the impact of initial conditions on inertial oscillations. For the initial velocity profiles in (a), the resulting average surface shear stress \(\tau_{s}\) (b), mixed layer wind speed \(U_{m}\) (c), and stability \(z_{i}/L\) (d) are plotted over time for one inertial period. During a given spin-up, the velocity profiles at \(tf=\pi\) are rescaled to the original \(z_{i}\) and used as the initial profiles for the subsequent spin-up, as shown in the inset of (a). 
Parameters with overbars (\(\overline{\cdot}\)) indicate the long-term average value over the full period. growing CBL. Because the initial \(U_{x}\) and \(U_{y}\) profiles are close to the equilibrium in the final spin-up, the inertial oscillations are minimized and it is not necessary to simulate a full inertial period. After the spin-up, the velocity and temperature fields were interpolated onto the final grid with 1024\(\times\)1024\(\times\)512 points, and the simulations were continued for an additional 70 physical minutes. A short period is required for small-scale turbulence to develop within the finer grid, such that the first 10 minutes are excluded from the analysis. The time-averaged statistics for each case were computed across the last 60 minutes of the simulations. The time-averaged statistics are featured in Fig. 1 and all later results.

### Grid sensitivity of near-surface statistics

For wall-modeled LES, flow statistics near the surface can be significantly biased by the wall model and grid resolution. To avoid inclusion of such biases in the analysis, the present section assesses the specific range of heights near the surface where the mean and gradient statistics are approximately converged with respect to grid resolution. More general discussions of grid resolution effects and mesh sensitivity are available elsewhere (e.g., Davidson, 2009; Sullivan and Patton, 2011; Berg et al., 2020; Wurps et al., 2020). To test the sensitivity of results to grid resolution, cases A, C, and E from Table 1 were repeated for the series of grid sizes summarized in Table 2. For a given case, all resolutions used the same initial velocity profiles and spin-up as determined from Sect. 2.1, with the different resolution introduced for the final 70 minutes of simulation. The sensitivity analysis only directly uses the two finest grids, but the additional coarser grids are useful for identifying general trends discussed later in the context of the results. Mean profiles of wind speed and air temperature are shown in Fig. 3 for all tested grid sizes, with a vertical displacement between different cases for visualization. The profiles are plotted as the diabatic term (Panofsky, 1963)

\begin{table} \begin{tabular}{c c c c} \hline \hline \(N_{x}\times N_{y}\times N_{z}\) & \(L_{x}\times L_{y}\times L_{z}\) & \(\Delta_{x}\times\Delta_{y}\times\Delta_{z}\) & \(\Delta/z_{i}\) \\ (–) & (km) & (m) & (–) \\ \hline 100 \(\times\) 100 \(\times\) 50 & 12 \(\times\) 12 \(\times\) 2 & 120 \(\times\) 120 \(\times\) 40 & 0.080 \\ 200 \(\times\) 200 \(\times\) 100 & 12 \(\times\) 12 \(\times\) 2 & 60 \(\times\) 60 \(\times\) 20 & 0.040 \\ 400 \(\times\) 400 \(\times\) 200 & 12 \(\times\) 12 \(\times\) 2 & 30 \(\times\) 30 \(\times\) 10 & 0.020 \\ 600 \(\times\) 600 \(\times\) 300 & 12 \(\times\) 12 \(\times\) 2 & 20 \(\times\) 20 \(\times\) 6.7 & 0.013 \\ 800 \(\times\) 800 \(\times\) 400 & 12 \(\times\) 12 \(\times\) 2 & 15 \(\times\) 15 \(\times\) 5 & 0.010 \\ 1024 \(\times\) 1024 \(\times\) 512 & 12 \(\times\) 12 \(\times\) 2 & 12 \(\times\) 12 \(\times\) 3.9 & 0.0078 \\ \hline \hline \end{tabular} \end{table} Table 2: Numerical grids used to test the resolution convergence of flow statistics for cases A, C, and E in Table 1.
Grid properties include the number of nodes \(N_{j}\) along each direction \(j=x\), \(y\), and \(z\), the domain size \(L_{j}\), corresponding grid resolution \(\Delta_{j}\), and effective resolution \(\Delta=(\Delta_{x}\Delta_{y}\Delta_{z})^{1/3}\) relative to the inversion height for case A. Results reported later are based on the finest resolution.

\[\psi_{m} = \log\left(\frac{z}{z_{o}}\right)-\frac{\kappa U}{u_{*}}=\int\limits_{-z_{o}/L}^{-\zeta}\frac{1-\phi_{m}(\zeta^{\prime})}{\zeta^{\prime}}d\zeta^{\prime}\]
\[\psi_{h} = \log\left(\frac{z}{z_{o}}\right)-\frac{\kappa\left(\theta-\theta_{s}\right)}{\theta_{*}}=\int\limits_{-z_{o}/L}^{-\zeta}\frac{1-\phi_{h}(\zeta^{\prime})}{\zeta^{\prime}}d\zeta^{\prime} \tag{4}\]

that quantifies the difference between the local mean value and the log law, where \(\psi\) increases with convection. The value used here for the von Karman constant is \(\kappa=0.39\) (Marusic et al., 2013). To more directly compare the profiles across resolution, the normalization in Fig. 3 uses fixed scaling parameters \(u_{*}\) and \(\theta_{*}\) from the finest grid size rather than the parameter values resulting from each individual resolution. While there is an order of magnitude increase in resolution from the coarsest to finest grids in Table 2, the corresponding increase in \(u_{*}\) is only 6.0% for case A and 2.6% for case E. The difference in \(u_{*}\) between the two largest grids is 0.14% for case A and 0.15% for case E. The criterion used to determine the converged region of the mean profiles is the percent difference in \(\psi\) between the grids with 800\(\times\)800\(\times\)400 points and 1024\(\times\)1024\(\times\)512. Specifically, the difference in \(\psi\) must be less than 1% for this 28% increase in resolution. The vertical lines in Fig. 3 indicate the lowest height where this criterion is met, which varies from 0.04\(z_{i}\) (case E) to 0.06\(z_{i}\) (case A). The converged regions for cases B and D were inferred through interpolation, and the lower height in meters for case E was directly used for cases E1 and F. Later figures either exclude statistics in the near-surface region where convergence is not observed or clearly differentiate the unconverged results.

Figure 3: Nondimensional mean wind speed (a) and air temperature (b) with varying grid resolution for cases A (top), C (middle) and E (bottom). Means are expressed as the diabatic term \(\psi\) defined in Eq. (4). Short vertical lines indicate the start of the converged region where the change in \(\psi\) is less than 1% between the finest two grids. The means are normalized using the von Kármán constant \(\kappa=0.39\).

Finally, while it is not apparent from Fig. 3, the mean wind speed and air temperature in the mixed layer are converged for the 600\(\times\)600\(\times\)300 grid in case A, and at coarser resolutions for stronger convection. A similar assessment is made for the nondimensional gradient profiles \(\phi(z)\) in Fig. 4. The general concave shape of the profiles at the surface matches closely with previous observations (e.g., Bou-Zeid et al. 2005; Maronga and Reuder 2017) and demonstrates that the influence of the wall model, SGS model, and near-surface resolution extends well beyond the first grid points. The criterion used here for convergence of the gradient statistics is that the difference in \(\phi\) must be less than 0.01 between the grids with 800\(\times\)800\(\times\)400 points and 1024\(\times\)1024\(\times\)512.
Using the same 1% threshold as above was found to be overly conservative given the very small magnitude of the derivatives far from the surface and would result in the defined converged regions beginning at moderately higher positions. Under weak convection in case A, the gradient statistics do not converge within the traditional surface layer below 0.1\(z_{\mathrm{i}}\). With increased convection, only the upper portion of the surface layer exhibits grid convergence for the relatively fine resolution employed here. Figure 4 demonstrates the challenge in using wall-modeled LES to critically evaluate near-surface behavior with a high degree of certainty. Accordingly, the conclusions drawn in the present work are confined to robust trends that extend beyond the surface layer. With the threshold of 0.01, the converged regions of \(\phi\) begin approximately 20 to 30 points away from the surface. The exact number of points is expected to depend on numerous variables including the LES wall and SGS models, grid size, grid aspect ratio, and convergence criterion, such that the quantitative outcomes of Figs. 3 and 4 are considered to be specific to this study. Figure 4: Nondimensional gradients of velocity (a) and air temperature (b) with varying grid resolution for cases A (top), C (middle) and E (bottom). Gradients are expressed as \(\phi\) defined in Eq. (1). Short vertical lines indicate the start of the converged region where the change in \(\phi\) is less than 0.01 between the finest two grids. ## 3 Mean velocity and temperature similarity The profiles in Fig. 1, generated using the spin-up procedure outlined in Sect. 2.1, exhibit convergence with respect to grid resolution in the upper portion of the surface layer as discussed in Sect. 2.2. Similarity in the region with converged statistics, including in the convective matching layer above the surface layer, is now evaluated in further detail. ### Comparison with Businger-Dyer relations The diabatic mean wind speed and air temperature profiles predicted by the Businger-Dyer relations in Eq. (3) result directly from the integration in Eq. (4) (Paulson, 1970): \[\psi_{m}=2\log\bigg{(}\frac{1+x}{2}\bigg{)}+\log\bigg{(}\frac{1+x ^{2}}{2}\bigg{)}-2\tan^{-1}x+\frac{\pi}{2}\] \[\psi_{h}=2\log\bigg{(}\frac{1+x^{2}}{2}\bigg{)}. \tag{5}\] Here, \(x=\phi_{m}^{-1}=(1-16\zeta)^{0.25}\) and the contribution \(\psi(-z_{o}/L)\) from the lower limit of the integral is neglected under the condition \(-z_{o}/L\ll 1\). In Figs. 5a and 5b, the relations in Eq. (5) are compared with \(\psi\) computed directly from the LES profiles using Eq. (4). The range of heights included for each profile in Figs. 5a and 5b spans from the bottom of the converged region identified from Fig. 3 up to \(0.3z_{i}\). The upper limit extends beyond the traditional definition for the surface layer and is included for consistency with the later analysis. The height \(0.1z_{i}\) is indicated in all figures where necessary to facilitate the distinction of trends within and above the traditional surface layer. While Eq. (5) appears to conform to the general shape of each LES profile, there is a distinct stability trend in which the profiles become increasingly offset from the prediction with weakening convection. The displacement between cases is amplified in the gradient \(\phi\) profiles shown in Figs. 5c and 5d. 
Consistent with previous observations, the general trend across cases appears to follow a curve similar to the Businger-Dyer profiles, but the decay in \(\phi\) along each individual profile is significantly steeper (Maronga and Reuder, 2017; Pirozzoli et al., 2017; Li et al., 2018). Unlike the referenced studies, however, the entirety of each \(\phi\) curve is below the Businger-Dyer profiles. This difference is likely due to a combination of excluding the near-surface points from the present cases and the significant effect of inertial oscillations on the relevant normalization parameters \(u_{*}\) and \(L\) as seen in Fig. 2. Figure 5 demonstrates that MOST and \(\zeta\) alone cannot fully account for the LES profile trends. Scaling adjustments to the definitions of \(\zeta\) and/or \(\phi\) are required for the profiles within the surface layer to collapse along a common curve. At the same time, alignment with Businger-Dyer relations in field experiments and across different cases in simulation studies suggest that Eq. (3) provides a reasonably accurate foundation for similarity in the mean profiles. In the following section, the trends in \(\phi\) are considered in the context of the extended profile to identify a possible similarity framework that is compatible with these past and present findings. ### Gradient profile trends The nondimensional \(\phi\) profiles in Figs. 5c and 5d are replotted in Fig. 6 as a function of relative position \(z/z_{i}\) within the CBL. The axes are shown with both logarithmic (top) and log-linear (bottom) scaling to facilitate interpretation of the profile shapes in different regions. The transparent extension of each curve is the region that did not meet the resolution convergence criterion detailed in Sect. 2.2 and Fig. 4. The velocity gradients approaching the mixed layer exhibit incomplete statistical convergence, leading to random error along the profiles. Recalling that the observed values \(\phi\lesssim O(0.1)\) are the product of the gradient and the height \(z\sim O(100\) m) in Eq. (1), the dimensional gradient is exceedingly small in this region. Additional computing resources are not currently available to continue the simulations for a longer duration. Conclusions made from the gradient statistics are limited to trends that exceed the observed variability within each profile. For the region near and below \(0.1z_{i}\) in Figs. 6a and 6b, there is noticeable curvature in the profile for weak convection in case A, but \(\phi(z)\) increasingly resembles a power law with increasing convection. Further, the slope of the profiles, i.e. the exponent of the power law, does not vary across the tested cases. These trends are consistent with the form of the Businger-Dyer profiles in Eq. (3), where the contribution of 1 vanishes with increasing convection and the exponent is assumed Figure 5: Nondimensional profiles as a function of \(\zeta=z/L\), compared to Businger-Dyer relations (thin lines) from Eqs. (3) and (5). Rows correspond to the mean \(\psi\) (a,b) computed from Eq. (4) and gradient \(\phi\) (c,d) computed from Eq. (1), and columns correspond to momentum (a,c) and heat (b,d). Each profile includes heights from the bottom of the converged region determined in Sect. 2.2 up to \(0.3z_{i}\), and short lines indicate the height \(z=0.1z_{i}\) for reference. constant. With respect to grid resolution, the approximate power law in \(\phi(z)\) under moderate convection is only apparent for the two largest grids tested in Table 2. 
At higher positions, the mean gradients in Figs. 6a and 6b appear to be governed by an accelerating decay in \(\phi\) that results in the cutoff \(\phi(z{=}0.4z_{i})\approx 0\). The position of the cutoff is approximately constant as a fraction of \(z_{i}\), whereas the start of the cutoff seen in Figs. 5c and 5d unambiguously varies with \(L\). This has important implications for the approach to free convection and \(-\zeta\rightarrow\infty\): rather than the mixed layer starting at lower positions and the local free convection reaching closer to the surface with decreasing \(L\), Fig. 6 suggests the near-surface region maintains a fixed height and \(\phi\) decreases with increasing convection until it vanishes in the free convection limit. The shape of \(\phi\) approaching the mixed layer in Figs. 6c and 6d is approximately linear, particularly for the temperature. The linear trend implies a logarithmic decay in \(\phi\) and a dimensional gradient that resembles \(-\log(z)/z\). The height where the logarithmic decay gains leading order importance over the approximate power law appears to depend on stability and emerges at lower heights for the weaker convection cases. Importantly, the decay observed here is specific to a CBL with a well-defined mixed layer as seen in Fig. 1. For the tested grid sizes in Table 2, the logarithmic trend begins to appear with an overestimated slope for the grid with Figure 6: Nondimensional gradient profiles as a function of \(z/z_{i}\). Rows correspond to logarithmic (a,b) and log-linear (c,d) axis scaling, and columns correspond to momentum \(\phi_{m}\) (a,c) and heat \(\phi_{h}\) (b,d). Transparent regions of each curve are considered unconverged for the present resolution based on the assessment in Fig. 4. 400\(\times\)400\(\times\)200 points, and the slope is approximately converged for the grid with 600\(\times\)600\(\times\)300 points. The trends in Fig. 6, i.e. the consistency with Eq. (3) closer to the surface and the logarithmic decay approaching the mixed layer height, show the potential to augment the existing Businger-Dyer profiles with an additional term that enforces the accelerated decay in \(\phi\) with increasing \(z/z_{i}\). In this sense, the term is a correction for the transition between the surface and mixed layers, where previous findings (e.g., Salesky and Chamecki, 2012; Pirozzoli et al., 2017; Li et al., 2018) and Fig. 5 indicate the correction is necessary even within the traditional surface layer below \(0.1z_{i}\). The existing model corrections for \(\phi\) discussed in the introduction do not account for the specific trends observed in Fig. 6. For instance, the corrections based on \(z_{i}/L\) displace \(\phi\) by a constant value for a given case and do not consider the shape of the cutoff (Cheng et al., 2021; Liu et al., 2022). The cutoff is also not well described by an inverse summation of length scales (Gryning et al., 2007). A preliminary effort to empirically model the gradient cutoff is given in the following section. ### Preliminary model for extended similarity Deeper within the ABL in the roughness sublayer (RSL), the mean profiles are influenced by complex turbulent drag and mixing interactions associated with the local surface roughness. 
In this sublayer, it is standard to correct for the mean similarity in a multiplicative manner as \(\phi_{m}(\zeta)\varphi_{m}(z/z_{*})\), where \(\phi_{m}\) accounts for atmospheric stability and \(\varphi_{m}\) is a correction based on the relative position within the RSL depth \(z_{*}\)(e.g., Garratt, 1980; Cellier and Brunet, 1992; Molder et al., 1999). The most common functional form for \(\varphi\) is an exponential that models the decay in the gradient within the RSL as the surface is approached (Garratt, 1980; Harman and Finnigan, 2007; Mo et al., 2022). The same form is adopted here, except the purpose of the exponential in this case is to ensure the gradient decreases towards zero approaching the mixed layer rather than within the RSL. The revised similarity relations are given as \[\phi_{m}(\zeta,z/z_{i}) =(1-b_{m}\zeta)^{-0.25}\exp\left(-c_{m}\frac{z}{z_{i}}\right)\] \[\phi_{h}(\zeta,z/z_{i}) =a_{h}(1-b_{h}\zeta)^{-0.5}\exp\left(-c_{h}\frac{z}{z_{i}}\right), \tag{6}\] where \(a\), \(b\), and \(c\) are fitted constants and the leading constant \(a_{h}\) for temperature accounts for the turbulent Prandtl number. The exponent values in Eq. (3) are adopted here, noting that testing a wide range of exponents resulted in values within \(\pm 0.05\) of \(0.25\) and \(0.5\) and only a nominal increase in the coefficient of determination \(R^{2}\) for the fit. The exponential is included in the definition of \(\phi\) in Eq. (6) rather than as a separate \(\varphi\) function because it is part of the same stability correction. The constant \(c\) determines the height relative to \(z_{i}\) where the exponential becomes small. A value \(c>2\) is expected such that the cutoff function decreases to small values within the lower half of the CBL. To evaluate the applicability of the revised similarity relations, the expressions for \(\phi\) in Eq. (6) and their integral \(\psi\) defined in Eq. (4) were fitted to the LES profiles. The integral for \(\psi\) was computed numerically in the absence of a simple analytical solution. The reason for including \(\psi\) in the fitting procedure is to assess the extension of \(\phi\) down to \(z=z_{o}\). While the near-surface region cannot be fitted directly, Eq. (6) must have the correct cumulative magnitude below the fitted region in order to align with the LES profiles for \(\psi\). The cost function for the nonlinear fitting algorithm was the total residual between the predicted and observed \(\phi\) and \(\psi\) values compiled for all cases simultaneously. The fit result therefore represents the range of convective conditions rather than any individual case. The fitting procedure was conducted separately for the velocity and temperature statistics. Due to the complexity of the equations and the use of numerical integration, the algorithm was unable to converge when multiple parameters were undefined. Accordingly, the fit was designed to optimize \(c\) with \(a\) and \(b\) as prescribed inputs, and was repeated for a range of \(a\) and \(b\) values. The values presented here are those with the highest resulting \(R^{2}\), with \(a_{m}=1\) assumed for velocity. As noted above, the power law exponents in Eq. (6) were also varied before the traditional values were selected for simplicity. Finally, heights up to \(0.3z_{i}\) were included in the fits under the assumption that Eq. (6) approximates the transition across an extended range up to the convective mixed layer. 
The preliminary values resulting from the fit to the velocity profiles are \(b_{m}\approx 22\), \(c_{m}\approx 3.7\), and \(R^{2}=0.974\). The values for the temperature profiles are \(a_{h}\approx 0.93\), \(b_{h}\approx 14\), \(c_{h}\approx 2.9\), and \(R^{2}=0.992\). The higher \(R^{2}\) for temperature is likely due in part to the additional fitted parameter \(a_{h}\) and the better statistical convergence of \(\phi\) in Fig. 6 relative to velocity. Owing to the lack of near-surface points, the present values for \(a_{h}\) and \(b\) are not suggested as replacements to existing values. Further, the difference between \(c_{m}\approx 3.7\) and \(c_{h}\approx 2.9\) is not given a physical interpretation here. It may result simply from the fact that the smaller Businger-Dyer exponent \(-0.25\) for momentum requires a larger cutoff correction to have similar \(\phi\) near the mixed layer. The main outcome of the fit is to demonstrate the systematic improvement of the profile prediction with the inclusion of a \(z/z_{i}\) cutoff. Figure 7 compares the mean profiles predicted from the integral of Eq. (6) (dashed lines) with the LES profiles. The dotted lines in Figs. 7a and 7b are Eq. (6) with the fitted values, but excluding the exponential cutoff. When plotted as \(\psi\), these dotted lines collapse along the solid black lines in Figs. 7c and 7d defined by the integral of the equations given in the legend. The expression with the exponential cutoff leads to significant improvements in the alignment with the LES profiles in Figs. 7a and 7b across all heights within the converged region, particularly for heights above \(0.1z_{i}\). While there are some discrepancies, most notably for cases A and F, the predictions resulting from Eq. (6) provide a reasonable approximation of the mean profiles for an extended range from below \(0.1z_{i}\) to above \(0.3z_{i}\) and near the start of the convective mixed layer. The diabatic term \(\psi\) in Figs. 7c and 7d demonstrates the departure of the mean profiles from surface layer similarity as a result of the accelerated decay in the gradients. The dashed lines representing Eq. (6) all begin at \(\psi(z{=}z_{o})=0\) and become increasingly dissimilar from Monin-Obukhov scaling similarity with increasing \(z/z_{i}\), i.e. the \(\psi\) values spread farther apart. This dissimilarity is well predicted by the exponential cutoff correction. Figure 8 evaluates the gradient profiles predicted from Eq. (6) (dashed lines) in the same manner as the mean values in Fig. 7. As before, the dotted lines in Figs. 8a and 8b exclude the exponential cutoff and are equivalent to the solid black lines in Figs. 8c-f defined by the equations in the legends. Equation (6) matches closely with the \(\phi\) curves in Figs. 8a and 8b compared to the Businger-Dyer profiles without a cutoff correction. However, the exponential cutoff does not fully account for the entirety of the gradient decay, as seen in the deviations from the LES profiles that emerge between 0.2-0.3\(z_{i}\). As discussed previously, the dimensional value of the gradients in this range of heights is very small such that the discrepancy may be of limited practical importance. For instance, the discrepancy in \(\phi\) approaching the mixed layer is not readily seen in the Figs. 7a and 7b mean profiles. The plots of \(\phi(\zeta)\) in Figs. 8c and 8d show the departure from Monin-Obukhov similarity in the LES profiles compared with the prediction from the exponential cutoff. 
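The dashed-line predictions discussed here follow from Eq. (6) with the fitted constants quoted above and from numerical evaluation of the integral in Eq. (4). A minimal sketch of that forward calculation is given below; the quadrature routine and the example heights are choices made for illustration rather than the exact post-processing used for the figures.

```python
import numpy as np
from scipy.integrate import quad

def phi_revised(z, L, zi, a, b, c, exponent):
    """Eq. (6): Businger-Dyer power law with an exponential z/z_i cutoff."""
    return a * (1.0 - b * z / L) ** exponent * np.exp(-c * z / zi)

def psi_revised(z, L, zi, z0, a, b, c, exponent):
    """Numerical evaluation of the integral in Eq. (4) with phi from Eq. (6)."""
    integrand = lambda zp: (1.0 - phi_revised(zp, L, zi, a, b, c, exponent)) / zp
    value, _ = quad(integrand, z0, z)
    return value

# Fitted constants reported above for momentum: a_m = 1, b_m ~ 22, c_m ~ 3.7
a_m, b_m, c_m = 1.0, 22.0, 3.7
# Example parameters comparable to case A in Table 1 (L ~ -415 m, z_i ~ 1040 m, z_o = 0.1 m)
L, zi, z0 = -415.0, 1040.0, 0.1
heights = np.array([0.05, 0.1, 0.2, 0.3]) * zi
psi_m = [psi_revised(z, L, zi, z0, a_m, b_m, c_m, -0.25) for z in heights]
# psi_m then gives the predicted mean wind speed profile through Eq. (4)
```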
The exponential cutoff closely approximates the decay in the gradients within and above the surface layer for the fitted cases. To collapse the nondimensional gradients along a single curve, it is necessary to group the gradient and cutoff correction as \(\phi\exp{(cz/z_{i})}\). The product, shown in Figs. 8e and 8f, now aligns reasonably well with the Businger-Dyer profiles. Most of the residual differences occur near \(0.3z_{i}\) and are due to the discrepancies seen in Figs. 8a and 8b and discussed above. Figure 7: Comparison of LES mean profiles (thick lines) with the best fit of the Eq. (6) integral (dashed lines) and the same equation without the exponential cutoff (dotted lines). Rows correspond to the mean value (a,b) and the diabatic term \(\psi\) (c,d), and columns correspond to momentum (a,c) and heat (b,d). Transparent regions of the curves in (a,b) are considered unconverged for the present resolution based on the assessment in Fig. 3. Each profile in (c,d) includes heights from the bottom of the converged region up to \(0.3z_{i}\), and short lines indicate the height \(z=0.1z_{i}\) for reference. Figure 8 provides promising evidence for expanding the similarity parameter space to include \(z/z_{i}\). The correction in Eq. (6) provides profile predictions for an extended range approaching the mixed layer and may account for the departures from MOST observed in previous simulation studies (Khanna and Brasseur, 1997; Pirozzoli et al., 2017; Li et al., 2018). However, the generality of the results remain unproven at this point and further comparison with other simulations and measurements is required. Figure 8: Comparison of LES \(\phi\) profiles (thick lines) with the best fit of Eq. (6) (dashed lines) and the same equation without the exponential cutoff (dotted lines). Rows correspond to the profiles plotted versus \(z/z_{i}\) (a,b), plotted versus \(\zeta\) (c,d), and after compensating for the exponential cutoff (e,f). Columns correspond to momentum (a,c,e) and heat (b,d,f). Transparent regions of the curves in (a,b) are considered unconverged for the present resolution based on the assessment in Fig. 4. Each profile in (c-f) includes heights from the bottom of the converged region up to \(0.3z_{i}\), and short lines indicate the height \(z=0.1z_{i}\) for reference. If Eq. (6) is applicable beyond the present cases, one interesting note is that the cutoff correction is independent of stability. The LES cases in Table 1 span the transition from relatively weak convection with thermal rolls to moderately strong convection with cells (Etling and Brown, 1993; Atkinson and Zhang, 1996; Khanna and Brasseur, 1998; Salesky et al., 2017). For the current results, the correction at a given height \(z/z_{i}\) is the same regardless of the convective regime, indicating that the different large-scale structures (i.e. rolls or cells) impinging on the surface layer reduce the average gradient by the same fraction. ## 4 Discussion ### Implications for mixed layer resistance dependencies While there is extensive theory for resistance laws in the geostrophic drag and heat transfer across the entire ABL (see, e.g. Monin, 1970; Yamada, 1976; Arya, 1977), there are relatively fewer studies relating mean wind speed \(U_{m}\) and temperature \(\theta_{m}\) in the convective mixed layer to surface properties. The derived scaling for \(U_{m}/u_{*}\) and \((\theta_{m}-\theta_{s})/\theta_{*}\) depends on underlying assumptions, in particular regarding the bottom height \(z_{m}\) of the mixed layer. 
The resistance formulas can be expressed from the mean values at \(z=z_{m}\): \[\frac{U_{m}}{u_{*}} = \frac{1}{\kappa}\left[\log\left(\frac{z_{m}}{z_{o}}\right)-\psi_ {m}\left(\frac{z_{m}}{L},\frac{z_{m}}{z_{i}}\right)\right]\] \[\frac{\theta_{m}-\theta_{s}}{\theta_{*}} = \frac{1}{\kappa}\left[\log\left(\frac{z_{m}}{z_{o}}\right)-\psi_ {h}\left(\frac{z_{m}}{L},\frac{z_{m}}{z_{i}}\right)\right], \tag{7}\] noting that traditional approaches do not include the \(z/z_{i}\) correction for \(\psi\). If \(z_{m}\propto z_{i}\) is assumed, the mixed layer values depend directly on both \(\log\left(z_{i}/z_{o}\right)\) and \(\psi(z_{i}/L)\)(Garratt et al., 1982). Alternatively, if \(z_{m}\propto-L\) is assumed based on arguments of local free convection (Wyngaard et al., 1971), the approximate result depends solely on \(\log\left(-L/z_{o}\right)\)(Zilitinkevich et al., 1992; Tong and Ding, 2020; Liu et al., 2023). The results in Sect. 3.2 and Fig. 6 yield \(z_{m}\approx 0.4z_{i}\) for all LES cases based on the logarithmic decay of the gradients. Considering Eq. (6) aligns with the mean profiles up to the mixed layer in Figs. 7a and 7b, the present findings indicate that the mixed layer values in Eq. (7) depend on both \(z_{i}/z_{o}\) and \(z_{i}/L\) in a complex manner. The mixed layer mean values based on \(z_{m}=0.4z_{i}\) are shown in Fig. 9. Included for comparison are the predicted values from numerical integration of Eq. (6) up to \(z_{m}\). It may be possible to evaluate further the integral definition for \(\psi(z_{m}/L,z_{m}/z_{i})\)(see, e.g., Physick and Garratt, 1995; De Ridder, 2010), but for the present discussion the primary concern is the \(\log\left(z_{m}/z_{o}\right)\) term in Eq. (7) that is already uncoupled from \(\psi\). The velocity statistics in Fig. 9 are supplemented with results from two recent LES studies (Tong and Ding, 2020; Liu et al., 2023). The \(U_{m}\) values reported in Tong and Ding (2020) are based on a model parameter that differs from the mean mixed layer value (see, e.g., their Fig. 3), such that the \(U_{m}\) values used in Fig. 9 were inferred from their CBL profiles. The differing log-linear slope in Fig. 9a for the present LES and the cases from Liu et al. (2023) indicate that \(\log{(-L/z_{o})}\) is not the sole determinant of \(U_{m}/u_{*}\). In particular, increasing \(z_{i}/z_{o}\) leads to a vertical shift in the resistance value. The predicted values (closed symbols) for the external references use the same parameter values fitted to the present LES. The combination of the corrected similarity expression in Eq. (6) and the fixed height \(z_{m}=0.4z_{i}\) lead to an accurate prediction of the differing trends noted above for Fig. 9a. The alignment with the reference studies is promising regarding the potential generality of the exponential cutoff correction. The roughness dependence leading to a vertical shift in the mixed layer resistance value can be offset by plotting the diabatic term \(\psi_{m}\) as in Fig. 9c. In this format, the data are approximately aligned along a common curve, noting that the residual differences may be due in part to the spin-up procedure that can significantly affect \(L\) (Fig. 2) and differences between SGS models as observed in Tong and Ding (2020). Importantly, \(\psi\) will also vary with roughness as \(z_{o}\) becomes large, but \(z_{o}\ll z_{m}\) for these data and its contribution to \(\psi\) is negligible. 
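For reference, the mixed layer resistance values plotted in Fig. 9 follow directly from Eq. (7) once \(\psi\) is evaluated at \(z_{m}=0.4z_{i}\). A short sketch is given below; the placeholder value for \(\psi_{m}\) is illustrative only and would in practice come from a numerical integration of Eq. (6) as in Sect. 3.3.

```python
import numpy as np

KAPPA = 0.39

def mixed_layer_wind_resistance(zm, z0, psi_m):
    """Eq. (7): U_m / u_* evaluated from the mean profile at z = z_m."""
    return (np.log(zm / z0) - psi_m) / KAPPA

def mixed_layer_temp_resistance(zm, z0, psi_h):
    """Eq. (7): (theta_m - theta_s) / theta_* evaluated at z = z_m."""
    return (np.log(zm / z0) - psi_h) / KAPPA

# Example: case A with z_i ~ 1040 m and z_o = 0.1 m, using z_m = 0.4 z_i.
zm, z0 = 0.4 * 1040.0, 0.1
# psi_m = 2.5 is used purely as a placeholder value here.
print(mixed_layer_wind_resistance(zm, z0, psi_m=2.5))  # roughly U_m / u_* ~ 15
```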
Figure 9: Dependence of the mean velocity \(U_{m}\) and temperature \(\theta_{m}\) in the convective mixed layer. Rows correspond to the mean values versus \(L/z_{o}\) (a,b) and their diabatic terms versus \(z_{i}/L\) (c,d), and columns correspond to velocity (a,c) and temperature (b,d). Results are included for the present LES and two reference LES studies indicated by open symbols, where each are compared to predicted values based on Eq. (6) indicated by closed transparent symbols. Observed and predicted values are based on the mixed layer bottom height \(z_{m}=0.4z_{i}\). Symbol color corresponds to \(z_{i}/z_{o}\). Fig. 9 supports \(z_{m}\propto z_{i}\) and the dependence of \(U_{m}/u_{*}\) on both \(z_{i}/z_{o}\) and \(z_{i}/L\). The same conclusion cannot be made for temperature due to lack of independent data across a range of \(z_{i}/z_{o}\). However, the mixed layer temperature resistance is included for the present LES in Figs. 9b and 9d for completeness. ### Implications for the surface layer height The extended logarithmic decay of \(\phi\) as the gradients vanish to zero in Fig. 6 provide a consistent criterion for defining the bottom of the mixed layer where free convection occurs, but defining the top of the surface layer \(z_{SL}\) is more ambiguous. The \(z/z_{i}\) cutoff correction in Eq. 6 is non-negligible within the full range of heights analyzed here, from approximately \(0.05z_{i}\) up to \(z_{m}\), where the correction is required to account for simulation trends observed in Fig. 8 and in previous studies (e.g., Khanna and Brasseur, 1997; Pirozzoli et al., 2017; Li et al., 2018). Upon further considering the nonlocal contribution of large-scale eddies to the decay in the gradients at these heights (Li et al., 2018; Fodor et al., 2019), it is possible that the extended range from \(z_{SL}\) (not yet defined) to \(z_{m}\) is a transition region resulting from the coexistence of local and nonlocal eddies governed by different mechanisms and scales. Here, the exponential cutoff approximates the transition in the mean profiles from Monin-Obukhov similarity in the surface layer to the zero gradient condition characterizing the mixed layer. This transition is depicted in Fig. 10. Assuming the exponential cutoff and fitted constant \(c\) in Fig. 10a represents the appropriate correction to surface similarity, the correction is less than 10% only for heights in the lowest few percent of the CBL depth. For \(z_{i}\sim O(1\ \mathrm{km})\), these heights correspond to the lowest 30 m or so of the atmosphere where many field measurements are sampled. On one hand, this indicates that the correction is not significant for many previous field campaigns and explains why the original Businger-Dyer profiles align well with experimental data to within the uncertainty and scatter of the results. On the other hand, a 10% correction at \(0.03z_{i}\) is non-negligible and emphasizes the need to reconsider the conventional height for \(z_{SL}\). In Fig. 10, the approximation \(z_{SL}\approx 0.1z_{m}\approx 0.04z_{i}\) is applied under the assumption Figure 10: Depiction of the transition region between the top of the surface layer \(z_{SL}\) and the bottom of the mixed layer \(z_{m}\). (a) Value of the exponential cutoff in the gradient profiles for momentum and heat. 
(b) Contribution of both surface scaling and local free convection events to the mean gradients within the transition region, where the relative contribution of each varies with height in conjunction with the cutoff function in (a). that \(z_{SL}\ll z_{m}\) is required for the contribution of nonlocal eddies to the gradient - and the resulting correction factor - to be small. However, the correction is greater than 10% at this height such that a more stringent definition may be warranted. Regardless of the exact definition for \(z_{SL}\), the traditional estimate \(z_{SL}=0.1z_{i}\) is insufficient for convective conditions based on the growing body of evidence discussed in the introduction. The question of defining \(z_{SL}\) is complemented by recent evidence that mean profile similarity in the surface layer of the stable ABL also depends on the boundary layer depth (Heisel and Chamecki, 2023). While the flow structure for stable conditions is considerably different, the same general reasoning applies: z-less stratification above the surface layer indirectly influences the gradients below \(0.1z_{i}\). Here, free convection turbulence above the surface layer influences the gradients below \(0.1z_{i}\) in a more direct manner. The transition region depicted in Fig. 10b coincides with the range of heights previously discussed as a local free convection layer (Tennekes, 1970; Kader and Yaglom, 1990) or convection matching layer (Panofsky, 1978; Kaimal and Finnigan, 1994). However, there is a distinction in the observed scaling, at least for the first-order statistics. In the traditional local convection layer with \(-L\ll z\ll z_{i}\), the \(z\) scaling is still relevant but the velocity and temperature variables no longer depend on \(u_{*}\)(Wyngaard et al., 1971). In the present study, the exponential cutoff reduces the gradient based on the relative position within the transition region, but does not alter the scaling for \(\phi\) or \(\zeta\). As an analogy, the fluxes \(-\overline{u^{\prime}w^{\prime}}/u_{*}^{2}\) decay with \(z/z_{i}\) but scale with \(u_{*}\) throughout the CBL depth. In this sense, the gradients maintain their surface scaling up to \(z_{m}\) despite the incomplete similarity due to the \(z/z_{i}\) correction demonstrated in Fig. 8. This \(z/z_{i}\) correction corresponds to the presence of large-scale eddies such as downdrafts (Li et al., 2018) and updrafts (Fodor et al., 2019) whose governing scale \(w_{*}\) is oriented along \(z\)(Deardorff, 1970). Within the mixed layer, the ensemble of these predominately vertical motions results in zero mean gradient. If the free convection events that extend into the transition region also have a collective mean gradient close to 0, the events would contribute to a decrease in the overall \(\phi\) without incurring a statistically meaningful transition in scaling from \(u_{*}\) to \(w_{*}\). The decrease in \(\phi\) would then directly depend on the relative probability of free convection events that increases with height in a manner consistent with the exponential in Fig. 10a. Importantly, the same argument cannot be extended to the higher-order variance statistics and an analysis of the variances is outside the scope of the present work. There is uncertainty in Fig. 10 with respect to changes in the behavior with increasing instability. The present analysis suggests the exponential cutoff and parameter \(c\) do not vary with \(z_{i}/L\). 
As noted previously, this suggests the correction and the relative gradient contributions are independent of changes in the eddy topology from roll structures to vertical cells. Further, with \(z_{m}\propto z_{i}\) the surface-scaling region in Fig. 10b must increasingly resemble the mixed layer turbulent structure in order for the surface layer to vanish in free convection. These implications should be evaluated across a wider range of \(z_{i}/L\) and for a greater number of cases before conclusions are drawn. ## 5 Summary The present work uses a series of seven LES cases to study mean profile similarity in the lower half of the convective boundary layer. The cases represent a dry, barotropic idealized CBL under weak to moderately strong convection with mid-latitude Coriolis frequency, a stable capping inversion, and a well-defined mixed layer. An ad hoc spin-up procedure is used to mitigate inertial oscillations in the final simulations (Fig. 2), and the grid converge of near-surface profile statistics is closely examined (Figs. 3 and 4). The simulations reveal the same qualitative trends in the nondimensional gradients \(\phi\) seen in other recent simulation-based studies (Fig. 5): the results generally align with Monin-Obukhov similarity across the different cases, but the individual profiles each exhibit a steeper slope than predicted from existing similarity relations (Pirozzoli et al., 2017; Maronga and Reuder, 2017; Li et al., 2018). In other words, MOST captures variability in \(\phi\) across a range of \(L\) and fixed \(z\), but does not fully account for the variability across \(z\) for fixed \(L\). The behavior of the \(\phi\) profiles above the surface layer indicates that the steeper slope is associated with a broader trend in \(z/z_{i}\) that reduces the gradient towards 0 at the height of the mixed layer (Fig. 6). To account for this trend, the well-known Businger-Dyer profiles are revised in Eq. (6) with an exponential cutoff similar to corrections for similarity in the roughness sublayer (Garratt, 1980; Harman and Finnigan, 2007). The revised expressions, with fitted parameters \(c_{m}\approx 3.7\) and \(c_{h}\approx 2.9\) for the exponential term \(\exp{(-cz/z_{i})}\), result in significantly improved similarity for the mean (Fig. 7) and gradient (Fig. 8) profiles from approximately \(0.05z_{i}\) up to the mixed layer near \(0.4z_{i}\). The correction is expected to be small close to the surface where most point measurements are acquired in field experiments, which may explain why the consistent \(z/z_{i}\) trend seen in simulations is not readily apparent from field observations. Further, the parameter space probed by field measurements often spans a wide range of \(L\) at a limited series of fixed heights, which as noted above can yield curves that closely follow MOST relations. In addition to the improved similarity, there are three important implications arising from the revised relations in Eq. (6). First, the mean values in the mixed layer depend on both \(\log{(z_{i}/z_{o})}\) and \(\zeta\) (Fig. 9). Second, the exponential correction accounts for an extended transition region in the mean profiles between the surface layer and the mixed layer (Fig. 10), where this region is strongly influenced by large-scale buoyancy-driven eddies (Li et al., 2018; Fodor et al., 2019). 
Owing to the effect of these eddies, the correction only becomes small for \(z/z_{i}\sim O(0.01)\), such that the common assumption of \(0.1z_{i}\) for the surface layer height is too large for idealized convective conditions. Third, the start of the mixed layer at a fixed fraction of \(z_{i}\) (Fig. 6) suggests the surface layer turbulent structure changes with increasing convection in order to match the mixed layer under free convection, but extending the analysis to stronger convection is required to validate the last point. While the fitted results in Figs. 7 and 8 are promising, the present analysis includes a limited number of cases and lacks reliable statistics in the bottom half of the surface layer. Equation (6) is thus considered to be a preliminary effort to model the trends in \(\phi\) observed in Figs. 5 and 6. Several other functional forms were evaluated to more accurately account for the logarithmic decay in \(\phi\), but the collective profile data were found to be prone to overfitting, where the functional dependencies were borne by the fitted parameters. Accordingly, the proposed correction is limited to an extension of the widely-tested Businger-Dyer profiles, and the correction has only one nondimensional parameter (\(c_{m}\) or \(c_{h}\)) that has a clear physical interpretation corresponding to the height where the cutoff reaches a given magnitude. Further analysis with additional datasets may support a more sophisticated model that better matches the full gradient cutoff up to \(0.4z_{i}\) in Figs. 8a and 8b, but for the available data in the present study only a simpler model is warranted. ## Declarations The authors have no competing interests to declare. Profiles of the LES flow statistics will be published in a public repository following the initial peer review process. ###### Acknowledgements. The authors gratefully acknowledge high-performance computing support from Cheyenne (doi:10.5065/D6RX99HX) provided by the National Center for Atmospheric Research Computational Information Systems Laboratory. Additionally, M. H. acknowledges start-up support from the School of Civil Engineering at the University of Sydney and M. C. acknowledges partial funding support from the Biological and Environmental Research program of the U.S. Department of Energy (DE-SC0022072).
2305.19912
Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data
This paper presents Structure Aware Dense Retrieval (SANTA) model, which encodes user queries and structured data in one universal embedding space for retrieving structured data. SANTA proposes two pretraining methods to make language models structure-aware and learn effective representations for structured data: 1) Structured Data Alignment, which utilizes the natural alignment relations between structured data and unstructured data for structure-aware pretraining. It contrastively trains language models to represent multi-modal text data and teaches models to distinguish matched structured data for unstructured texts. 2) Masked Entity Prediction, which designs an entity-oriented mask strategy and asks language models to fill in the masked entities. Our experiments show that SANTA achieves state-of-the-art on code search and product search and conducts convincing results in the zero-shot setting. SANTA learns tailored representations for multi-modal text data by aligning structured and unstructured data pairs and capturing structural semantics by masking and predicting entities in the structured data. All codes are available at https://github.com/OpenMatch/OpenMatch.
Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, Ge Yu
2023-05-31T14:45:25Z
http://arxiv.org/abs/2305.19912v1
# Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data ###### Abstract This paper presents Structure Aware DeNse ReTrievAl (SANTA) model, which encodes user queries and structured data in one universal embedding space for retrieving structured data. SANTA proposes two pretraining methods to make language models structure-aware and learn effective representations for structured data: 1) Structured Data Alignment, which utilizes the natural alignment relations between structured data and unstructured data for structure-aware pretraining. It contrastively trains language models to represent multi-modal text data and teaches models to distinguish matched structured data for unstructured texts. 2) Masked Entity Prediction, which designs an entity-oriented mask strategy and asks language models to fill in the masked entities. Our experiments show that SANTA achieves state-of-the-art on code search and product search and conducts convincing results in the zero-shot setting. SANTA learns tailored representations for multi-modal text data by aligning structured and unstructured data pairs and capturing structural semantics by masking and predicting entities in the structured data. All codes are available at [https://github.com/OpenMatch/OpenMatch](https://github.com/OpenMatch/OpenMatch). ## 1 Introduction Dense retrieval has shown strong effectiveness in lots of NLP applications, such as open domain question answering (Chen et al., 2017), conversational search (Qu et al., 2020; Yu et al., 2021), and fact verification (Thorne et al., 2018). It employs pretrained language models (PLMs) to encode unstructured data as high-dimensional embeddings, conduct text matching in an embedding space and return candidates to satisfy user needs (Xiong et al., 2021; Karpukhin et al., 2020). Besides unstructured data, structured data, such as codes, HTML documents and product descriptions, is ubiquitous in articles, books, and Web pages, and plays the same important roles in understanding text data. Learning the semantics behind text structures to represent structured data is crucial to building a more self-contained retrieval system. The structured data modeling stimulates researchers to build several benchmarks to evaluate model performance, such as code search and product search (Husain et al., 2019; Reddy et al., 2022). The structured data retrieval tasks require models to retrieve structured data according to user queries. Dense retrieval (Karpukhin et al., 2020; Li et al., 2022) shows a promising way to build a retrieval system on structured data by encoding user queries and structured data in an embedding space and conducting text matching using the embedding similarity. Nevertheless, without structure-aware pretraining, most PLMs lack the necessary knowledge to understand structured data and conduct effective representations for retrieval (Feng et al., Figure 1: Dense Retrieval Pipeline on Structured Data. 2020; Hu et al., 2022; Gururangan et al., 2020). Lots of structure-aware pretraining methods are proposed to continuously train PLMs to be structure-aware and better represent structured data Wang et al. (2021); Feng et al. (2020). They design task-specific masking strategies and pretrain PLMs with mask language modeling. Nevertheless, only using mask language modeling may not sufficiently train PLMs to conduct effective representations for structured data Li et al. (2020); Fang et al. (2020). 
Some natural alignment signals between structured and unstructured data, such as code-description documentation and product description-bullet points, provide an opportunity to pretrain the structured data representations. Using these alignment signals, PLMs can be contrastively trained Wu et al. (2020); Karpukhin et al. (2020) to match the representations of aligned structured and unstructured data and understand the semantics of structured data with the help of natural language. In this paper, we propose **S**tructure **A**ware **DeN**se **Re**Triev**Al (SANTA), a dense retrieval method on structured data. As shown in Figure 1, SANTA encodes queries and structured data in an embedding space for retrieval. SANTA designs two pretraining tasks to continuously train PLMs and make PLMs sensitive to structured data. The Structured Data Alignment task contrastively trains PLMs to align matched structured-unstructured data pairs in the embedding space, which helps to represent structured data by bridging the modality gap between structured and unstructured data. The Masked Entity Prediction task masks entities and trains PLMs to fill in the masked parts, which helps to capture semantics from structured data. Our experiments show that SANTA achieves state-of-the-art in retrieving structured data, such as codes and products. By aligning structured and unstructured data, SANTA maps both structured and unstructured data in one universal embedding space and learns more tailored embeddings for multi-modal text data matching. The masked entity prediction task further guides SANTA to capture more crucial information for retrieval and better distinguish structured and unstructured data. Depending on these pretraining methods, SANTA can even achieve comparable retrieval results with existing code retrieval models without finetuning, showing that our structure-aware pretraining can benefit structured data understanding, multi-modal text data representation modeling and text data matching between user queries and structured data. ## 2 Related Work Dense retrieval Yu et al. (2021); Karpukhin et al. (2020); Xiong et al. (2021); Li et al. (2021) encodes queries and documents using pretrained language model (PLM) Devlin et al. (2019); Liu et al. (2019); Raffel et al. (2020) and maps them in an embedding space for retrieval. However, during retrieving candidates, the documents can be passages in natural language Nguyen et al. (2016); Kwiatkowski et al. (2019), images Chen et al. (2015), structured data documents Lu et al. (2021) or multi-modal documents Chang et al. (2021), which challenges existing dense retrieval models to handle different kinds of modalities of knowledge sources to build a self-contained retrieval system. Existing work Guo et al. (2021) also builds dense retrievers for retrieving structured data and mainly focuses on learning representations for code data. Leaning more effective representations with PLMs is crucial for dense retrieval Gao and Callan (2021); Luan et al. (2021), thus several continuous training models are proposed. They usually employ mask language modeling to train PLMs on structured data and help to memorize the semantic knowledge using model parameters Wang et al. (2021); Feng et al. (2020); Roziere et al. (2021). CodeBERT uses replaced token detection Clark et al. (2020) and masked language modeling Devlin et al. (2019) to learn the lexical semantics of structured data Lu et al. (2021). DOBF Roziere et al. 
(2021) further considers the characteristics of code-related tasks and replaces class, function and variable names with special tokens. CodeT5 Wang et al. (2021) not only employs the span mask strategy Raffel et al. (2020) but also masks the identifiers in codes to teach T5 Raffel et al. (2020) to generate these identifiers, which helps better distinguish and comprehend the identifier information in code-related tasks. Nevertheless, mask language modeling Devlin et al. (2019) may not sufficiently train PLMs to represent texts and shows less effectiveness in text matching tasks Chen and He (2021); Gao et al. (2019); Li et al. (2020); Reimers and Gurevych (2019); Li et al. (2020). The recent development of sentence representation learning methods has achieved convincing results Fang et al. (2020); Yan et al. (2021). These works first construct sentence pairs using back-translation Fang et al. (2020), simple deformation operations (Wu et al., 2020), original sequence cropping (Meng et al., 2021) or adding dropout noise (Gao et al., 2021). Then they contrastively train PLMs to learn sentence representations that can be used to distinguish the matched sentence pairs with similar semantics. ## 3 Methodology In this section, we introduce our **S**tructure **A**ware **DeN**se **Re**Triev**Al (SANTA) model. First, we introduce the preliminary of dense retrieval (Sec. 3.1). Then we describe our structure-aware pretraining method (Sec. 3.2). ### Preliminary of Dense Retrieval Given a query \(q\) and a structured data document \(d\), a dense retriever (Karpukhin et al., 2020; Xiong et al., 2021) encodes queries and structured data documents with pretrained language models (Devlin et al., 2019; Liu et al., 2019) and maps them into an embedding space for retrieval. Following previous work (Ni et al., 2022), we can use T5 (Raffel et al., 2020) to encode the query \(q\) and structured data document \(d\) as low dimensional representations \(h_{q}\) and \(h_{d}\), using the representation of the first token from the decoder: \[h_{q}=\texttt{T5}(q);\ h_{d}=\texttt{T5}(d). \tag{1}\] Then we can calculate the similarity score \(f(q,d)\) between the representations of query \(h_{q}\) and structured data document \(h_{d}\): \[f(q,d)=sim(h_{q},h_{d}), \tag{2}\] where \(sim\) is the dot product function to calculate the relevance between query \(q\) and structured data document \(d\). Finally, we can finetune the representations of query and document by minimizing the loss \(\mathcal{L}_{\text{DR}}\): \[\mathcal{L}_{\text{DR}}=-\log\frac{e^{f(q,d^{+})}}{e^{f(q,d^{+})}+\sum_{d^{-}\in\mathcal{D}^{-}}e^{f(q,d^{-})}}, \tag{3}\] where \(d^{+}\) is relevant to the given query \(q\), and \(\mathcal{D}^{-}\) is the collection of irrelevant structured data documents, which are sampled from in-batch negatives (Karpukhin et al., 2020) or hard negatives (Xiong et al., 2021). ### Structure Aware Pretraining Existing language models are usually pretrained on unstructured natural language with masked language modeling (Devlin et al., 2019; Liu et al., 2019). Nevertheless, these models struggle to understand the semantics represented by data structures, which limits the effectiveness of language models in representing structured data for retrieval (Feng et al., 2020; Wang et al., 2021). To obtain more effective representations for structured data, we propose structure-aware pretraining methods that aim to help language models better capture the semantics behind the text structures. 
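Before turning to the pretraining tasks, the retrieval objective of Sect. 3.1 (Eqs. (1)-(3)) can be sketched in a few lines of PyTorch. This is a hedged illustration assuming the Hugging Face `transformers` `T5Model` and in-batch negatives; it follows the description above (first decoder token as the sequence representation, dot-product similarity) but is not the released OpenMatch implementation, and the model name and example texts are placeholders.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, T5Model

tok = AutoTokenizer.from_pretrained("t5-base")
model = T5Model.from_pretrained("t5-base")

def encode(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    # feed a single decoder start token and take its hidden state as h (Eq. 1)
    start = torch.full((batch.input_ids.size(0), 1),
                       model.config.decoder_start_token_id, dtype=torch.long)
    out = model(input_ids=batch.input_ids,
                attention_mask=batch.attention_mask,
                decoder_input_ids=start)
    return out.last_hidden_state[:, 0]

queries = ["construct the command to poll the driver status"]
docs = ["def build_status_command(driver_id): ..."]   # matched structured data d+
h_q, h_d = encode(queries), encode(docs)
scores = h_q @ h_d.T                                  # f(q, d), Eq. (2)
# Eq. (3) with in-batch negatives: the i-th doc is the positive for the
# i-th query, all other docs in the batch serve as d^-
loss = F.cross_entropy(scores, torch.arange(scores.size(0)))
```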
As shown in Figure 2, we continuously finetune T5 using two pretraining tasks by minimizing the following loss function \(\mathcal{L}\): \[\mathcal{L}=\mathcal{L}_{\text{SDA}}+\mathcal{L}_{\text{MEP}}, \tag{4}\] where \(\mathcal{L}_{\text{SDA}}\) and \(\mathcal{L}_{\text{MEP}}\) are the loss functions of structured data alignment (SDA) (Sec. 3.2.1) and masked entity prediction (MEP) (Sec. 3.2.2), the two subtasks of our structure-aware language model pretraining method. Figure 2: The Structure-Aware Pretraining Methods of SANTA. We use both Structured Data Alignment (SDA) and Masked Entity Prediction (MEP) methods for pretraining. #### 3.2.1 Structured Data Alignment The structured data alignment task teaches language models to optimize the embedding space by aligning structured data with unstructured data. For the structured data document \(d\), there are usually some natural language passages that share the same semantics with \(d\), _e.g._ the descriptions of codes and the bullet points of products. With the help of these text passages \(p\) in natural language, we can enhance the model's ability to represent structured data by continuously training language models to align the semantics of structured and unstructured data. Through text data alignment, the representations of structured data benefit from the intrinsic natural language knowledge of pretrained language models. Specifically, we can use T5 to encode the text passage and structured data document as \(h_{p}\) and \(h_{d}\), respectively, calculate the similarity score \(f(p,d)\) between text passage \(p\) and structured data document \(d\), and then continuously train language models using the contrastive loss \(\mathcal{L}_{\text{SDA}}\): \[\begin{split}&\mathcal{L}_{\text{SDA}}=-\log\frac{e^{f(p,d^{+})}}{e^{f(p,d^{+})}+\sum_{d^{-}\in D^{-}}e^{f(p,d^{-})}}\\ &=-f(p,d^{+})+\log(e^{f(p,d^{+})}+\sum_{d^{-}\in D^{-}}e^{f(p,d^{-})}),\end{split} \tag{5}\] where \(D^{-}\) consists of the irrelevant structured data sampled from in-batch negatives. As shown in Eq. 5, the structured data alignment task helps to optimize the pretrained language models to assign similar embedding features to \(<p,d^{+}>\) pairs and to pull \(d^{-}\) away from \(p\) in the embedding space Wang and Isola (2020). Such a contrastive training method can bridge the semantic gap between structured and unstructured data and map them into one universal embedding space, which benefits learning representations of multi-modal text data Liu et al. (2023). #### 3.2.2 Masked Entity Prediction The masked entity prediction task guides the language models to better understand the semantics of structured data by recovering masked entities. SANTA masks entities for continuous training instead of using the random masking of mask language modeling Devlin et al. (2019); Raffel et al. (2020). As shown in previous work Sciavolino et al. (2021); Zhang et al. (2019), entity semantics show strong effectiveness in learning text data representations for retrieval. Thus, we first recognize the entities mentioned in the structured data document \(X_{d}=\{x_{1},\text{ent}_{1},x_{2},\text{ent}_{2},...,\text{ent}_{n}\}\) and mask them as the input for the T5 encoder module: \[X_{d}^{\text{mask}}=\{x_{1},\text{<mask>}_{1},x_{2},\text{<mask>}_{2},...,\text{<mask>}_{n}\}, \tag{6}\] where <mask>\({}_{i}\) is a special token denoting the \(i\)-th masked span. We replace occurrences of the same entity with the same special token. 
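A minimal sketch of this masking step is given below. The entity list, the regex-based matching, and the use of T5-style sentinel tokens as the `<mask>` symbols are illustrative assumptions; SANTA recognizes code identifiers and noun phrases as entities (Appendix A.3) and its preprocessing may differ.

```python
import re

def mask_entities(text, entities):
    token_of = {}                          # same entity -> same special token
    for ent in entities:
        token_of.setdefault(ent, f"<extra_id_{len(token_of)}>")
    masked = text
    for ent, tok in token_of.items():
        masked = re.sub(rf"\b{re.escape(ent)}\b", tok, masked)
    # target sequence Y_d pairs each special token with the entity it hides
    target = " ".join(f"{tok} {ent}" for ent, tok in token_of.items())
    return masked, target

code = "def copy_path(path, storage): return storage.copy(path)"
masked, target = mask_entities(code, ["path", "storage"])
# masked: "def copy_path(<extra_id_0>, <extra_id_1>): return <extra_id_1>.copy(<extra_id_0>)"
# target: "<extra_id_0> path <extra_id_1> storage"
```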
Then we continuously train T5 to recover these masked entities using the following loss function: \[\mathcal{L}_{\text{MEP}}=\sum_{j=1}^{k}-\log P(Y_{d}(t_{j})|X_{d}^{\text{mask }},Y_{d}(t_{1,...,j-1})), \tag{7}\] where \(Y_{d}(t_{j})\) denotes the \(j\)-th token in the sequence \(Y_{d}\). And \(Y_{d}=\{\text{<mask>}_{1},\text{ent}_{1},...,\text{<mask>}_{n},\text{ent}_{n}\}\) denotes the ground truth sequence that contains masked entities. During training, we optimize the language model to fill up masked spans and better capture entity semantics by picking up the necessary information from contexts to recover the masked entities, understanding the structure semantics of text data, and aligning coherent entities in the structured data Ye et al. (2020). ## 4 Experimental Methodology In this section, we describe the datasets, evaluation metrics, baselines, and implementation details in our experiments. **Dataset.** The datasets in our experiments consist of two parts, which are used for continuous training and finetuning, respectively. _Continuous Training._ During continuous training, two datasets, CodeSearchNet Husain et al. (2019) and ESCI (large) Reddy et al. (2022), are employed to continuously train PLMs to conduct structure-aware text representations for codes and shopping products. In our experiments, we regard code documentation descriptions and product bullet points as unstructured data for aligning structured data, codes and product descriptions, during training. More details of pretraining data processing are shown in Appendix A.2. _Finetuning._ For downstream retrieval tasks on structured data, we use Adv (Lu et al., 2021), and ESCI (small) (Reddy et al., 2022) to finetune models for code search and product search, respectively. All data statistics are shown in Table 1. Each query in ESCI (small) has 20 products on average, which are annotated with four-class relevance labels: Exact, Substitute, Complement, and Irrelevant. We also establish a two-class testing scenario by only regarding the products that are annotated with the Exact label as relevant ones. **Evaluation Metrics.** We use MRR@100 and NDCG@100 to evaluate model performance, which is the same as the previous work (Lu et al., 2021; Reddy et al., 2022; Feng et al., 2020). **Baselines.** We compare SANTA with several dense retrieval models on code search and product search tasks. We first employ three pretrained language models to build dense retrievers for structured data retrieval, including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2020), which are widely used in existing dense retrieval models (Karpukhin et al., 2020; Xiong et al., 2021; Ni et al., 2022). All these models are trained with in-batch negatives (Karpukhin et al., 2020). For the code search task, we also compare SANTA with three typical and task-specific models, CodeBERT (Feng et al., 2020), CodeT5 (Wang et al., 2021) and CodeRetriever (Li et al., 2022). CodeBERT inherits the BERT architecture and is trained on code corpus using both mask language modeling and replaced token detection. CodeT5 employs the encoder-decoder architecture for modeling different code-related tasks and teaches the model to focus more on code identifiers. CodeRetriever is the state-of-the-art, which continuously trains GraphCodeBERT (Guo et al., 2021) with unimodal and bimodal contrastive training losses. **Implementation Details.** This part describes the experiment details of SANTA. 
We initialize SANTA with T5-base and CodeT5-base for product search and code search. For masked entity prediction, we regard code identifiers and some noun phrases as entities in codes and product descriptions, respectively. More details about identifying entities are shown in Appendix A.3. During continuous training, we set the learning rate as 1e-4 and 5e-5 for product search and code search, and the training epoch as 6. During finetuning, we conduct experiments by training SANTA using inbatch negatives and hard negatives. we set the training epoch to 60 and learning rate to 5e-5 for product search, while the training epoch and learning rate are 6 and 1e-5 for code search. And we follow ANCE (Xiong et al., 2021), start from inbatch finetuned SANTA (Inbatch) model and continuously finetune it with hard negatives to conduct the SANTA (Hard Negative) model. The learning rates are set to 1e-5 and 1e-6 for product search and code search. These hard negatives are randomly sampled from the top 100 retrieved negative codes/product descriptions from the SANTA (Inbatch) model. All models are implemented with PyTorch, Huggingface transformers (Wolf et al., 2019) and OpenMatch (Yu et al., 2023). We use Adam optimizer to optimize SANTA, set the batch size to 16 and set the warmup proportion to 0.1 in our experiments. ## 5 Evaluation Results In this section, we focus on exploring the performance of SANTA on code search and product search tasks, the advantages of SANTA in representing structured data, and the effectiveness of proposed pretraining methods. ### Overall Performance The performance of SANTA on structured data retrieval is shown in Table 2. SANTA shows strong zero-shot ability by comparing its performance with finetuned models and achieving 6.8% improvements over finetuned CodeT5 on code search. Such impressive improvements demonstrate that our pretrained strategies have the ability to enable the advantages of PLMs in representing structured data without finetuning. After finetuning, SANTA maintains its advantages by achieving about 8% and 2% improvements over CodeT5 and T5 on code search and \begin{table} \begin{tabular}{l|r|r r} \hline \multirow{2}{*}{**Split**} & \multicolumn{2}{c}{**Code Search**} & \multicolumn{2}{c}{**Product Search**} \\ \cline{2-4} & Query-Code Pair & Query & Product \\ \hline Train & 251,820 & 18,277 & 367,946 \\ Dev & 9,604 & 2,611 & 51,706 \\ Test & 19,210 & 8,956 & 181,701 \\ \hline \end{tabular} \end{table} Table 1: Data Statistics of Model Finetuning. product search, respectively. It shows the critical role of structure-aware pretraining, which makes language models sensitive to text data structures and better represents structured data. On code retrieval, SANTA outperforms the state-of-the-art code retrieval model CodeRetriever with 4.3% improvements under the same inbatch training setting. SANTA also beats CodeRetriever (AR2), which is finetuned with more sophisticated training strategies (Zhang et al., 2022) and the larger batch size. Besides, we show the retrieval performance of SANTA on CodeSearch dataset in Appendix A.4. ### Ablation Study In this subsection, we conduct ablation studies to further explore the roles of different components in SANTA on retrieving structured data. We start from CodeT5/T5 models and continuously train CodeT5/T5 using two proposed training tasks, Masked Entity Prediction (MEP) and Structured Data Alignment (SDA) to show their effectiveness in teaching models to better learn semantics from structured data. 
Meanwhile, we compare MEP with the random span masking strategy (Raffel et al., 2020; Wang et al., 2021) to evaluate the effectiveness of different masking strategies. The retrieval performance in both zero-shot and finetuning settings is shown in Table 3. Compared with our baseline model, MEP and SDA show distinct performance in structured data retrieval. As expected, MEP shows almost the same performance as the baseline model. It shows that only mask language modeling usually shows less effectiveness in learning representations for structured data, even using different masking strategies. Different from MEP, SDA shows significant improvements in both structured data retrieval tasks, especially the code retrieval task. Our SDA training method contrastively trains T5 models using the alignment relations between structured data and unstructured data, which helps to bridge the modality gap between structured and unstructured data, maps structured and unstructured data in one universal embedding space, and learns more effective representations for retrieval. When adding additional task MEP to T5 (w/ SDA), the retrieval performance of SANTA is consistently improved. This phenomenon shows that mask language modeling is still effective to teach T5 to better capture the structure semantics and conduct more effective text representations for structured data by filling up the masked entities of structured data. We also compare different masking strategies that are used during mask language modeling. Our entity masking strategy usually outperforms the random span masking strategy, showing the crucial role of entities in structured data understanding. With the masked entity prediction task, SANTA achieves comparable ranking performance with finetuned models, which \begin{table} \begin{tabular}{l|c c c} \hline \hline & \multicolumn{2}{c}{**Code**} & \multicolumn{2}{c}{**Product**} \\ \cline{2-4} **Model** & \multicolumn{2}{c}{MRR} & \multicolumn{2}{c}{NDCG} \\ \cline{2-4} & \multicolumn{1}{c}{MRR} & Two-C & Four-C \\ \hline \multicolumn{4}{l}{_Zero-Shot_} \\ \hline BERT (Devlin et al., 2019) & 0.20 & 71.46 & 72.45 \\ RoBERTa (Liu et al., 2019) & 0.03 & 71.25 & 72.24 \\ CodeBERT (Feng et al., 2020) & 0.03 & - & - \\ CodeRetriever (Li et al., 2022) & 34.7 & - & - \\ T5 (Raffel et al., 2020) & 0.03 & 70.21 & 71.25 \\ CodeT5 (Wang et al., 2021) & 0.03 & - & - \\ SANTA & **46.1** & **76.38** & **77.14** \\ \hline \multicolumn{4}{l}{_Fine-Tuning_} \\ \hline BERT (Devlin et al., 2019) & 16.7 & 78.29 & 79.06 \\ RoBERTa (Liu et al., 2019) & 18.3 & 79.59 & 80.29 \\ CodeBERT (Feng et al., 2020) & 27.2 & - & - \\ CodeRetriever & 43.0 & - & - \\ CodeRetriever (AR2) (Li et al., 2022) & 46.9 & - & - \\ T5 (Raffel et al., 2020) & 23.8 & 79.77 & 80.46 \\ CodeT5 (Wang et al., 2021) & 39.3 & - & - \\ SANTA (Inbatch) & 47.3 & 80.76 & 81.41 \\ SANTA (Hard Negative) & **47.5** & **82.59** & **83.15** \\ \hline \hline \end{tabular} \end{table} Table 2: Retrieval Effectiveness of Different Models on Structured Data. For product search, there are two ways to evaluate model performance. Two-C regards the query-product relevance as two classes, Relevant (1) and Irrelevant (0). Four-C is consistent with the ESCI dataset (Reddy et al., 2022) and sets the relevance labels with the following four classes: Exact (1), Substitute (0.1), Complement (0.01), and Irrelevant (0). 
\begin{table} \begin{tabular}{l|c|c c} \hline \hline & \multicolumn{2}{c}{**Code**} & \multicolumn{2}{c}{**Product**} \\ \cline{2-4} **Model** & \multicolumn{2}{c}{MRR} & \multicolumn{2}{c}{NDCG} \\ \cline{2-4} & \multicolumn{1}{c}{Two-C} & Four-C \\ \hline \multicolumn{4}{l}{_Zero-Shot_} \\ \hline T5 (Baseline) & 0.03 & 70.21 & 71.25 \\ T5 (w/ MEP) & 0.03 & 70.56 & 71.58 \\ T5 (w/ SDA) & 45.01 & 76.64 & 77.40 \\ SANTA (Span Mask) & 35.88 & **77.37** & **78.11** \\ SANTA (Entity Mask) & **46.08** & 76.38 & 77.14 \\ \hline \multicolumn{4}{l}{_Fine-Tuning_} \\ \hline T5 (Baseline) & 39.30 & 79.77 & 80.46 \\ T5 (w/ MEP) & 38.46 & 79.50 & 80.29 \\ T5 (w/ SDA) & 46.98 & 80.42 & 81.11 \\ SANTA (Span Mask) & 42.11 & 80.31 & 80.99 \\ SANTA (Entity Mask) & **47.28** & **80.76** & **81.41** \\ \hline \hline \end{tabular} \end{table} Table 3: The Retrieval Performance of Ablation Models of SANTA on Structured Data Retrieval. Masked Entity Prediction (MEP) and Structured Data Alignment (SDA) are two pretrained tasks that are proposed by SANTA. aware pretraining is starting to benefit downstream tasks, such as structured data retrieval. The next experiment further explores how these pretraining strategies guide models to learn representations of structured/unstructured data. ### Embedding Visualization of Structured and Unstructured Data This section further explores the characteristics of embedding distributions of structured and unstructured data learned by SANTA. As shown in Figure 3, we first conduct experiments to show the retrieval effectiveness of CodeT5 and SANTA under the zero-shot setting. The ranking probability distribution of relevant query-code pairs is shown in Figure 3(a). Even though CodeT5 is pretrained with code text data, it seems that CodeT5 learns ineffective representations for structured data, assigns a uniform ranking probability distribution for all testing examples and fails to pick up the related structured data for the given queries. On the contrary, SANTA assigns much higher ranking probabilities to matched structured documents, demonstrating that our structured data alignment task has the ability to guide the model to conduct more effective text data representations to align queries with its relevant structured documents. Then we plot the embedding distribution of structured data in Figure 3(b). Distinct from the embedding distribution of CodeT5, the embeddings learned by SANTA, are more distinguishable and uniform, which are two criteria of learning more effective embedding space under contrastive training (Li et al., 2021; Wang and Isola, 2020). Then we present the embedding distribution of documentation texts and their corresponding codes in Figure 4. Overall, depending on our structure-aware pretraining methods, SANTA conducts a more uniform embedding space than CodeT5 and makes the representations of structured and unstructured data more distinguished in the embedding space. Then we analyze the effectiveness of our continuous training methods, Masked Entity Prediction (MEP) and Structured Data Alignment (SDA). By comparing Figure 4(b) with Figure 4(a), our structured data alignment task indeed helps PLMs to align the representations of code and documentation, which reduces the distance between matched unstructured-structured data pairs and mixes the multi-modal embeddings thoroughly in the embedding space. 
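The projection behind these embedding plots can be reproduced with a standard t-SNE call; the sketch below uses placeholder embedding matrices and illustrative t-SNE settings, since the exact parameters used for Fig. 4 are not specified in the text.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

code_emb = np.random.randn(32, 768)   # placeholder for 32 encoded code snippets
doc_emb = np.random.randn(32, 768)    # placeholder for 32 encoded documentation texts

points = TSNE(n_components=2, perplexity=15, init="pca",
              random_state=0).fit_transform(np.vstack([code_emb, doc_emb]))
plt.scatter(points[:32, 0], points[:32, 1], label="code")
plt.scatter(points[32:, 0], points[32:, 1], label="documentation")
plt.legend()
plt.show()
```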
After adding the masked entity prediction training task to CodeT5 (w/ SDA) (from Figure 4(b) to Figure 4(d)), the embedding distributions of code and documentation become distinguished again, demonstrating that masked entity prediction can help models capture different semantics from different data modalities to represent unstructured/structured data. Besides, by comparing Figure 4(d) with Figure 4(c), the structured data alignment task also makes the boundary of the embedding clusters of code and documentation clearer. The main reason lies in that these embeddings are assigned to appropriate positions for aligning matched code-documentation pairs with the help of our structured data alignment task. Figure 4: Embedding Visualization of Different Models using T-SNE. We randomly sample 32 codes and 32 code documentation texts from the testing set of code retrieval and plot their embedding distribution. Figure 3: Retrieval Effectiveness on Code Search. We sample several query-code pairs from the test split of code search data and show the ranking probability distribution of query-related codes in Figure 3(a). Then Figure 3(b) presents the learned embedding space of structured data of codes. ### Attention Mechanism of SANTA This section presents the attention mechanism of SANTA during encoding structured data. In Figure 5, we randomly sample a small piece of code and a text sequence of product descriptions to plot the attention distribution. The attention weight distributions on code search are shown in Figure 5(a). Compared with CodeT5, CodeT5 (w/ SDA) and SANTA calibrate the attention weights from the "if" token to the ">" token. The ">" token is a logical operation, which indicates the usage of the code. SANTA thrives on the structured data alignment task and captures these important semantic clues to represent codes. Compared with CodeT5 (w/ SDA), SANTA decreases its attention weights on code identifiers, such as "x" and "y", and shares more attention weights to "If" and ">". These identifiers can be replaced with attribute ones and are less important than these logical operations to understand code semantics. Thus, SANTA adjusts its attention weights to logical tokens to understand structured data, which is benefited from pretraining with the masked entity prediction task. Figure 5(b) shows the attention distribution on product search. T5 (w/ SDA) assigns more attention weights to the product attribute "Green" than T5, as well as highlights the sequence boundary tokens of product attributes. Nevertheless, for the product "privacy fence screen", "Large" is a more important attribute than "Green". SANTA captures such semantic relevance, which confirms that our masked entity prediction task indeed helps to improve the semantic understanding ability of language models on structured data. \begin{table} \begin{tabular}{l|l|l} \hline **Model** & **SANTA** & **CodeT5/15** \\ \hline **Query** & Construct the command to poll the driver status & \\ \hline Rank & 1 & \\ \hline Snippet &... arg\_0.. connection [ ’master’ ] ] if arg\_0 & def Func ( arg\_0 ) : return os. path. join (.. driver\_id : arg\_1 += [ ’-status’, arg\_0 ] & get\_user\_config\_dir ( arg\_0. app\_name, arg\_0 \\ & _driver\_id ] else : raise AirflowException ( ”- &. app\_author ), arg\_0. filename ) \\ & Invalid status: attempted to poll driver... & \\ \hline **Query** & Attempt to copy path with storage. & 1 \\ \hline Rank & 1 & 1 \\ \hline Snippet &... if arg\_2 in arg\_0. copied\_files : return arg\_0. &... 
arg\_0 ) : if arg\_0._api\_arg : arg\_1 = str ( arg\_0 \\ & log ( ”Skipping ’\%s” (already –copied earlier)” ) \% &... _api\_arg ) else : arg\_1 = arg\_0. & _name if arg\_0. \\ & arg\_1 ) if not arg\_0. delete\_file ( arg\_1, arg\_2, arg\_3 ) : return arg\_4 = arg\_3. & _parent : return ’/. join ( filter ( None, [ arg\_0. \\ & & \\ \hline **Query** & \#1 black natural hair dye without ammonia or peroxide & \\ \hline Rank & 1 & 1 \\ \hline Snippet &... autcorlor Haircolor Hair Dye - Light Burdock, &... Autstrint Permanent Hair Color 5N Light Chestnut Brown (Pack of 1), Ammonia Free, Vegan, Crutly Free, up to 100\% Gray Coverage, Long Lasting Results... \\ \hline **Query** & \textbackslash{}�een fence without holes & 2 \\ \hline Rank & 2 &... Windscreen Cover Fabric Shade Tarp Netting \\ \hline Snippet &... Material: HDPE+Brass Color: Green Size(L x &... Webscreen Cloft - Commercial Grade 170 GSM - Cable \\ & W): About 6’x50” Package included: Garden fence & Mesh Cloth - Commercial Grade 170 GSM - Cable \\ & privacy screen*1 Straps*80... & Zip Ties Included - We Make Custom Size.. \\ \hline \end{tabular} \end{table} Table 4: Case Studies. We sample four cases from the test datasets of code search and product search to show the effectiveness of SANTA. The matched text phrases are highlighted. Figure 5: Visualization of Attention Distribution of SANTA. The cross attention weight distributions from the decoder module to encoded token embeddings are plotted. Darker blue indicates a higher attention weight. ### Case Studies Finally, we show several cases in Table 4 to analyze the ranking effectiveness of SANTA. In the first case, SANTA directly matches queries and codes through the text snippet "poll the driver status". It demonstrates that SANTA has the ability to distinguish the differences between code and documentation and pick up the necessary text clues for matching queries and codes. Then the second case illustrates that SANTA is effective in understanding codes by capturing the structure semantics of codes and matching queries and codes by capturing some keywords in codes, such as "copied" and "path". The last two cases are from product search and the product description is more like natural language. SANTA also shows its effectiveness on identifying some important entities, such as "Hair Dye" and "fence screen", to match queries and products. ## 6 Conclusion This paper proposes SANTA, which pretrains language models to understand structure semantics of text data and guides language models to map both queries and structured texts in one universal embedding space for retrieval. SANTA designs both structured text alignment and masked entity prediction tasks to continuously train pretrained language models to learn the semantics behind data structures. Our experiments show that SANTA achieves state-of-the-art on code and product search by learning more tailored representations for structured data, capturing semantics from structured data and bridging the modality gap between structured and unstructured data. ## Limitations Even though SANTA shows strong effectiveness on learning the representation of structured data, it heavily depends on the alignment signals between structured and unstructured data. Such alignment relations can be witnessed everywhere, but the quality of constructed pairs of structured and unstructured data directly determines the effectiveness of SANTA. 
Besides, we use the product bullet points and code descriptions as the unstructured data in our experiments, which is designed for specific tasks and limits the model's generalization ability. On the other hand, SANTA mainly focuses on evaluating the structured data understanding ability through text data representation and matching. It is still unclear whether SANTA outperforms baseline models in all downstream tasks, such as code summarization and code generation. ## Acknowledgments This work is supported by the Natural Science Foundation of China under Grant No. 62206042, No. 62137001 and No. 62272093, the Fundamental Research Funds for the Central Universities under Grant No. N2216013 and No. N2216017, China Postdoctoral Science Foundation under Grant No. 2022M710022, and National Science and Technology Major Project (J2019-IV-0002-0069).
2309.04328
CR-ENTREES -- Cosmic-Ray ENergy TRansport in timE-Evolving astrophysical Settings
In order to understand observable signatures from putative cosmic-ray (CR) sources in-source acceleration of particles, their energy and time-dependent transport including interactions in an evolving environment and their escape from source have to be considered, in addition to source-to-Earth propagation. We present the code CR-ENTREES (Cosmic-Ray ENergy TRansport in timE-Evolving astrophysical Settings) that evolves the coupled time- and energy-dependent kinetic equations for cosmic-ray nucleons, pions, muons, electrons, positrons, photons and neutrinos in a one-zone setup of (possibly) non-constant size, with user-defined particle and photon injection laws. All relevant interactions, particle/photon escape and adiabatic losses are considered in a radiation-dominated, magnetized astrophysical environment that is itself evolving in time. Particle and photon interactions are pre-calculated using event generators assuring an accurate interactions and secondary particle production description. We use the matrix multiplication method for fast radiation and particle energy transport which allows also an efficient treatment of transport non-linearities due to the produced particles/photons being fed back into the simulation chain. Examples for the temporal evolution of the non-thermal emission from AGN jet-like systems with focus on proton-initiated pair cascades inside an expanding versus straight jet emission region, are further presented.
A. Reimer, L. Merten, M. Boughelilba, P. Da Vela, S. Vorobiov, J. P. Lundquist
2023-09-08T13:48:13Z
http://arxiv.org/abs/2309.04328v1
# CR-ENTREES - Cosmic-Ray ENergy TRansport in time-Evolving astrophysical Settings ###### Abstract In order to understand observable signatures from putative cosmic-ray (CR) sources in-source acceleration of particles, their energy and time-dependent transport including interactions in an evolving environment and their escape from source have to be considered, in addition to source-to-Earth propagation. We present the code CR-ENTREES (Cosmic-Ray ENergy TRansport in timE-Evolving astrophysical Settings) that evolves the coupled time- and energy-dependent kinetic equations for cosmic-ray nucleons, pions, muons, electrons, positrons, photons and neutrinos in a one-zone setup of (possibly) non-constant size, with user-defined particle and photon injection laws. All relevant interactions, particle/photon escape and adiabatic losses are considered in a radiation-dominated, magnetized astrophysical environment that is itself evolving in time. Particle and photon interactions are pre-calculated using event generators assuring an accurate interactions and secondary particle production description. We use the matrix multiplication method for fast radiation and particle energy transport which allows also an efficient treatment of transport non-linearities due to the produced particles/photons being fed back into the simulation chain. Examples for the temporal evolution of the non-thermal emission from AGN jet-like systems with focus on proton-initiated pair cascades inside an expanding versus straight jet emission region, are further presented. + Footnote †: journal: PoS: 38th ICRC 2023 ## 1 Introduction With more than 60% of all the detected sources in the \(\gamma\)-ray sky belonging to the class of jetted active galactic nuclei (AGN), and their proposed contribution to the PeV neutrino and ultra-high energy cosmic-ray (UHECR) sky, a deep exploration of the multi-messenger nature of this source class is central. Where and how are the high-energy messengers produced? How are the charged particles accelerated to such extreme energies and how can they escape the magnetized jet environment? What is the overall composition of the jet? A tool that supports the investigation of these questions is presented here: CR-ENTREES, a code for fully time-dependent Cosmic-Ray ENergy TRansport in timE-Evolving astrophysical Settings. CR-ENTREES is used as the base code for the heavy nuclei propagation code of [9] (see this proceedings). ## 2 Propagation Physics CR-ENTREES solves the following nonlinear system of coupled (integro-differential) Fokker-Planck transport equations: \[\partial_{t}F_{N}+\dot{F}_{N}^{\rm esc}+\partial_{E}[(\dot{E}_{ \rm loss}F_{N})]+\dot{F}_{N}^{\rm dec}=Q_{N}^{\rm inj,pr}\] \[\partial_{t}F_{\mu,\pi,K}+\dot{F}_{\mu,\pi,K}^{\rm esc}+\partial_ {E}[(\dot{E}_{\rm loss}F_{\mu,\pi,K})]+\dot{F}_{\mu,\pi,K}^{\rm dec}=\dot{F}_{ \mu,\pi,K}^{p\gamma;h}\] \[\partial_{t}F_{e}+\dot{F}_{e}^{\rm esc}+\partial_{E}[(\dot{E}_{ \rm loss}F_{e})]=Q_{e}^{\rm inj,pr}+\dot{F}_{e}^{\gamma\gamma}+\dot{F}_{e}^{p \gamma}\] \[\partial_{t}F_{\gamma}+\dot{F}_{\gamma}^{\rm esc}+\dot{F}_{ \gamma}^{\gamma\gamma}=\dot{F}_{\gamma}^{\rm em}+\dot{F}_{\gamma}^{p\gamma;h}\] Here, \(F_{\rm X}=F_{\rm X}(E,t)\) is the energy (E) and time (t) dependent density of the cosmic-ray (CR) nucleons \(X=N\), electrons and positrons \(X=e\), muons, pions, kaons \(X=\mu,\pi,K\), respectively, and photons \(X=\gamma\). 
\(Q_{N,e}^{\rm inj,pr}=Q_{N,e}^{\rm inj,pr}(E,t)\) describes the source function of the primary nucleons (N) and pairs (e), and \(\dot{E}_{\rm loss}=\dot{E}_{\rm loss}(F_{\gamma}(\epsilon,t),B(t);E,t)\) the continuous loss processes that potentially depend on the (possibly evolving) magnetic field strength \(B(t)\) and a energy (\(\epsilon\)) and time dependent target radiation field. Injection of secondary particles and photons produced in hadronic (h) and electromagnetic (em) particle-photon, photon-photon and particle-field interactions are considered with the density rates \(\dot{F}_{e}^{p\gamma}=\dot{F}_{e}^{p\gamma}(F_{\gamma}(\epsilon,t);E,t)\), \(\dot{F}_{\mu,\pi,K}^{p\gamma;h}\), \(\dot{F}_{e}^{\gamma\gamma}=\dot{F}_{e}^{\gamma\gamma}(F_{\gamma}(\epsilon,t);E,t)\), \(\dot{F}_{\gamma}^{\gamma\gamma}=\dot{F}_{\gamma}^{\gamma\gamma}(F_{\gamma}( \epsilon,t);\epsilon,t)\), \(\dot{F}_{\gamma}^{p\gamma}=\dot{F}_{\gamma}^{p\gamma}(F_{\gamma}(\epsilon,t); \epsilon,t)\). The rigorous treatment of a time-evolving emission volume, magnetic field and target radiation field within the present setup causes the non-linearity of this transport equation system. The user of CR-ENTREES can choose to consider the continuous loss processes inverse Compton scattering, Bethe-Heitler pair production and synchrotron radiation of all charged particles, and the catastrophic loss processes photomeson production, particle decay and escape. For a conical jet setup adiabatic losses are taken into account as well. We note that in the absence of nucleons the system becomes identical to a Synchrotron-Self Compton model. ## 3 The Model While CR-ENTREES (thanks to its modular implementation) is flexible to be adapted to a large range of geometrical setups where CR transport occurs, its first purpose is to describe a moving homogeneous one-zone emission region within a straight or conical outflow/jet. Here, CR transport is treated in the co-moving frame. Hence, all input parameters (except the outflow speed - see below) are considered in this frame. ### Input: Characterizing geometry and environment The moving (with given constant speed \(\beta_{\rm J}c\)) spherical emission region of fixed or time-evolving (in case of a conical outflow) radius \(R\) contains a homogeneous magnetic field of strength \(B\) and (assumed) isotropic target radiation field for particle-photon interactions. Magnetic field evolution is currently implemented with a \(B(t)\propto t^{-1}\) scaling (with \(t\) the co-moving propagation time) in case of a conical jet setup. The target photon field density distribution is discretized on a fixed 161 log-equal spacing energy grid in the range \(10^{-10\ldots 6}\)eV. A (diluted) blackbody with given temperature \(T\) or power-law spectra (normalized using a given energy density) with up to 2 break energies (and corresponding power-law indices) can be chosen to fill this target field. Alternatively, this field can be filled for each energy bin by the user. The total target radiation field is then determined from the internal (jet) radiation field (calculated in each time step) and the user-defined target field. ### Input: Characterizing particle injection Power-law spectra (of possibly exponential cutoff) of any particle type (or photon) within a user-defined energy range and index are injected into the simulation chain. These injection spectra (and all particle spectra during propagation) are discretized on a fixed 300 log-equal spacing energy grid in the range \(10^{-3\ldots 12}\)GeV. 
Currently, up to 2 particle populations can be injected. For their normalization the number ratio of these populations and the particle-to-field energy density ratio must be provided by the user. All input parameter values are provided by the user in a dedicated steering file. ### Energy loss processes and secondary particle production We use Monte Carlo event generators (photomeson production and decay processes: [1]; Bethe-Heitler pair production: modified version of [2]; inverse Compton scattering and photon-photon pair production: [3]) to pre-calculate the yields and interaction rates (which are stored in HDF5-files) of each interaction type, and discretized on a fixed 300 log-equal spacing energy grid in the range \(10^{-3\ldots 12}\)GeV for a range of energies for the target particle/photon. The corresponding yield and interaction rate for the (assumed isotropic) target radiation field in each time step are then determined by convolving over this target field. Synchrotron radiation yields and corresponding loss rates (using pitch-angle averaged terms) of all produced charged particles (i.e., charged pions, muons, kaons and electrons/positrons) are calculated following [4], [5], and [6] (for the self-absorption process). Note that particles suffering from continuous losses in combination with catastrophic losses on a fixed energy grid require a dedicated numerical treatment to let these particles move down this energy grid from high to low energies. The adiabatic loss time scale for particles of Lorentz factor \(\gamma\) in a conical jet of opening angle \(0.26/\Gamma_{\rm J}\)[7], with \(\Gamma_{\rm J}=(1-\beta_{\rm J}^{2})^{-0.5}\) the bulk Lorentz factor of the moving emission region, is calculated to \(t_{\rm ad}=\frac{\gamma^{2}}{\gamma^{2}-1}t\left(1-\frac{R_{0}}{R(t)}\right)^ {-1}\) with \(R_{0}\) the size of the emission region at propagation time \(t=0\). Note the decrease of the adiabatic loss rate with increasing propagation time. Finally, neutral particles (including photons) escape on a time scale \(t_{\rm esc,n}=\frac{3}{4}R(t)/c\), while for the charged particles' escape time scale \(t_{\rm esc,c}=\eta t_{\rm esc,n}\) with \(\eta\geq 1\), is used. All particle energy, adiabatic and escape losses are tracked as well throughout the entire simulation chain to allow for the verification of energy conservation in each simulation time step. ### Propagation method We propagate \(\gamma\)-ray, proton, neutron, electronic pair, muon, pion, kaon, muon and electron neutrino populations, discretized on a fixed 300 log-equal spacing energy grid in the range \(10^{-3\ldots 12}\)GeV, and lower energy photons on a fixed 300 log-equal spacing energy grid in the range \(10^{-18\ldots-3}\)GeV, using the matrix multiplication method of [3], [8], [2] in the framework of CR transport. Here, transfer matrices are created from the aforementioned yields and interaction probabilities which describe the change after a given time step \(\delta t\) of the density of a given particle type upon all the interactions pre-set by the user. Such explicit integration scheme, while extremely fast, requires to use time steps that are smaller than the smallest time scale of the system (the Courant-Friedrichs-Levy condition). Still, with the matrix doubling method of [8] applied, we found correct and stable results also for somewhat larger time steps. Here, energy conservation is verified after each time step. 
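To make the propagation scheme concrete, the toy sketch below illustrates the matrix-multiplication and matrix-doubling idea on the energy grid described above. It is a schematic, not the CR-ENTREES implementation: only a single made-up continuous-loss channel and a constant escape time are included, whereas the actual transfer matrices are assembled from the pre-calculated interaction rates and yields, and doubling assumes the rates stay constant over the doubled interval.

```python
import numpy as np

n_bins = 300
E = np.logspace(-3, 12, n_bins)          # GeV grid, as in the text
dt = 1.0e4                               # s, example time step
a, t_esc = 1.0e-16, 7.5e5                # toy loss coefficient, escape time (s)

# Transfer matrix for one step: column j records where a particle of energy
# E[j] ends up after continuous losses dE/dt = -a E^2 and escape over dt.
T = np.zeros((n_bins, n_bins))
for j in range(n_bins):
    E_new = E[j] / (1.0 + a * E[j] * dt)
    i = max(np.searchsorted(E, E_new, side="right") - 1, 0)
    T[i, j] = np.exp(-dt / t_esc)        # survival probability against escape

# Matrix doubling: squaring T doubles the time step, so a propagation time of
# 2^k * dt needs only k matrix multiplications.
T_2dt = T @ T

F0 = E ** -2.0 * np.exp(-E / 1.0e9)      # injected power-law density with cutoff
F_after_two_steps = T_2dt @ F0
```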
### Code output CR-ENTREES' output encompasses the co-moving density of all propagated particle types after the pre-set propagation time, and interaction rates of all chosen processes, on the aforementioned energy grid. Post-processing of the output density within the emission volume to transform into the observer frame is carried out outside the CR-ENTREES framework. ## 4 Examples As an example for the use of CR-ENTREES we show here the development of a pair cascade initiated by a cosmic-ray proton population inside a magnetized, relativistically moving emission region along a straight, and a conical jet. In our example model the comoving emission region has an initial size of \(R_{\rm emi}=3\times 10^{16}\)cm, is located at a distance of \(10^{17}\)cm from the central engine, and moves with a speed that corresponds to a bulk Lorentz factor of \(\Gamma_{\rm J}=22.4=D\) (where \(D\) is the Doppler factor). We instantaneously inject a relativistic electron population, which follows a \(\propto\gamma_{e}^{-2}\) particle energy distribution between Lorentz factors \(\gamma_{e}=1\) and \(\gamma_{e}=100\), into the magnetized (with initial magnetic field strength of 10G) emission region. These electrons build up a synchrotron radiation field, which has its power peak in the X-ray domain (see Figs. 1, 2), resembling an HBL-like synchrotron spectrum to some extent. Cosmic-ray protons, injected along with the electron population with the same spectral index but in an extended particle energy range from \(\gamma_{p}=1\) to \(\gamma_{p}=10^{9}\), then initiate, via proton-photon interactions in this evolving synchrotron radiation field, a pair cascade, which we follow with time steps of \(\delta t=10^{4}\)s. With an initial particle-to-field energy density ratio of 10, and a proton-to-electron injection number density ratio of \(10^{5}\), the normalization of the two injected particle populations is set. In order to focus on the cascade development, we switch off proton synchrotron radiation in these simulations. All other processes, including charged particle escape (on a time scale \(t_{\rm esc,c}=3/4R_{\rm emi}/c\)) are kept active. Figure 1 shows the evolution of the broadband photon SED from a straight jet moving emission region in the galaxy frame at times \(t_{\rm obs}\cdot D=10^{4}\)s, \(2\times 10^{4}\)s, \(3\times 10^{4}\)s, \(10^{5}\)s, \(2\times 10^{5}\)s, \(8\times 10^{5}\)s (black lines) and \(t_{\rm obs}\cdot D=1.8\times 10^{6}\)s, \(2.8\times 10^{6}\)s, \(5.9\times 10^{6}\)s, \(8.9\times 10^{6}\)s (grey lines) after the injection. The synchrotron radiation field, target for particle-photon interactions, reaches its maximum power within less than one dynamical time scale, while its decline is much slower. Figure 3 shows the correspondingly strongly asymmetric X-ray light curve (dashed red line). Also the subsequently developing pair cascade responds with a fast flux increase, and a slower decrease past its power maximum (see the \(\gamma\)-ray light curves in Fig. 3). The 1 PeV neutrino light curve (violet line in Fig. 3) responds even more slowly: its power maximum is reached within less than a day (galaxy frame), and its decline starts clearly after the decrease of the photon cascade's emission. We then study such proton-initiated pair cascade development, using the same model parameters as above, in a moving emission region now within a conical jet of opening angle \(\sim 0.66^{\rm o}\). The adiabatic expansion of the emission region leads in this case to further energy losses. 
At the same time, the corresponding increase of the comoving escape time scales (from \(7.5\times 10^{5}\)s at injection time to \(3.7\times 10^{6}\)s at \(1.5\times 10^{7}\)s after injection at the end of our simulations) as well as the decline of the magnetic field \(\propto t^{-1}\) keep particles and photons inside the emission region and decrease radiative losses. As a result, the increase and decrease of the target and cascade photon emission is significantly prolonged, as can be appreciated from Figs. 2, 3 of our work: e.g., the duration during which the flux stays within one order of magnitude of the respective power maximum in the 3 energy bands shown appears a factor of 2-3 longer for the expanding conical jet as compared to the straight jet. From this study we conclude that the temporal evolution of the size of the emission region, e.g., as a result of the jet's shape, has a significant impact on the resulting multi-messenger light curves. ## 5 Conclusion We present CR-ENTREES (Cosmic-Ray ENergy TRansport in timE-Evolving astrophysical Settings), a cosmic-ray energy transport code that solves the nonlinear system of coupled Fokker-Planck transport equations for cosmic-ray nucleons, mesons, leptons, photons and neutrinos. Its flexibility allows one to study the resulting multi-messenger emission from particle and photon energy propagation in an environment that is itself changing with time. In this work CR-ENTREES is used to study the temporal evolution of pair cascades, initiated by the instantaneous injection of relativistic electrons and protons, in an expanding jet, as compared to a straight jet. We find that the cascade development in an expanding emission region, accompanied by a diluted field environment, leads to a significantly prolonged outburst of multi-messenger emission, as compared to the emission from a fixed-size region in a straight jet. Figure 1: Photon SED (host galaxy frame) from the emission region moving along a straight jet with parameters as described in Sect. 4, at (comoving) times \(t=10^{4}\)s (solid black line), \(2\times 10^{4}\)s (dotted black line), \(3\times 10^{4}\)s (dashed black line), \(10^{5}\)s (dashed-dotted black line), \(2\times 10^{5}\)s (dashed-triple-dotted black line), \(8\times 10^{5}\)s (long-dashed black line) and \(t=1.8\times 10^{6}\)s (solid grey line), \(2.8\times 10^{6}\)s (dotted grey line), \(5.9\times 10^{6}\)s (dashed grey line), \(8.9\times 10^{6}\)s (dashed-dotted grey line) after injection. The red dashed line indicates the 1 keV-slice, the blue dashed line the 100 MeV-slice and the pale blue dashed line the 1 TeV-slice through the flux-energy-time data cube. Figure 2: Photon SED (host galaxy frame) from the emission region moving along a conical jet with parameters as described in Sect. 4, at (comoving) times \(t=10^{4}\)s (solid black line), \(2\times 10^{4}\)s (dotted black line), \(3\times 10^{4}\)s (dashed black line), \(10^{5}\)s (dashed-dotted black line), \(2\times 10^{5}\)s (dashed-triple-dotted black line), \(8\times 10^{5}\)s (long-dashed black line) and \(t=1.8\times 10^{6}\)s (solid grey line), \(2.8\times 10^{6}\)s (dotted grey line), \(5.9\times 10^{6}\)s (dashed grey line), \(8.9\times 10^{6}\)s (dashed-dotted grey line), \(1.2\times 10^{7}\)s (dashed-triple-dotted grey line), \(1.5\times 10^{7}\)s (long-dashed grey line) after injection. The red dashed line indicates the 1 keV-slice, the blue dashed line the 100 MeV-slice and the pale blue dashed line the 1 TeV-slice through the flux-energy-time data cube. 
Figure 3: Light curves (galaxy frame) from the straight jet (dashed lines) and conical jet (solid lines) emission region taken at photon energies of 1 keV (red lines), 100 MeV (blue lines), 1 TeV (pale blue lines) and at neutrino energy 1 PeV (violet lines). ## Acknowledgements Financial support for this project was received from the Austrian Science Fund (FWF) under grant agreement number I 4144-N27 and the Slovenian Research Agency-ARRS (project no. N1-0111). MB has for this project received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 847476. The views and opinions expressed herein do not necessarily reflect those of the European Commission. LM acknowledges support from the DFG within the Collaborative Research Center SFB 1191 "Cosmic Interacting Matters - From Source to Signal".
2309.13654
Crystalline representations and $p$-adic Hodge theory for non-commutative algebraic varieties
Let $\mathcal{T}$ be an $\mathcal{O}_K$-linear idempotent-complete, small smooth proper stable $\infty$-category, where $K$ is a finite extension of $\mathbb{Q}_p$. We give a Breuil-Kisin module structure on the topological negative cyclic homology $\pi_i{\rm TC}^-(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_p)$, and prove a $K$-theory version of Bhatt-Morrow-Scholze's comparison theorems. Moreover, using Gao's Breuil-Kisin $G_K$-module theory and Du-Liu's $(\varphi,\hat{G})$-module theory, we prove the $\mathbb{Z}_p[G_K]$-module $T_{A_{\rm inf}}(\pi_i{\rm TC}^-(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_p)^{\vee})$ is a $\mathbb{Z}_p$-lattice of a crystalline representation. As a corollary, if the generic fibre of $\mathcal{T}$ admits a geometric realization in the sense of Orlov, we prove a comparison theorem between $K(1)$-local $K$ theory of the generic fibre and topological cyclic periodic homology theory of the special fibre with $B_{\rm crys}$-coefficients, in particular, we prove the $p$-adic representation of the $K(1)$-local $K$-theory of the generic fibre is a crystalline representation, this can be regarded as a non-commutative analogue of $p$-adic Hodge theory for smooth proper varieties proved by Tsuji and Faltings. This is the full version of arXiv:2305.00292, containing additional details and results.
Keiho Matsumoto
2023-09-24T14:41:58Z
http://arxiv.org/abs/2309.13654v2
# Crystalline representations and \(p\)-adic Hodge theory for non-commutative algebraic varieties ###### Abstract. Let \(\mathcal{T}\) be an \(\mathcal{O}_{K}\)-linear idempotent-complete, small smooth proper stable \(\infty\)-category, where \(K\) is a finite extension of \(\mathbb{Q}_{p}\). We give a Breuil-Kisin module structure on the topological negative cyclic homology \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\), and prove a \(K\)-theory version of Bhatt-Morrow-Scholze's comparison theorems. Moreover, using Gao's Breuil-Kisin \(G_{K}\)-module theory and Du-Liu's \((\varphi,\hat{G})\)-module theory, we prove the \(\mathbb{Z}_{p}[G_{K}]\)-module \(T_{A_{\operatorname{inf}}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\) is a \(\mathbb{Z}_{p}\)-lattice of a crystalline representation. As a corollary, if the generic fibre of \(\mathcal{T}\) admits a geometric realization in the sense of Orlov, we prove a comparison theorem between \(K(1)\)-local \(K\) theory of the generic fibre and topological cyclic periodic homology theory of the special fibre with \(B_{\operatorname{crys}}\)-coefficients; in particular, we prove the \(p\)-adic representation \(G_{K}\curvearrowright L_{K(1)}K(\mathcal{T}_{\widehat{K}})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) is a crystalline representation. This can be regarded as a non-commutative analogue of \(p\)-adic Hodge theory for smooth proper varieties proved by Tsuji and Faltings. ## 1. Introduction _Notation 1.1_.: Fix a prime \(p\). Let \(K\) be a finite extension of \(\mathbb{Q}_{p}\) with residue field \(k\). Here, \(\mathcal{O}_{K}\) is the ring of integers of \(K\) and \(\pi\in\mathcal{O}_{K}\) is a uniformizer. We write \(\mathcal{C}\) to denote the completion \(\hat{\overline{K}}\) of \(\overline{K}\) endowed with its unique absolute value extending the given absolute value on \(K\), let \(W\) be the Witt ring of \(k\), and let \(K_{0}\) be the fraction field of \(W\). Let \(\mathfrak{m}\) be the maximal ideal of \(\mathcal{O}_{K}\). For a spectrum \(S\), let \(L_{K(1)}S\) be the Bousfield localization of \(S\) with respect to mod \(p\) complex \(K\)-theory. Cohomology theories such as de Rham cohomology \(R\Gamma_{\operatorname{dR}}(-/K)\), Hodge cohomology \(R\Gamma_{\operatorname{Zar}}(-,\Omega_{-/K}^{*})\), \(l\)-adic cohomology \(R\Gamma_{\operatorname{et}}(-,\mathbb{Z}_{l})\) and crystalline cohomology \(R\Gamma_{\operatorname{crys}}(-/W(k))\) are important tools in the study of algebraic geometry and arithmetic geometry. On the other hand, homological invariants such as (topological) periodic homology, (topological) cyclic homology, and \(K\) theory play an important role in the study of \(C^{*}\)-algebras, Lie algebras and non-commutative geometry. There are deep and subtle links between cohomology invariants and homological invariants. One of the most well-known examples is the Atiyah-Hirzebruch spectral sequence. For a finite-dimensional CW-complex \(M\), Atiyah-Hirzebruch [1] prove that there is a spectral sequence: \[E_{2}^{i,j}=\left\{\begin{array}{ll}H_{\text{Sing}}^{i}(M,\mathbb{Z})&j\text{ even}\\ 0&j\text{ odd}\end{array}\right.\Longrightarrow K_{j-i}^{\text{top}}(M). \tag{1.1}\] The Thomason spectral sequence is an arithmetic-geometrical analogue of the Atiyah-Hirzebruch spectral sequence. 
For a smooth variety \(X\) over a field of characteristic \(0\), Thomason [14] shows that there exists a spectral sequence \[E_{2}^{i,j}=\left\{\begin{array}{ll}H_{\text{\'{e}t}}^{i}(X,\mathbb{Z}_{p}(l))&j=2l\\ 0&j\text{ odd}\end{array}\right.\Longrightarrow\pi_{j-i}L_{K(1)}K(X), \tag{1.2}\] and this spectral sequence degenerates after tensoring with \(\mathbb{Q}_{p}\). Besides, Hesselholt [10] shows a close relation between \(p\)-adic cohomology theory and topological cyclic homology. Bondal-Kapranov, Orlov [13] and Kontsevich [12] introduce _non-commutative algebraic geometry_ in which a dg-category (or a stable \(\infty\)-category) is studied as a _non-commutative space_. Nowadays, non-commutative algebraic geometry plays an important role in research on mirror symmetry, mathematical physics and algebraic geometry. Besides, homological invariants are well-defined for dg-categories and stable \(\infty\)-categories. It has been known that some comparison theorems between cohomology groups can be formulated naturally for dg-categories and stable \(\infty\)-categories: instead of cohomology theories, one can consider homological invariants. For a smooth proper dg-category \(\mathcal{T}\) over \(\mathbb{C}\), Kaledin [11] proves that there is an isomorphism \(\operatorname{HC}_{n}(\mathcal{T}/\mathbb{C})\simeq\bigoplus_{i\in\mathbb{Z}}\operatorname{HH}_{n+2i}(\mathcal{T}/\mathbb{C})\), which has been conjectured by Kontsevich-Soibelman [13] and can be regarded as a _non-commutative Hodge decomposition_ via Connes [12], Feigin-Tsygan [14] and Hochschild-Kostant-Rosenberg [15]. Besides, Blanc [1] conjectured that there is an equivalence \(\operatorname{Ch}^{\text{top}}\wedge_{\mathbb{S}}H\mathbb{C}:K_{\text{top}}(\mathcal{T})\wedge_{\mathbb{S}}H\mathbb{C}\to\operatorname{HP}(\mathcal{T}/\mathbb{C})\), which can be regarded as the _non-commutative de Rham comparison theorem_, and in some cases, the equivalence is proved by A. Kahn [11]. For a smooth proper stable \(\infty\)-category \(\mathcal{T}\) over \(W\), Scholze proves that there is an isomorphism of \(W\)-modules \(\pi_{n}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\simeq\pi_{n}\operatorname{HP}(\mathcal{T}/W)\) (this result is not yet published), and Petrov-Vologodsky obtain the same result for any stable \(\infty\)-categories [16]. In the study of \(p\)-adic cohomology theories, crystalline comparison theory [13], [17] states that for a smooth proper variety \(X\) over \(\mathcal{O}_{K}\), the \(p\)-adic etale cohomology \(H_{\text{\'{e}t}}^{i}(X_{\mathcal{C}},\mathbb{Q}_{p})\otimes_{\mathbb{Q}_{p}}B_{\text{crys}}\) is isomorphic to \(H_{cry}^{i}(X_{k}/W)\otimes_{W}B_{\text{crys}}\), and the isomorphism is compatible with \(G_{K}\)-action, Frobenius endomorphism and filtration. We study a non-commutative version of the crystalline comparison theorem. For a commutative ring \(R\), we will refer to \(R\)-linear idempotent-complete, small stable \(\infty\)-categories simply as \(R\)-linear categories. For an \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), \(G_{K}\) acts continuously on the \(\mathbb{Z}_{p}\)-module \(\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\), and there is a Frobenius operator \(\operatorname{Fr}:\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\stackrel{{\text{can}}}{{\simeq}}\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\stackrel{{\varphi}}{{\to}}\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\) (see [1]). 
Inspired by Petrov and Vologodsky's work on non-commutative crystalline cohomology theory [17], and the study of motivic filtration of the \(K(1)\)-\(K\) theory [14] and the topological periodic homology [1], we propose the following conjecture. **Conjecture 1.2** (Non-commutative version of crystalline comparison theorem [13], [15]).: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then there is an isomorphism of \(B_{\mathrm{crys}}\)-modules:_ \[\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\otimes_{W}B_{\mathrm{crys}}\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}B_{\mathrm{crys}}\] _which is compatible with \(G_{K}\)-action and Frobenius endomorphism. In particular, the \(p\)-adic representation \(\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) is crystalline._ For a smooth proper variety \(X\) over \(\mathcal{O}_{K}\), by [14] one has a \(G_{K}\)-equivariant isomorphism \(\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\simeq\bigoplus_{n\in\mathbb{Z}}H^{i+2n}_{\mathrm{\acute{e}t}}(X_{\mathcal{C}},\mathbb{Z}_{p}(n))\), and similarly by [1] one has an isomorphism \(\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\simeq\bigoplus_{n\in\mathbb{Z}}H^{i+2n}_{\mathrm{cry}}(X_{k}/W[\frac{1}{p}])(n)\) of isocrystals, thus the conjecture holds for \(\mathcal{T}=\operatorname{perf}(X)\) via the crystalline comparison theorem [13]. In this paper, we approach this conjecture via a \(K\)-theoretic version of Bhatt-Morrow-Scholze's comparison theorem [1]. We will use the language of stable \(\infty\)-categories, following Lurie [11]. **Definition 1.3**.: For an \(\mathbb{E}_{\infty}\)-ring \(R\), we let \(\operatorname{Cat}_{\infty}^{\operatorname{perf}}(R)\) denote the \(\infty\)-category of \(R\)-linear categories, where the morphisms are exact functors. **Definition 1.4**.: For an \(\mathbb{E}_{\infty}\)-ring \(R\), we let \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}(R)\) denote the \(\infty\)-category of smooth proper \(R\)-linear categories, where the morphisms are exact functors. **Definition 1.5** (See also [1, section 4.1]).: For an \(\mathbb{E}_{\infty}\)-ring \(R\), an \(R\)-linear category \(\mathcal{T}\)_admits a geometric realization_ if there is a derived scheme \(X\) over \(R\) such that the truncation \(\pi_{0}(X)\) is a separated scheme of finite type over \(\pi_{0}(R)\) and there is a fully faithful admissible inclusion \(\mathcal{T}\subset\operatorname{perf}(X)\) of \(R\)-linear categories (see [1, Definition 3.1]). In many cases, a stable \(\infty\)-category is known to admit a geometric realization. For example, the derived Fukaya category of a symplectic manifold is known or expected to admit a geometric realization from the study of mirror symmetry. Besides, Lunts-Bergh-Schnurer [1] proved that the stable \(\infty\)-category of perfect complexes on a smooth proper Deligne-Mumford stack admits a geometric realization. _Remark 1.6_.: Assume \(R\) is an algebraically closed field of characteristic \(0\). In [1, Question 4.4], Orlov asked if there exist \(R\)-linear idempotent-complete, small smooth proper stable \(\infty\)-categories which do not admit a geometric realization. This is still an important open problem. 
_Remark 1.7_.: If a smooth proper \(R\)-linear category \(\mathcal{T}\) admits a geometric realization \(\mathcal{T}\hookrightarrow\operatorname{perf}(X)\), the dual \(\mathcal{T}^{\operatorname{op}}\in\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}\) also admits a geometric realization \(\mathcal{T}^{\operatorname{op}}\hookrightarrow\operatorname{perf}(X)^{\operatorname{op}}=\operatorname{perf}(X)\). Fix a sequence of elements \(\zeta_{n}\in\overline{K}\) inductively such that \(\zeta_{0}=1\) and \((\zeta_{n+1})^{p}=\zeta_{n}\), let \(\varepsilon=(\zeta_{0},\zeta_{1},\zeta_{2},...)\in\varprojlim_{\mathrm{Frob}}\mathcal{O}_{\mathcal{C}}/p\), and let \([\varepsilon]\in A_{\inf}\) be the Teichmuller lift. Let \(\mathfrak{S}=W[[z]]\), and let \(\theta:\mathfrak{S}\to\mathcal{O}_{K}\) be the usual map whose kernel is generated by the Eisenstein polynomial \(E\) of \(\pi\). We choose a generator \(\xi\in A_{\inf}\) of the kernel of the canonical map \(A_{\inf}\to\mathcal{O}_{\mathcal{C}}\). We write \(\mu=[\varepsilon]-1\). Let \(\varphi:\mathfrak{S}\to\mathfrak{S}\) be the Frobenius endomorphism which is the Frobenius on \(W\) and sends \(z\) to \(z^{p}\). We prove a non-commutative version of Bhatt-Morrow-Scholze's Breuil-Kisin cohomology theory \(R\Gamma_{\mathfrak{S}}(-)\)[1]. **Theorem 1.8** (Theorem 2.17, Non-commutative version of [1]).: _Let \(\mathcal{T}\) be an \(\mathcal{O}_{K}\)-linear smooth proper category. Then there is a natural number \(n\) such that the following hold:_ 1. _For any_ \(i\geq n\)_,_ \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) _is a Breuil-Kisin module._ 2. _For any_ \(i\geq n\)_,_ \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) _is a Breuil-Kisin module of finite_ \(E\)_-height._ 3. _(\(K(1)\)-local \(K\)-theory comparison) Assume_ \(\mathcal{T}_{\mathcal{C}}\) _admits a geometric realization. For any_ \(i\geq n\)_, after scalar extension along_ \(\overline{\phi}:\mathfrak{S}\to A_{\inf}\) _which sends_ \(z\) _to_ \([\pi^{\flat}]^{p}\) _and is the Frobenius on_ \(W\)_, one recovers the_ \(K(1)\)_-local_ \(K\)_-theory of the generic fiber_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\overline{\phi}}A_{\inf}[\frac{1}{\mu}]_{p}^{\wedge}\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}A_{\inf}[\frac{1}{\mu}]_{p}^{\wedge}\] 4. _(topological periodic homology theory comparison) For any_ \(i\geq n\)_, after scalar extension along the map_ \(\tilde{\phi}:\mathfrak{S}\to W\) _which is the Frobenius on_ \(W\) _and sends_ \(z\) _to_ \(0\)_, one recovers topological periodic homology theory of the special fibre:_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\tilde{\phi}}K_{0}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}].\] Bhatt-Morrow-Scholze's Breuil-Kisin cohomology theory [1, Theorem 1.2] implies the crystalline comparison theorem. On the other hand, Theorem 1.8 does not imply Conjecture 1.2. This difference arises as follows. On Breuil-Kisin cohomology theory, there is a \(G_{K}\)-equivariant isomorphism (see [1, Theorem 1.8 (iii)]) \[R\Gamma_{A_{\inf}}(X_{\mathcal{O}_{\mathcal{C}}})\otimes_{A_{\inf}}A_{\mathrm{cry}}\simeq R\Gamma_{\mathrm{cry}}(X_{\mathcal{O}_{\mathcal{C}}/p}/A_{\mathrm{cry}}). 
\tag{1.3}\] Combining (1.3) with [1, Theorem 1.8 (iv)], there is a canonical \((G_{K},\varphi)\)-equivariant isomorphism \[R\Gamma_{\mathrm{\acute{e}t}}(X_{\mathcal{C}},\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}A_{\mathrm{cry}}[\frac{1}{p\mu}]\simeq R\Gamma_{\mathrm{cry}}(X_{\mathcal{O}_{\mathcal{C}}/p}/A_{\mathrm{cry}})[\frac{1}{p\mu}]. \tag{1.4}\] The isomorphism induces the crystalline comparison theorem (see [1, Theorem 14.4]). On the other hand, the non-commutative analogue of (1.3) becomes the following \(G_{K}\)-equivariant isomorphism \[\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\otimes_{A_{\inf}}\widehat{A_{\mathrm{cry}}}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}/p};\mathbb{Z}_{p}), \tag{1.5}\] where \(\widehat{A_{\rm cry}}\) is the completion of \(A_{\rm cry}\) with respect to the Nygaard filtration (see [1, Definition 8.9]). The problem is that \(\mu\) is a zero-divisor in \(\widehat{A_{\rm cry}}\) (see [1, Corollary 2.11 and Corollary 2.12]). Thus, the non-commutative analogue of (1.4) becomes the trivial equation. Therefore we will study \(\pi_{i}\,{\rm TC}^{-}(\mathcal{T}/\mathbb{S}[z];\ \mathbb{Z}_{p})\) in more detail. In section 3, we will show that the dual Breuil-Kisin module \(\pi_{i}\,{\rm TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) admits a Breuil-Kisin \(G_{K}\)-module structure in the sense of Gao [1]. In section 4, using Du-Liu's work on \((\varphi,\hat{G})\)-modules [11], we will prove the following. **Theorem 1.9**.: _Let \(\mathcal{T}\) be an \(\mathcal{O}_{K}\)-linear smooth proper category. Then there is a natural number \(n\) such that the following hold for any \(i\geq n\):_ 1. _The_ \(\mathbb{Z}_{p}[G_{K}]\)_-module_ \(T_{A_{\rm inf}}(\pi_{i}\,{\rm TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\) _is a_ \(\mathbb{Z}_{p}\)_-lattice of a crystalline representation._ 2. _If_ \(\mathcal{T}_{\mathcal{C}}\) _admits a geometric realization, then there is a_ \(G_{K}\)_-equivariant isomorphism_ \[T_{A_{\rm inf}}(\pi_{i}\,{\rm TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee}\] _of_ \(\mathbb{Z}_{p}\)_-modules._ We prove the following as a corollary. **Theorem 1.10** (Main Theorem, Theorem 4.18).: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. If \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization, then Conjecture 1.2 holds for \(\mathcal{T}\)._ ## Acknowledgement The author would like to thank Federico Binda, Lars Hesselholt, Buntaro Kakinoki, Shunsuke Kano, Hyungseop Kim and Hiroyasu Miyazaki for helpful discussions related to this subject. The author would also like to thank Ryomei Iwasa for comments on an earlier draft and Alexander Petrov for useful comments on a draft, and Tasuki Kinjo for helpful discussions about derived schemes, Isamu Iwanari, Atsushi Takahashi and Shinnosuke Okawa for helpful discussions about non-commutative algebraic geometry. The author is deeply grateful to Hui Gao for helpful discussions about \((\varphi,\hat{G})\)-modules and for sharing his ideas about Theorem 1.9 and Theorem 1.10 with the author. ## 2. Non-commutative version of Breuil-Kisin cohomology ### Breuil-Kisin modules and Breuil-Kisin cohomology theory \(R\Gamma_{\mathfrak{S}}\) Let us start by recalling the theory of Breuil-Kisin modules. 
**Definition 2.1**.: A Breuil-Kisin module is a finitely generated \(\mathfrak{S}\)-module \(M\) equipped with a \(\mathfrak{S}\)-linear isomorphism \[\varphi_{M}:M\otimes_{\mathfrak{S},\varphi}\mathfrak{S}[\frac{1}{E}]\simeq M[\frac{1}{E}].\] For a Breuil-Kisin module \(\mathfrak{M}\), let us denote \(\mathfrak{M}^{\vee}=\operatorname{Hom}_{\mathfrak{S}}(\mathfrak{M},\mathfrak{S})\). We note that \(\mathfrak{M}^{\vee}\) is a Breuil-Kisin module whose Frobenius map \(\varphi_{\mathfrak{M}^{\vee}}\) is given by \[\mathfrak{M}^{\vee}\otimes_{\mathfrak{S},\varphi}\mathfrak{S}[\frac{1}{E}]\simeq\operatorname{Hom}_{\mathfrak{S}[\frac{1}{E}]}(\mathfrak{M}\otimes_{\mathfrak{S},\varphi}\mathfrak{S}[\frac{1}{E}],\mathfrak{S}[\frac{1}{E}])\stackrel{{\varphi_{\mathfrak{M}}^{\vee}}}{{\simeq}}\operatorname{Hom}_{\mathfrak{S}[\frac{1}{E}]}(\mathfrak{M}[\frac{1}{E}],\mathfrak{S}[\frac{1}{E}])\simeq\mathfrak{M}^{\vee}[\frac{1}{E}], \tag{2.1}\] where we use the facts that \(\varphi:\mathfrak{S}\to\mathfrak{S}\) and \(\mathfrak{S}\to\mathfrak{S}[\frac{1}{E}]\) are flat. We note that \(\mathfrak{M}^{\vee}\) is a finite free \(\mathfrak{S}\)-module; this follows from the fact that \(\operatorname{gl.dim}\mathfrak{S}=2\) together with [1, Proposition 4.3]. **Lemma 2.2** ([18, Corollaire 11.1.14] and [1, Lemma 4.27]).: _Let \(\overline{\phi}:\mathfrak{S}\to A_{\inf}\) be the map that sends \(z\) to \([\pi^{\flat}]^{p}\) and is the Frobenius on \(W\). Let \(M\) be a Breuil-Kisin module, and let \(M_{A_{\inf}}=M\otimes_{\mathfrak{S},\overline{\phi}}A_{\inf}\). Then \(M^{\prime}[\frac{1}{p}]=M_{A_{\inf}}[\frac{1}{p}]\otimes_{A_{\inf}[\frac{1}{p}]}K_{0}\) is a finite free \(K_{0}\)-module equipped with a Frobenius automorphism. Fix a section \(k\to\mathcal{O}_{\mathcal{C}}/p\); then there is a (noncanonical) \(\varphi\)-equivariant isomorphism_ \[M_{A_{\inf}}\otimes_{A_{\inf}}B_{\mathrm{crys}}\simeq M^{\prime}[\frac{1}{p}]\otimes_{K_{0}}B_{\mathrm{crys}}.\] In [1], Bhatt-Morrow-Scholze constructed a cohomology theory valued in Breuil-Kisin modules for smooth proper formal schemes over \(\mathcal{O}_{K}\). Let \(\varphi_{A_{\inf}}:A_{\inf}\to A_{\inf}\) be the Frobenius endomorphism of \(A_{\inf}\). Let \(\phi:\mathfrak{S}\to A_{\inf}\) be the \(W\)-linear map that sends \(z\) to \([\pi^{\flat}]\). Note that the diagram (2.2) commutes. **Theorem 2.3** ([1]).: _Let \(X/\mathcal{O}_{K}\) be a smooth proper formal scheme. Then there exists a \(\mathfrak{S}\)-linear cohomology theory \(R\Gamma_{\mathfrak{S}}(X)\) equipped with a \(\varphi\)-semilinear map, with the following properties:_ 1. _All_ \(H^{i}_{\mathfrak{S}}(X):=H^{i}(R\Gamma_{\mathfrak{S}}(X))\) _are Breuil-Kisin modules._ 2. _(etale comparison) After scalar extension along_ \(\overline{\phi}:\mathfrak{S}\to A_{\inf}\)_, one recovers etale cohomology of the generic fiber_ \[R\Gamma_{\mathfrak{S}}(X)\otimes_{\mathfrak{S}}A_{\inf}[\frac{1}{\mu}]\simeq R\Gamma_{\mathrm{et}}(X_{\mathcal{C}},\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}A_{\inf}[\frac{1}{\mu}]\] 3. _(crystalline comparison) After scalar extension along the map_ \(\mathfrak{S}\to W\)_, which is the Frobenius on_ \(W\) _and sends_ \(z\) _to_ \(0\)_, one recovers crystalline cohomology of the special fibre:_ \[R\Gamma_{\mathfrak{S}}(X)\otimes_{\mathfrak{S}}^{\mathbb{L}}W\simeq R\Gamma_{\mathrm{crys}}(X_{k}/W).\] _ 4. 
_(de Rham comparison) After scalar extension along the map_ \(\tilde{\theta}\circ\varphi:\mathfrak{S}\to\mathcal{O}_{K}\)_, one recovers de Rham cohomology:_ \[R\Gamma_{\mathfrak{S}}(X)\otimes_{\mathfrak{S}}^{\mathbb{L}}\mathcal{O}_{K}\simeq R\Gamma_{dR}(X/\mathcal{O}_{K}).\] ### Perfect modules and Kunneth formula Let \((\mathcal{A},\otimes,1_{\mathcal{A}})\) be a symmetric monoidal, stable \(\infty\)-category with biexact tensor product. First, we recall some definitions from [1, section 1]. **Definition 2.4** ([1, Definition 1.2]).: An object \(X\in\mathcal{A}\) is perfect if it belongs to the thick subcategory generated by the unit. For a lax symmetric monoidal, exact \(\infty\)-functor \(F:\mathcal{A}\to\mathrm{Sp}\), \(F(1_{\mathcal{A}})\) is naturally an \(\mathbb{E}_{\infty}\)-ring. For any \(X,Y\in\mathcal{A}\), we have a natural map \[F(X)\otimes_{F(1_{\mathcal{A}})}F(Y)\to F(X\otimes Y). \tag{2.3}\] Since \(F\) is exact, if \(X\) is perfect, then the map (2.3) is an equivalence, and \(F(X)\) is a perfect \(F(1_{\mathcal{A}})\)-module. We regard \(\mathcal{O}_{K}\) as an \(\mathbb{S}[z]\)-algebra via \(z\mapsto\pi\). There is a symmetric monoidal \(\infty\)-functor \[\mathrm{THH}(-/\mathbb{S}[z];\mathbb{Z}_{p}):\mathrm{Cat}_{\infty}^{\mathrm{perf}}(\mathcal{O}_{K})\to\mathrm{Mod}_{\mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\mathrm{Sp}^{BS^{1}}).\] Let us study this functor from [1, section 11]. By the base change along \(\mathbb{S}[z]\to\mathbb{S}[z^{1/p^{\infty}}]\mid z\mapsto z\), there is a natural equivalence \[\mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z])\otimes_{\mathbb{S}[z]}\mathbb{S}[z^{1/p^{\infty}}]\simeq\mathrm{THH}(\mathcal{O}_{K}[\pi^{1/p^{\infty}}]/\mathbb{S}[z^{1/p^{\infty}}]). \tag{2.4}\] The natural map \(\mathrm{THH}(\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\to\mathbb{S}[z^{1/p^{\infty}}]_{p}^{\wedge}\) is an equivalence (see [1, Proposition 11.7]); hence we obtain equivalences \[\mathrm{THH}(\mathcal{O}_{K}[\pi^{1/p^{\infty}}];\mathbb{Z}_{p}) \simeq \mathrm{THH}(\mathcal{O}_{K}[\pi^{1/p^{\infty}}]/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\] \[\overset{(2.4)}{\simeq} \mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathbb{S}[z]_{p}^{\wedge}}\mathbb{S}[z^{1/p^{\infty}}]_{p}^{\wedge}.\] Since the morphism of homotopy groups \(\pi_{*}(\mathbb{S}[z]_{p}^{\wedge})\to\pi_{*}(\mathbb{S}[z^{1/p^{\infty}}]_{p}^{\wedge})\) is faithfully flat and there is an isomorphism \(\pi_{*}\,\mathrm{THH}(\mathcal{O}_{K}[\pi^{1/p^{\infty}}];\mathbb{Z}_{p})\simeq\mathcal{O}_{K}[\pi^{1/p^{\infty}}][u]\), where \(\deg u=2\) (see [1, section 6]), we obtain an isomorphism \[\pi_{*}\,\mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\simeq\mathcal{O}_{K}[u], \tag{2.5}\] where \(u\) has degree \(2\) (see [1, Proposition 11.10]). **Proposition 2.5**.: _Any dualizable object in \(\mathrm{Mod}_{\mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\mathrm{Sp}^{BS^{1}})\) is perfect._ Proof.: By the isomorphism (2.5), \(\pi_{*}\,\mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a regular noetherian ring of finite Krull dimension concentrated in even degrees. We obtain the claim by [1, Theorem 2.15]. **Proposition 2.6**.: _Let \(\mathcal{T}_{1},\mathcal{T}_{2}\) be \(\mathcal{O}_{K}\)-linear categories and suppose \(\mathcal{T}_{1}\) is smooth and proper. 
Then \(\mathrm{THH}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\) is perfect in \(\mathrm{Mod}_{\mathrm{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\mathrm{Sp}^{BS^{1}})\), and \(\mathrm{TC}^{-}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\) (resp. \(\mathrm{TP}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\))_ _is a perfect \(\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\)-module (resp. \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\)-module) and the natural map_ \[\operatorname{TC}^{-}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}\operatorname{TC}^{-}(\mathcal{T}_{2}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TC}^{-}(\mathcal{T}_{1}\otimes_{\mathcal{O}_{K}}\mathcal{T}_{2}/\mathbb{S}[z];\mathbb{Z}_{p})\] (_resp. \(\operatorname{TP}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}\operatorname{TP}(\mathcal{T}_{2}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{1}\otimes_{\mathcal{O}_{K}}\mathcal{T}_{2}/\mathbb{S}[z];\mathbb{Z}_{p})\) _) is an equivalence._ Proof.: Note that the \(\infty\)-functors \[(-)^{hS^{1}}:\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\to\operatorname{Mod}_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp})\] and \[(-)^{tS^{1}}:\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\to\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp})\] are lax symmetric monoidal, exact (see [21, Corollary I.4.3]). It is enough to show \(\operatorname{THH}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\) is perfect in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). Since the \(\infty\)-functor \[\operatorname{THH}(-/\mathbb{S}[z];\mathbb{Z}_{p}):\operatorname{Cat}_{\infty}^{\operatorname{perf}}(\mathcal{O}_{K})\to\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\] is symmetric monoidal and \(\mathcal{T}_{1}\) is dualizable in \(\operatorname{Cat}_{\infty}^{\operatorname{perf}}(\mathcal{O}_{K})\) (cf. [33, Ch. 11]), \(\operatorname{THH}(\mathcal{T}_{1}/\mathbb{S}[z];\mathbb{Z}_{p})\) is dualizable in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). By Proposition 2.5, we obtain the claim. ### Breuil-Kisin module of stable \(\infty\)-categories Let us recall Antieau-Mathew-Nikolaus's comparison theorem of symmetric monoidal \(\infty\)-functors from [1]. **Proposition 2.7** ([1, Proposition 4.6]).: _Let \(\mathcal{T},\hat{\mathcal{T}}\) be symmetric monoidal \(\infty\)-categories. Let \(F_{1},F_{2}:\mathcal{T}\to\hat{\mathcal{T}}\) be symmetric monoidal functors and let \(t:F_{1}\Longrightarrow F_{2}\) be a symmetric monoidal natural transformation. Suppose every object of \(\mathcal{T}\) is dualizable. 
Then \(t\) is an equivalence._ In [1, Proposition 11.10], Bhatt-Morrow-Scholze showed the following: on homotopy groups, there exist isomorphisms \[\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\simeq\mathfrak{S}[u,v]/(uv-E)\] where \(u\) is of degree \(2\) and \(v\) is of degree \(-2\), and \[\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\simeq\mathfrak{S}[\sigma^{\pm}]\] where \(\sigma\) is of degree \(2\). Let \(\varphi\) be the endomorphism of \(\mathfrak{S}\) determined by the Frobenius on \(W\) and \(z\mapsto z^{p}\). Scholze-Nikolaus [21] construct two maps \[\operatorname{can},\varphi_{\mathcal{T}}^{hS^{1}}:\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\rightrightarrows\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}).\] In [1, Proposition 11.10], Bhatt-Morrow-Scholze also showed the morphism \[\operatorname{can}_{\mathcal{O}_{K}}:\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\to\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p}),\] is \(\mathfrak{S}\)-linear and sends \(u\) to \(E\sigma\) and \(v\) to \(\sigma^{-1}\), and the morphism \[\varphi_{\mathcal{O}_{K}}^{hS^{1}}:\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\to\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\] is \(\varphi\)-linear and sends \(u\) to \(\sigma\) and \(v\) to \(\varphi(E)\sigma^{-1}\). Since \(\sigma\) is an invertible element in \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\), \(\varphi^{hS^{1}}_{\mathcal{O}_{K}}\) induces a map \[\tilde{\varphi}^{hS^{1}}_{\mathcal{T}}:\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\to\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}) \tag{2.6}\] for an \(\mathcal{O}_{K}\)-linear stable \(\infty\)-category \(\mathcal{T}\). We note that on homotopy groups, the morphism \[\tilde{\varphi}^{hS^{1}}_{\mathcal{O}_{K}}:\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\to\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\] is given by \(\mathfrak{S}[u^{\pm}]\to\mathfrak{S}[\sigma^{\pm}]\) which is \(\varphi\)-semi-linear and sends \(u\) to \(\sigma\). **Lemma 2.8**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then the following hold:_ 1. \(\pi_{*}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) _is a finitely generated_ \(\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\)_-module, thus_ \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) _is a finitely generated_ \(\mathfrak{S}\)_-module for all_ \(i\)_._ 2. _There is a natural number_ \(n\) _such that for any_ \(j\geq n\)_,_ \(\pi_{j}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\xrightarrow{u\cdot}\pi_{j+2}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) _is an isomorphism._ Proof.: Claim (1) directly follows from Proposition 2.6. For any \(j\geq 0\), by the calculation of \(\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\) (see [1, Proposition 11.10]), we know \(\pi_{j}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\xrightarrow{u\cdot}\pi_{j+2}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\) is an isomorphism. This yields claim (2) by claim (1). 
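As a quick consistency check (this verification is ours and is not part of the original text), the explicit descriptions of \(\operatorname{can}_{\mathcal{O}_{K}}\) and \(\varphi_{\mathcal{O}_{K}}^{hS^{1}}\) above respect the relation \(uv=E\) in \(\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\simeq\mathfrak{S}[u,v]/(uv-E)\): since \(\operatorname{can}_{\mathcal{O}_{K}}\) is \(\mathfrak{S}\)-linear and \(\varphi_{\mathcal{O}_{K}}^{hS^{1}}\) is \(\varphi\)-semilinear, \[\operatorname{can}_{\mathcal{O}_{K}}(u)\cdot\operatorname{can}_{\mathcal{O}_{K}}(v)=E\sigma\cdot\sigma^{-1}=E=\operatorname{can}_{\mathcal{O}_{K}}(E),\qquad\varphi_{\mathcal{O}_{K}}^{hS^{1}}(u)\cdot\varphi_{\mathcal{O}_{K}}^{hS^{1}}(v)=\sigma\cdot\varphi(E)\sigma^{-1}=\varphi(E)=\varphi_{\mathcal{O}_{K}}^{hS^{1}}(E),\] so both maps are indeed well defined on the presentation \(\mathfrak{S}[u,v]/(uv-E)\).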
**Theorem 2.9**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then there is a natural number \(n\) such that the homotopy group \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a Breuil-Kisin module for any \(i\geq n\), and the dual \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) is a Breuil-Kisin module of finite \(E\)-height for any \(i\geq n\)._ Proof.: The morphism \(\tilde{\varphi}^{hS^{1}}_{\mathcal{T}}\) induces a morphism of \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\)-modules: \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}],\tilde{\varphi}^{hS^{1}}_{\mathcal{O}_{K}}}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}). \tag{2.7}\] By Proposition 2.6, both sides of the map (2.7) yield symmetric monoidal functors from \(\operatorname{Cat}^{\operatorname{perf}}_{\infty,\operatorname{sat}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp})\), and the map (2.7) yields a symmetric monoidal natural transformation between them. By Proposition 2.7, the morphism (2.7) is an equivalence. On the \(0\)-th homotopy group, \(\tilde{\varphi}^{hS^{1}}_{\mathcal{O}_{K}}\) is given by \(\varphi:\mathfrak{S}\to\mathfrak{S}\). Since \(\varphi\) is flat, one has an isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\otimes_{\mathfrak{S},\varphi}\mathfrak{S}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}) \tag{2.8}\] for any \(i\). By Lemma 2.8 (2), one obtains an isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\] for any \(i\geq n\), thus we have a \(\mathfrak{S}\)-linear isomorphism on homotopy groups: \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\varphi}\mathfrak{S}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}) \tag{2.9}\] for any \(i\geq n\). After inverting \(E\in\mathfrak{S}\simeq\pi_{0}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\), the morphism \(\operatorname{can}_{\mathcal{T}}\) induces a morphism of \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\)-modules: \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}],\operatorname{can}_{\mathcal{O}_{K}}}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\to\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]. \tag{2.10}\] Both sides of the map (2.10) yield symmetric monoidal functors from \(\operatorname{Cat}^{\operatorname{perf}}_{\infty,\operatorname{sat}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]}(\operatorname{Sp})\), and the map (2.10) yields a symmetric monoidal natural transformation between them. Thus the morphism (2.10) is an equivalence. 
Note that the morphism \[\operatorname{can}_{\mathcal{O}_{K}}[\frac{1}{E}]:\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\to\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\] is an isomorphism (see [1, Proposition 11.10]). This yields an isomorphism \[\operatorname{can}_{\mathcal{T}}[\frac{1}{E}]:\pi_{*}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\simeq\pi_{*}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]. \tag{2.11}\] Combining (2.11) with (2.9), we obtain an isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\varphi}\mathfrak{S}[\frac{1}{E}]\stackrel{{(2.9)}}{{\simeq}}\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\stackrel{{(2.11)}}{{\simeq}}\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}] \tag{2.12}\] for any \(i\geq n\). Let us study the dual of (2.12). We have a morphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\varphi}\mathfrak{S}=\operatorname{Hom}_{\mathfrak{S}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\varphi}\mathfrak{S},\mathfrak{S})\] \[\stackrel{{(2.9)}}{{\simeq}}\operatorname{Hom}_{\mathfrak{S}}(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}),\mathfrak{S})\stackrel{{\operatorname{can}_{\mathcal{T}}^{\vee}}}{{\to}}\operatorname{Hom}_{\mathfrak{S}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}),\mathfrak{S}) \tag{2.13}\] After localization by \(E\), the morphism (2.13) coincides with the dual of (2.12). ### The comparison between \(\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) and \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) In this section, for a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), we will compare \(\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) with \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\). Denote \(K_{\infty}=K(\pi^{1/p^{\infty}})\) and denote by \(\mathcal{O}_{K_{\infty}}\) the ring of integers of \(K_{\infty}\). **Lemma 2.10** ([1, Corollary 11.8]).: _For any \(\mathcal{O}_{K}\)-linear stable \(\infty\)-category \(\mathcal{T}\), the natural map_ \[\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\] _is an equivalence which is compatible with the \(S^{1}\)-action and the \(G_{K_{\infty}}\)-action. 
In particular, the natural map_ \[\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\] _is an equivalence which is compatible with \(G_{K_{\infty}}\)-action._ Proof.: The morphism \(\mathbb{S}[z^{1/p^{\infty}}]\to\mathcal{O}_{K_{\infty}}\mid z^{1/p^{n}}\mapsto\pi^{1/p^{n}}\) fits into the commutative diagram (2.14). The diagram yields a map \[\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\simeq\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\otimes_{\operatorname{THH}(\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}\mathbb{S}[z^{1/p^{\infty}}]_{p}^{\wedge}\] which is compatible with \(S^{1}\)-action. By [1, Proposition 11.7], the natural map \(\operatorname{THH}(\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\to\mathbb{S}[z^{1/p^{\infty}}]_{p}^{\wedge}\) is an equivalence. Since \(\pi^{1/p^{n}}\) is in \(K_{\infty}\) for any \(n\), the equivalence is \(G_{K_{\infty}}\)-equivariant. **Proposition 2.11**.: _For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\) is a perfect object in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). In particular, \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\) is perfect in \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}(\operatorname{Sp})\)._ Proof.: We already know \(\operatorname{THH}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a perfect object in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\) (see Proposition 2.6). Besides, the functor \[-\otimes_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z];\mathbb{Z}_{p}):\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\to\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\] is exact and sends \(\operatorname{THH}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) to \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z];\mathbb{Z}_{p})\), so \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a perfect object in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). 
Since the functor \[-\otimes_{\operatorname{THH}(\mathbb{S}[z^{1/p^{\infty}}]/\mathbb{S}[z];\mathbb{Z}_{p})}\mathbb{S}[z^{1/p^{\infty}}]_{p}^{\wedge}:\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\to\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\] is exact and sends \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z];\mathbb{Z}_{p})\) to \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\), \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\) is a perfect object in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). Lemma 2.10 and Proposition 2.11 imply the following. **Proposition 2.12**.: _For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) is a perfect object in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\)._ The following is a non-commutative version of the comparison theorem between \(R\Gamma_{\mathfrak{S}}\) and \(R\Gamma_{A_{\inf}}\) in [1, Theorem 1.2 (1)]. **Theorem 2.13**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then there is a natural number \(n\) such that there is a \(G_{K_{\infty}}\)-equivariant isomorphism_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\overline{\phi}}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\] _for any \(i\geq n\), where \(\overline{\phi}\) is the map which sends \(z\) to \([\pi^{\flat}]^{p}\) and is the Frobenius on \(W\), and \(g\in G_{K_{\infty}}\) acts by \(1\otimes g\) on the left-hand side._ Proof.: For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), consider the following morphism \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\stackrel{{\varphi_{\mathcal{T}}^{hS^{1}}}}{{\to}}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\] where the second map is given by the diagram (2.14). The map sends \(u\) to \(\sigma\), and since \(\sigma\) is an invertible element in \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\), we have a morphism \[\underline{\varphi}_{\mathcal{T}}^{hS^{1}}:\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p}). 
\tag{2.15}\] Let us prove that the map \(\underline{\varphi}_{\mathcal{T}}^{hS^{1}}\) yields an equivalence of \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\)-modules: \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}],\underline{\varphi}_{\mathcal{O}_{K}}^{hS^{1}}}\operatorname{TP}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\simeq\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p}) \tag{2.16}\] which is compatible with \(G_{K_{\infty}}\)-action. By Proposition 2.6, the left side of the map (2.16) yields a symmetric monoidal functor from \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}(\operatorname{Sp})\). By Proposition 2.11, the right side of the map (2.16) also yields a symmetric monoidal functor from \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})}(\operatorname{Sp})\). By [3, section 11], on homotopy groups, the morphism \[\underline{\varphi}_{\mathcal{O}_{K}}^{hS^{1}}:\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\to\pi_{*}\operatorname{TP}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\] is given by \(\mathfrak{S}[u^{\pm}]\to A_{\inf}[\sigma^{\pm}]\) which is \(\overline{\phi}\)-linear and sends \(u\) to \(\sigma\), thus it is flat by [3, Lemma 4.30]. We now have a \(G_{K_{\infty}}\)-equivariant isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{u}]\otimes_{\mathfrak{S},\overline{\phi}}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\] for any \(i\). By Lemma 2.8 (2), we obtain the claim. ### The comparison between \(\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) and \(\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\) In this section, for a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), we will compare \(\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) with \(\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\). There exists a Cartesian diagram of \(\mathbb{E}_{\infty}\)-rings: For an \(\mathcal{O}_{K}\)-linear stable \(\infty\)-category \(\mathcal{T}\), the diagram yields morphisms \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TC}^{-}(\mathcal{T}_{k};\mathbb{Z}_{p})\] and \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TC}^{-}(\mathcal{T}_{k};\mathbb{Z}_{p})\stackrel{{\varphi_{k}^{hS^{1}}}}{{\to}}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p}). \tag{2.17}\] **Theorem 2.14**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. 
Then there is a natural number \(n\) such that the morphism (2.17) induces an isomorphism_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\tilde{\phi}}K_{0}\to\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\] _for any \(i\geq n\), where \(\tilde{\phi}\) is the map which sends \(z\) to \(0\) and is the Frobenius on \(W\)._ Proof.: The morphism (2.17) induces a commutative diagram. Firstly, we prove the morphism \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\) induces an isomorphism \[\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\gamma}K_{0}\to\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\] for any \(i\), where \(\gamma\) is \(W\)-linear and sends \(z\) to \(0\). By Theorem 2.9 and [3, Proposition 4.3], there is an \(n\) such that for any \(i\geq n\), the homotopy group \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a Breuil-Kisin module, and \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\) is a finite free \(\mathfrak{S}[\frac{1}{p}]\)-module. By the isomorphism (2.9) and the fact that \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is \(2\)-periodic, we obtain that \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\) is a finite free \(\mathfrak{S}[\frac{1}{p}]\)-module for any \(i\). Thus, on homotopy groups, one obtains an isomorphism \[\pi_{*}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\simeq\pi_{0}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}]}\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\] \[\qquad\qquad\qquad\bigoplus\pi_{1}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}]}\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}], \tag{2.18}\] and we see that \(\pi_{*}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\) is a flat graded \(\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\)-module. Let us prove the morphism \[\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]}\operatorname{TP}(k;\mathbb{Z}_{p})[\frac{1}{p}]\to\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}] \tag{2.19}\] is an equivalence. Both sides of the map (2.19) yield symmetric monoidal functors from \(\operatorname{Cat}^{\operatorname{perf}}_{\infty,\operatorname{sat}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(k;\mathbb{Z}_{p})[\frac{1}{E}]}(\operatorname{Sp})\), thus by Proposition 2.7, we know the map (2.19) is an equivalence. By the isomorphism (2.18) and the fact that \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) is \(2\)-periodic, on homotopy groups, we have an isomorphism \[\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\gamma}K_{0}\to\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}] \tag{2.20}\] for any \(i\). 
Combining the isomorphism (2.9) with the equality \(\gamma\circ\varphi=\tilde{\phi}\) (indeed, \(\gamma\circ\varphi\) acts as the Frobenius on \(W\) and sends \(z\) to \(\gamma(z^{p})=0\), which is precisely \(\tilde{\phi}\)), there is a chain of isomorphisms \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\tilde{\phi}}K_{0}\simeq\big(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\varphi}\mathfrak{S}[\frac{1}{p}]\big)\otimes_{\mathfrak{S}[\frac{1}{p}],\gamma}K_{0}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\gamma}K_{0}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}],\] where the second isomorphism follows from (2.9) and the third is (2.20). Thus we obtain the claim.
The Verdier quotient \(\operatorname{perf}_{(D_{i}\cap X_{i-1})\times Y}(X_{i-1}\times Y)=\operatorname{perf}_{D_{i}\cap X_{i-1}}(X_{i-1})\otimes\operatorname{perf}(Y)\subset\operatorname{perf}(X_{i-1})\otimes\operatorname{perf}(Y)\to\operatorname{perf}(X_{i})\otimes\operatorname{perf}(Y)\) induces a fibre sequence \[L_{K(1)}K((D_{i}\cap X_{i-1})\times Y)\to L_{K(1)}K(X_{i-1}\times Y)\to L_{K(1)}K(X_{i}\times Y). \tag{2.23}\] There is a morphism of fibre sequences \(L_{K(1)}K(D_{i}\cap X_{i-1})\otimes_{L_{K(1)}K(\mathcal{C})}(2.22)\to(2.23)\). Note that \(D_{i}\cap X_{i-1}\) has a good compactification \((D_{i},\Sigma_{l=1}^{i-1}D_{i}\cap D_{l})\). By induction on \(i\), we obtain the claim. The same argument as above proves the claim in the case that \(X\) and \(Y\) are smooth varieties. Thirdly, we prove the claim in the case that \(\mathcal{T}_{1}=\operatorname{perf}(X)\) and \(\mathcal{T}_{2}=\operatorname{perf}(Y)\) for a separated, finite type scheme \(X\) over \(\mathcal{C}\) and a smooth variety \(Y\). Choose a sequence of closed sub-schemes of \(X\): \[\emptyset=Z_{0}\subset Z_{1}\subset Z_{2}\subset\cdots\subset Z_{n}=X\] such that \(Z_{i}\backslash|Z_{i-1}|\) is smooth over \(\mathcal{C}\) for any \(i\). For any variety \(Z\) over \(\mathcal{C}\), since \(\mathcal{C}\) has characteristic \(0\), we have an equivalence \(L_{K(1)}K(Z)\simeq L_{K(1)}HK(Z)\) (see [20]), where \(HK(Z)\) is the homotopy \(K\)-theory of \(Z\). Using Quillen's localization theorem, we have fibre sequences \[L_{K(1)}K(Z_{i-1})\to L_{K(1)}K(Z_{i})\to L_{K(1)}K(Z_{i}\backslash|Z_{i-1}|) \tag{2.24}\] and \[L_{K(1)}K(Z_{i-1}\times Y)\to L_{K(1)}K(Z_{i}\times Y)\to L_{K(1)}K((Z_{i}\backslash|Z_{i-1}|)\times Y), \tag{2.25}\] and there is a morphism of fibre sequences \(L_{K(1)}K(Y)\otimes_{L_{K(1)}K(\mathcal{C})}(2.24)\to(2.25)\). By induction on \(i\), we obtain the claim. The same argument as above shows the claim in the case that \(X\) and \(Y\) are smooth varieties. For a \(\mathcal{C}\)-dg-algebra \(B\), using the Dundas-Goodwillie-McCarthy theorem [13], we have a homotopy pullback square. Since \(\mathcal{C}\) has characteristic \(0\), we have \(L_{K(1)}\operatorname{TC}(B)=L_{K(1)}\operatorname{TC}(H^{0}(B))=0\). Thus the natural map \(L_{K(1)}K(B)\to L_{K(1)}K(H^{0}(B))\) is an equivalence. For affine derived schemes \(X\) and \(Y\) whose truncations are separated and of finite type over \(\mathcal{C}\), the claim holds for \(\mathcal{T}_{1}=\operatorname{perf}(X)\) and \(\mathcal{T}_{2}=\operatorname{perf}(Y)\).
For derived schemes \(X\) and \(Y\) whose truncations are smooth over \(\mathcal{C}\), the claim follows from the case that \(X\) and \(Y\) are affine derived schemes by [10, Theorem A.4]. For \(\mathcal{C}\)-linear categories \(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\) which admit geometric realization \(\mathcal{T}_{1}\hookrightarrow\operatorname{perf}(X)\) and \(\mathcal{T}_{2}\hookrightarrow\operatorname{perf}(Y)\), the claim follows from the fact that \(L_{K(1)}K(\mathcal{T}_{1})\) (resp. \(L_{K(1)}K(\mathcal{T}_{2})\)) is a retract in \(L_{K(1)}K(X)\) (resp. \(L_{K(1)}K(Y)\)) see [1, Proposition 3.4]. We let \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}(\mathcal{O}_{K})_{ \operatorname{geom}}\) denote the \(\infty\)-category of smooth proper \(\mathcal{O}_{K}\)-linear categories \(\mathcal{T}\) such that \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization. Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Using [1, Thm 2.16], one obtain an equivalence \(L_{K(1)}K(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}})\simeq L_{K(1)}K(\mathcal{T }_{\mathcal{C}})\) of ring spectra. We recall some results about topological Hochschild homology theory from [11], [1] and [1]. On homotopy groups, there is an isomorphism \[\pi_{*}\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\simeq A_{ \inf}[\sigma,\sigma^{-1}] \tag{2.26}\] where \(\sigma\) is a generator \(\sigma\in\operatorname{TP}_{2}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\). Let \(\beta\in K_{2}(\mathcal{C})\) be the Bott element. For any \(\mathcal{O}_{\mathcal{C}}\)-linear stable \(\infty\)-category \(\mathcal{D}\), we have an identification \(L_{K(1)}K(\mathcal{D})\simeq L_{K(1)}K(\mathcal{D}_{\mathcal{C}})\) (see [1, Theorem 2.16]). The cyclotomic trace map \(\operatorname{tr}:K(\mathcal{O}_{\mathcal{C}})\to\operatorname{TP}(\mathcal{O }_{\mathcal{C}};\mathbb{Z}_{p})\) sends \(\beta\) to a \(\mathbb{Z}_{p}^{*}\)-multiple of \(([\varepsilon]-1)\sigma\) (see [11] and [15, Theorem 1.3.6]), the cyclotomic trace map induce a morphism \[L_{K(1)}\operatorname{tr}:L_{K(1)}K(\mathcal{D}_{\mathcal{C}})\simeq L_{K(1)}K (\mathcal{D})\to L_{K(1)}\operatorname{TP}(\mathcal{D})\simeq\operatorname{ TP}(\mathcal{D};\mathbb{Z}_{p})[\frac{1}{[\varepsilon]-1}]^{\wedge}_{p}\] for any \(\mathcal{O}_{\mathcal{C}}\)-linear category \(\mathcal{D}\). **Theorem 2.16**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. We assume \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization, then the trace map induces an equivalence_ \[L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{L_{K(1)}K(\mathcal{C})}L_{K(1)} \operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\to L_{K(1)} \operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}). \tag{2.27}\] _In particular, there is a \(G_{K}\)-equivariant isomorphism_ \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}A_{\inf}[ \frac{1}{[\varepsilon]-1}]^{\wedge}_{p}\simeq\pi_{i}\operatorname{TP}( \mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{[\varepsilon ]-1}]^{\wedge}_{p}\] _for any \(i\)._ Proof.: By Proposition 2.12 and Proposition 2.15, both sides of the map (2.27) yield symmetric monoidal functors from \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}( \mathcal{O}_{K})_{\operatorname{geom}}\) to \(\operatorname{Mod}_{L_{K(1)}\operatorname{TP}(\mathcal{O}_{\mathcal{C}}; \mathbb{Z}_{p})}(\operatorname{Sp})\), thus the map (2.27) is an equivalence. 
On homotopy groups, the morphism \(L_{K(1)}\operatorname{tr}:\pi_{*}L_{K(1)}K(\mathcal{C})\to\pi_{*}L_{K(1)}\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\) is given by \(\mathbb{Z}_{p}[\beta^{\pm}]\to A_{\inf}[\frac{1}{[\varepsilon]-1}]^{\wedge}_{p}[\sigma^{\pm}]\), which sends \(\beta\) to \(([\varepsilon]-1)\sigma\) and is flat. Thus we obtain the claim.

### Non-commutative version of Bhatt-Morrow-Scholze's comparison theorem

In this section, as a summary of the previous several sections, we prove a non-commutative version of Theorem 2.3. **Theorem 2.17**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then there is an \(n\) such that the following hold:_ 1. _For any \(i\geq n\), \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a Breuil-Kisin module._ 1'. _For any \(i\geq n\), \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) is a Breuil-Kisin module of finite \(E\)-height._ 2. _(K(1)-local K-theory comparison) Assume \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization. For any \(i\geq n\), after scalar extension along \(\overline{\phi}:\mathfrak{S}\to A_{\inf}\), one recovers the K(1)-local K-theory of the generic fiber:_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\overline{\phi}}A_{\inf}[\frac{1}{\mu}]_{p}^{\wedge}\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}A_{\inf}[\frac{1}{\mu}]_{p}^{\wedge}\] 3. _(topological periodic homology theory comparison) For any \(i\geq n\), after scalar extension along the map \(\tilde{\phi}:\mathfrak{S}\to W\) which is the Frobenius on \(W\) and sends \(z\) to \(0\), one recovers the topological periodic homology theory of the special fiber:_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\mathfrak{S}[\frac{1}{p}],\tilde{\phi}}K_{0}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}].\] Proof.: Claims (1) and (1') follow from Theorem 2.9. Claim (2) follows from Theorem 2.13 and Theorem 2.16. Claim (3) follows from Theorem 2.14. **Corollary 2.18**.: _For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\) which admits a geometric realization and an integer \(i\), there is an isomorphism of \(B_{\operatorname{crys}}\)-modules:_ \[\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\otimes_{W}B_{\operatorname{crys}}\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}B_{\operatorname{crys}}.\] Proof.: The claim follows from Theorem 2.17 and Lemma 2.2.

## 3. Breuil-Kisin module and semi-stable representation

### Breuil-Kisin \(G_{K}\)-module

Let us recall the work of H. Gao on Breuil-Kisin \(G_{K}\)-modules [1]. Look at the following diagram: Let us denote by \(\xi\) a generator of the kernel of the map \(A_{\inf}\to\mathcal{O}_{\mathcal{C}}\). We note that \(\phi\) sends \(E\) to a \(\mathbb{Z}_{p}^{*}\)-multiple of \(\xi\). Let \((\mathfrak{M},\varphi_{\mathfrak{M}})\) be a Breuil-Kisin module. We write \(\widehat{\mathfrak{M}}\) for \(\mathfrak{M}\otimes_{\mathfrak{S},\phi}A_{\inf}\).
After tensoring \(-\otimes_{\mathfrak{S},\phi}A_{\inf}\), the \(\mathfrak{S}[\frac{1}{E}]\)-linear isomorphism \(\mathfrak{M}\otimes_{\mathfrak{S},\varphi}\mathfrak{S}[\frac{1}{E}]\overset{ \varphi_{\mathfrak{M}}}{\simeq}\mathfrak{M}[\frac{1}{E}]\) becomes the isomorphism \[\widehat{\varphi}_{\mathfrak{M}}:\widehat{\mathfrak{M}}\otimes_{A_{\inf}, \varphi_{A_{\inf}}}A_{\inf}[\frac{1}{\xi}]\simeq\widehat{\mathfrak{M}}[\frac{ 1}{\xi}]. \tag{3.1}\] **Definition 3.1** ([1, 1]).: Let \((\mathfrak{M},\varphi_{\mathfrak{M}})\) be a finite free Breuil-Kisin module of finite \(E\)-height. We call \((\mathfrak{M},\varphi_{\mathfrak{M}})\) Breuil-Kisin \(G_{K}\)-module if it satisfies the following conditions. 1. There is a continuous \(A_{\inf}\)-semi-linear \(G_{K}\)-action on \(\widehat{\mathfrak{M}}=\mathfrak{M}\otimes_{\mathfrak{S},\phi}A_{\inf}\). 2. \(G_{K}\) commutes with \(\widehat{\varphi}_{\mathfrak{M}}\). 3. \(\mathfrak{M}\subset\widehat{\mathfrak{M}}^{G_{K_{\infty}}}\) via the embedding \(\mathfrak{M}\subset\widehat{\mathfrak{M}}\). 4. \(\mathfrak{M}/z\mathfrak{M}\subset\big{(}\widehat{\mathfrak{M}}\otimes_{A_{\inf}}W( \overline{k})\big{)}^{G_{K}}\) via the embedding \(\mathfrak{M}/z\mathfrak{M}\subset\widehat{\mathfrak{M}}\otimes_{A_{\inf}}W( \overline{k})\). The \(G_{K}\)-equivariant surjective map of rings \(\mathcal{O}_{\mathcal{C}}\to\overline{k}\) induces a morphism of ring spectra \[\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}) \to\operatorname{TP}(\mathcal{T}_{\overline{k}};\mathbb{Z}_{p}). \tag{3.2}\] Combine an equivalence \(\operatorname{THH}(\mathcal{T}_{\overline{k}};\mathbb{Z}_{p})\simeq \operatorname{THH}(\overline{k};\mathbb{Z}_{p})\otimes_{\operatorname{THH}( \mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})}\operatorname{THH}(\mathcal{T}_{ \mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) with Proposition 2.12, we see the following Proposition. **Proposition 3.2**.: _For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), \(\operatorname{THH}(\mathcal{T}_{\overline{k}};\mathbb{Z}_{p})\) is perfect in \(\operatorname{Mod}_{\operatorname{THH}(\overline{k};\mathbb{Z}_{p})}( \operatorname{Sp}^{BS^{1}})\)._ Let us recall the calculation of \(\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\)[1]. On homotopy groups, \[\varphi_{\mathcal{O}_{\mathcal{C}}}^{hS^{1}}:\pi_{*}\operatorname{TC}^{-}( \mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\to\pi_{*}\operatorname{TP}(\mathcal{ O}_{\mathcal{C}};\mathbb{Z}_{p}) \tag{3.3}\] is given by \[A_{\inf}[u,v]/(uv-\xi)\to A_{\inf}[\sigma^{\pm}]\] which is \(\varphi_{A_{\inf}}\)-linear and sends \(u\) to \(\sigma\) and \(v\) to \(\varphi_{A_{\inf}}(\xi)\sigma^{-1}\), where \(u\) and \(\sigma\) have degree \(2\) and \(v\) has degree \(-2\). Similarly, on homotopy groups, \[\varphi_{\overline{k}}^{hS^{1}}:\pi_{*}\operatorname{TC}^{-}(\overline{k}; \mathbb{Z}_{p})\to\pi_{*}\operatorname{TP}(\overline{k};\mathbb{Z}_{p}) \tag{3.4}\] is given by \[W(\overline{k})[u,v]/(uv-p)\to W(\overline{k})[\sigma^{\pm}]\] which is \(\varphi_{W(\overline{k})}\)-linear and sends \(u\) to \(\sigma\) and \(v\) to \(p\sigma^{-1}\). **Theorem 3.3**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. The morphism (3.2) induces an isomorphism_ \[\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{ p})[\frac{1}{p}]\otimes_{A_{\inf}[\frac{1}{p}]}W(\overline{k})[\frac{1}{p}] \simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\overline{k}};\mathbb{Z}_{p})[ \frac{1}{p}]\] _for any \(i\). 
In particular, \(\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{ p})[\frac{1}{p}]\otimes_{A_{\inf}[\frac{1}{p}]}W(\overline{k})[\frac{1}{p}]\) is fixed by \(G_{K^{ur}}\)._ Proof.: The morphism (3.2) yields a morphism \[\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[ \frac{1}{p}]\otimes_{\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{ p})[\frac{1}{p}]}\operatorname{TP}(\overline{k};\mathbb{Z}_{p})[\frac{1}{p}] \to\operatorname{TP}(\mathcal{T}_{\overline{k}};\mathbb{Z}_{p})[\frac{1}{p}]. \tag{3.5}\] By Proposition 2.12 and Proposition 3.2, both sides of the map (3.2) yield symmetric monoidal functors from \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}( \mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\overline{k};\mathbb{Z}_{p})}( \operatorname{Sp})\), and the map (3.5) yields a symmetric monoidal natural transformation between them. Thus the morphism (3.5) is an equivalence. By Theorem 2.9 and [1, Proposition 4.3], there is \(n\) such that for any \(i\geq n\), the homotopy group \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a Breuil-Kisin module, and \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1 }{p}]\) is a finite free \(\mathfrak{S}[\frac{1}{p}]\)-module. By Theorem 2.13 and the fact that \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) is \(2\)-periodic, \(\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[ \frac{1}{p}]\) is a finite free \(A_{\inf}[\frac{1}{p}]\)-module for any \(i\). Thus \(\pi_{*}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[ \frac{1}{p}]\) is a flat graded \(\pi_{*}\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{p}]\)-module and we obtain the claim. **Lemma 3.4**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then the followings hold:_ 1. \(\pi_{*}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{ Z}_{p})\) _is a finitely generated_ \(\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}}/\mathbb{S}[z];\mathbb{Z} _{p})\)_-module, thus_ \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{ Z}_{p})\) _is a finitely generated_ \(A_{\inf}\)_-module for any_ \(i\)_._ 2. _There is a natural number_ \(n\) _satisfying that for any_ \(j\geq n\)_,_ \(\pi_{j}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}; \mathbb{Z}_{p})\xrightarrow{u\text{:}}\pi_{j+2}\operatorname{TC}^{-}( \mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) _is an isomorphism._ Proof.: Claim (1) directly follows from Proposition 2.12. For any \(j\geq 0\), by the calculation of \(\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\) (see [1, Proposition 11.10]), we know \(\pi_{j}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p}) \xrightarrow{u\text{:}}\pi_{j+2}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C }};\mathbb{Z}_{p})\) is an isomorphism. By the claim (1), we see the claim (2). **Theorem 3.5**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. 
There is a natural number \(n\) such that the cyclotomic Frobenius morphism \(\varphi_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}^{hS^{1}}:\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) induces a \(G_{K}\)-equivariant isomorphism_ \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\otimes_{A_{\inf},\varphi_{A_{\inf}}}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}) \tag{3.6}\] _for any \(i\geq n\)._ Proof.: The cyclotomic Frobenius map \(\varphi_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}^{hS^{1}}:\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) yields a morphism \[\underline{\varphi}_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}^{hS^{1}}:\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{u}]\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}).\] The morphism \(\underline{\varphi}_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}^{hS^{1}}\) induces a morphism of \(\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\)-modules: \[\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{u}]\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{u}],\underline{\varphi}_{\mathcal{O}_{\mathcal{C}}}^{hS^{1}}}\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}). \tag{3.7}\] By Proposition 2.12, both sides of the map (3.7) yield symmetric monoidal functors from \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})}(\operatorname{Sp})\), thus the map (3.7) is an equivalence. On homotopy groups, \(\underline{\varphi}_{\mathcal{O}_{\mathcal{C}}}^{hS^{1}}:\pi_{*}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{u}]\to\pi_{*}\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\) is given by the \(\varphi_{A_{\inf}}\)-linear map \[A_{\inf}[u^{\pm}]\to A_{\inf}[\sigma^{\pm}],\] and thus is an isomorphism. Thus we obtain the isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{u}]\otimes_{A_{\inf},\varphi_{A_{\inf}}}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\] for any \(i\). By Lemma 3.4, there is a natural number \(n\) such that \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{u}]\) for any \(i\geq n\). For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), we will prove that the free Breuil-Kisin module \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) is a Breuil-Kisin \(G_{K}\)-module. **Theorem 3.6**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. There is an \(n\) such that for any \(i\geq n\), the following hold:_ 1.
_There is a continuous_ \(A_{\inf}\)_-semi-linear_ \(G_{K}\)_-action on_ \(\pi_{i}\operatorname{TC}^{-}(\widehat{\mathcal{T}/\mathbb{S}[z]};\mathbb{Z}_{p })^{\vee}=\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{ p})^{\vee}\otimes_{\mathfrak{S},\phi}A_{\inf}\)__ 2. \(G_{K}\) _commutes with_ \(\widehat{\varphi}_{\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z]; \mathbb{Z}_{p})^{\vee}}\)_._ 3. \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee} \subset\big{(}\pi_{i}\operatorname{TC}^{-}(\widehat{\mathcal{T}/\mathbb{S}[ z]};\mathbb{Z}_{p})^{\vee}\big{)}^{G_{K_{\infty}}}\) _via the embedding_ \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee} \subset\pi_{i}\operatorname{TC}^{-}(\widehat{\mathcal{T}/\mathbb{S}[z]}; \mathbb{Z}_{p})^{\vee}\)_._ 4. \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{ Z}_{p})^{\vee}[\frac{1}{p}]\otimes_{A_{\inf}[\frac{1}{p}]}W(\overline{k})[ \frac{1}{p}]\) _is fixed by_ \(G_{K^{ur}}\)_._ _In particular, \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) is a Breuil-Kisin \(G_{K}\)-module._ Proof.: After localization by \(\xi\in A_{\inf}\simeq\pi_{0}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}}; \mathbb{Z}_{p})\), the morphism \(\operatorname{can}_{\mathcal{T}}\) induces a morphism of \(\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{\xi}]\)-module: \[\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p} )[\frac{1}{\xi}]\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}}; \mathbb{Z}_{p})[\frac{1}{\xi}],\operatorname{can}_{\mathcal{O}_{\mathcal{C}}} }\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{\xi}] \to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[ \frac{1}{\xi}]. \tag{3.8}\] Both sides of the map (3.8) yield symmetric monoidal functors from \(\operatorname{Cat}^{\operatorname{perf}}_{\infty,\operatorname{sat}}( \mathcal{O}_{K})\) to \(\operatorname{Mod}_{\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p} )[\frac{1}{\xi}]}(\operatorname{Sp})\), and the map (3.8) yields a symmetric monoidal natural transformation between them. Thus the morphism (3.8) is an equivalence. Note that the morphism \[\operatorname{can}_{\mathcal{O}_{\mathcal{C}}}:\pi_{*}\operatorname{TC}^{-}( \mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{\xi}]\to\pi_{*} \operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{\xi}]\] is an isomorphism (see [1, Proposition 11.10]). This yields a \(G_{K}\)-equivariant isomorphism \[\operatorname{can}_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}[\frac{1}{\xi}]: \pi_{*}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{ Z}_{p})[\frac{1}{\xi}]\simeq\pi_{*}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{ \mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{\xi}]. \tag{3.9}\] Combined with (3.6), this yields a \(G_{K}\)-equivariant isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{ Z}_{p})\otimes_{A_{\inf},\varphi_{A_{\inf}}}A_{\inf}[\frac{1}{\xi}]\stackrel{{ \eqref{eq:G_K}}}{{\simeq}}\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_ {\mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{\xi}]\stackrel{{\eqref{eq:G_ K}}}{{\simeq}}\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{ \mathcal{C}}};\mathbb{Z}_{p})[\frac{1}{\xi}] \tag{3.10}\] for any \(i\geq n\). 
Since \(\varphi_{A_{\inf}}\) is an isomorphism, by Theorem 2.13 and Theorem 3.5, we have a \(G_{K_{\infty}}\)-equivariant isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}), \tag{3.11}\] and we see that the isomorphism (3.10) is the same as \(\widehat{\varphi}_{\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})}\). \(G_{K}\) naturally acts on \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\), and this action is \(\pi_{0}\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\simeq A_{\inf}\)-semi-linear. Consider the dual action on \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}:=\operatorname{Hom}_{A_{\inf}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}),A_{\inf})\). Since the isomorphism (3.10) is \(G_{K}\)-equivariant, \(G_{K}\) commutes with \(\widehat{\varphi}_{\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}}\), and we obtain claims (1) and (2). Since \(\phi:\mathfrak{S}\to A_{\inf}\) is flat, the dual of (3.11) becomes the following \(G_{K_{\infty}}\)-equivariant isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}. \tag{3.12}\] Since \(g\in G_{K_{\infty}}\) acts by \(1\otimes g\) on \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\phi}A_{\inf}\), we obtain claim (3). Since \(\varphi_{A_{\inf}}:A_{\inf}\to A_{\inf}\) is flat, we have a \(G_{K}\)-equivariant isomorphism \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\otimes_{A_{\inf},\varphi_{A_{\inf}}}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee} \tag{3.13}\] by Theorem 3.5. Combined with Theorem 3.3, we see that \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}[\frac{1}{p}]\otimes_{A_{\inf}[\frac{1}{p}]}W(\overline{k})[\frac{1}{p}]\) is fixed by \(G_{K^{ur}}\). Since \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\) is a finite free \(A_{\inf}\)-module, there is a natural inclusion \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\hookrightarrow\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}[\frac{1}{p}]\) and it induces a natural inclusion \[\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\otimes_{A_{\inf}}W(\overline{k})\hookrightarrow\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}[\frac{1}{p}]\otimes_{A_{\inf}[\frac{1}{p}]}W(\overline{k})[\frac{1}{p}].\] Thus \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\otimes_{A_{\inf}}W(\overline{k})\) is fixed by \(G_{K^{ur}}\). The last claim follows from claims (1)-(4) and [1, Lemma 7.1.2].
For a Breuil-Kisin \((\mathfrak{M},\varphi_{\mathfrak{M}})\), let \(\overline{\varphi_{\mathfrak{M}}}\) denote the composite \[\overline{\varphi_{\mathfrak{M}}}:\mathfrak{M}\otimes_{\mathfrak{S}}W( \mathcal{C}^{\flat})\stackrel{{\varphi^{\prime}_{\mathfrak{M}} \otimes\varphi_{\mathcal{C}}}}{{\to}}\mathfrak{M}[\frac{1}{E}]\otimes_{ \mathfrak{S}[\frac{1}{E}]}W(\mathcal{C}^{\flat})=\mathfrak{M}\otimes_{ \mathfrak{S}}W(\mathcal{C}^{\flat})\] where \(\varphi^{\prime}_{\mathfrak{M}}\) is the composite \(\mathfrak{M}\stackrel{{\varphi}}{{\to}}\mathfrak{M}\otimes_{ \mathfrak{S},\varphi}\mathfrak{S}[\frac{1}{E}]\stackrel{{\varphi_{ \mathfrak{M}}}}{{\to}}\mathfrak{M}[\frac{1}{E}]\), and \(\varphi_{\mathcal{C}}\) is the Frobenius of \(W(\mathcal{C}^{\flat})\). Let us define a \(\mathbb{Z}_{p}\)-module: \[T_{A_{\inf}}(\mathfrak{M}):=\big{(}\widehat{\mathfrak{M}}\otimes_{A_{\inf}}W (\mathcal{C}^{\flat})\big{)}^{\overline{\varphi_{\mathfrak{M}}}=1}.\] **Theorem 3.7**.: 1. _For a smooth proper_ \(\mathcal{O}_{K}\)_-linear category_ \(\mathcal{T}\) _and a sufficiently large_ \(i\)_, the_ \(\mathbb{Z}_{p}[G_{K}]\)_-module_ \(T_{A_{\inf}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{ Z}_{p})^{\vee})\) _is a_ \(\mathbb{Z}_{p}\)_-lattice of a semi-stable representation._ 2. _We assume_ \(\mathcal{T}_{\mathcal{C}}\) _admits a geometric realization. Then there is a_ \(G_{K}\)_-equivariant isomorphism_ \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee}\simeq T_{A_{\inf}}(\pi_{i} \operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}).\] _In particular,_ \(\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_ {p}\) _is a semi-stable representation._ Proof.: (1) By Theorem [1, Theorem 7.1.7], we obtain the claim. (2) Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. We assume \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization. By the same discussion as the proof of Theorem 2.16, we have an equivalence \[L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{L_{K(1)}K(\mathcal{C})}L_{K(1)}( \operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\frac{1}{ \xi}])\to L_{K(1)}(\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C} }};\mathbb{Z}_{p})[\frac{1}{\xi}]). \tag{3.14}\] By the functionality of cyclotomic Frobenius \(\varphi_{-}^{hS^{1}}:\operatorname{TC}^{-}(-;\mathbb{Z}_{p})\to\operatorname{TP}(-; \mathbb{Z}_{p})\), we have the following commutative diagram \[L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{L_{K(1)}K(\mathcal{C} )}L_{K(1)}(\operatorname{TC}^{-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[ \tfrac{1}{\xi}])\xrightarrow[\simeq]{\operatorname{id}\otimes\varphi_{ \mathcal{O}_{\mathcal{C}}}^{hS^{1}}}L_{K(1)}(\operatorname{TC}^{-}(\mathcal{T}_ {\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\tfrac{1}{\xi}])\] \[L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{L_{K(1)}K(\mathcal{C} )}L_{K(1)}(\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[ \tfrac{1}{\xi}])\xrightarrow[\simeq]{\simeq}L_{K(1)}(\operatorname{TP}( \mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\tfrac{1}{\xi}])\] We note \(\xi|\mu\) in \(A_{\inf}\). 
Since \(\varphi_{\mathcal{O}_{\mathcal{C}}}^{hS^{1}}:\pi_{0}L_{K(1)}\operatorname{TC}^ {-}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\tfrac{1}{\xi}]\to\pi_{0}L_{K(1) }\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})[\tfrac{1}{\xi}]\) is given by the Frobenius \(\varphi_{A_{\inf}}:A_{\inf}[\tfrac{1}{\mu}]_{p}^{\wedge}\to A_{\inf}[\tfrac{1} {\varphi(\mu)}]_{p}^{\wedge}\), on homotopy groups, we have \(G_{K}\)-equivariant diagram \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}A_{\inf}[ \tfrac{1}{\mu}]_{p}^{\wedge}\xrightarrow[\simeq]{\simeq}\pi_{i}\operatorname {TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[\tfrac{1}{\mu} ]_{p}^{\wedge}\] \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}A _{\inf}[\tfrac{1}{\varphi(\mu)}]_{p}^{\wedge}\xrightarrow[\simeq]{\simeq}\pi_ {i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[ \tfrac{1}{\varphi(\mu)}]_{p}^{\wedge}\] After the base change along \(A_{\inf}[\tfrac{1}{\mu}]_{p}^{\wedge}\to W(\mathcal{C}^{\flat})\), the above diagram becomes the following \(G_{K}\)-equivariant diagram. (3.15) Thus the isomoprhism \(\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}W(\mathcal{ C}^{\flat})\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{ \mathcal{C}}};\mathbb{Z}_{p})\otimes_{A_{\inf}}W(\mathcal{C}^{\flat})\) is Frobenius-equivariant. Since \(\mathbb{Z}_{p}\to A_{\inf}\) and \(A_{\inf}\to W(\mathcal{C}^{\flat})\) are flat and compatible with Frobenius automorphisms, the dual of the isomorphism becomes the Frobenius-equivariant isomorphism \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee}\otimes_{\mathbb{Z}_{p}}W( \mathcal{C}^{\flat})\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{ O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\otimes_{A_{\inf}}W(\mathcal{C}^{ \flat})\] and we obtain the claim. _Remark 3.8_.: Similarly, we can obtain that there is an isomorphism of \(\mathbb{Z}_{p}[G_{K}]\)-modules: \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee\vee}\simeq T_{A_{\inf}}( \pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee \vee}).\] ## 4. \((\varphi,\hat{G})\)-modules and crystalline representations In this section, we will prove Theorem 1.10. We would like to thank H.Gao [1] for sharing the strategy and key constructions to prove Theorem 1.10. ### Breuil-Kisin \(G_{k}\)-modules and \((\varphi,\hat{G})\)-modules Let us recall \((\varphi,\hat{G})\)-module from [10]. Let \(L:=\bigcup_{n=1}^{\infty}K_{\infty}(\zeta_{n})\), \(\hat{G}:=\operatorname{Gal}(L/K)\) and \(H_{K}:=\operatorname{Gal}(L/K_{\infty})\). Du-Liu construct \(\mathfrak{S}\)-algebras \(\mathfrak{S}^{(2)}\) and \(\mathfrak{S}^{(2)}_{\operatorname{st}}\) as the following [10, section 2.1 and 2.3]. We set \(\mathfrak{S}^{\hat{\otimes}2}:=\mathfrak{S}[[y-z]]=W[[z,y-z]]\), \(\mathfrak{S}^{\hat{\otimes}2}\) is \(\mathfrak{S}\otimes_{\mathbb{Z}_{p}}\mathfrak{S}\)-algebra via \(z\otimes 1\mapsto z\), \(1\otimes z\mapsto y=(y-z)+z\), in this way, one can extend Frobenius action on \(\mathfrak{S}\) to \(\mathfrak{S}^{\hat{\otimes}2}\) which is Frobenius on \(W\) and sends \(z\) to \(z^{p}\) and \(y-z\) to \(y^{p}-z^{p}\). Let \(\mathfrak{S}^{(2)}\) be \(\mathfrak{S}[[y-z]]\{\frac{y-z}{E}\}_{\delta}^{\wedge}\), where \(\{-\}_{\delta}^{\wedge}\) means freely adjoining elements in the category of \((p,E)\)-completed \(\delta\)-\(\mathfrak{S}\)-algebras [10, section 2.1 and 4.1]. 
\(\mathfrak{S}^{(2)}\) is a \(\mathfrak{S}\)-algebra via \(u\mapsto u\), and the structure map is flat (see [10, Proposition 2.2.7]), and is a sub-\(\mathfrak{S}\)-algebra of \(A_{\inf}\), where we regard \(A_{\inf}\) is as a \(\mathfrak{S}\)-algebra via \(\phi\) (see [10, section 2.4]). Let us recall the definition of \(\mathfrak{S}^{(2)}_{\operatorname{st}}\) from [10, section 2.3]. Define a Frobenius action on \(W[[z,\mathfrak{y}]]\) which sends \(x\) to \(x^{p}\) and \(\mathfrak{y}\) to \((1+\mathfrak{y})^{p}-1\) and set \(\mathfrak{w}=\frac{\mathfrak{y}}{E}\). Let \(\mathfrak{S}^{(2)}_{\operatorname{st}}=\mathfrak{S}[[z,\mathfrak{y}]]\{ \mathfrak{w}\}_{\delta}^{\wedge}\), and it is a sub-\(\mathfrak{S}\)-algebra of \(A_{\inf}\), where we regard \(A_{\inf}\) is as a \(\mathfrak{S}\)-algebra via \(\phi\) (see [10, section 2.4]). There is a natural inclusion of \(\mathfrak{S}\)-algebra \(W[[z,y-z]]\subset W[[z,\mathfrak{y}]]\) which sends \(y\) to \(z(\mathfrak{y}+1)\), it is \(\delta\)-ring map, by the constructions we have an inclusion of sub-\(\mathfrak{S}\)-algebras \(\mathfrak{S}^{(2)}\hookrightarrow\mathfrak{S}^{(2)}_{\operatorname{st}}\) of \(A_{\inf}\). Let us denote \(\theta_{0}:\mathfrak{S}\to\mathfrak{S}^{(2)}\mid z\mapsto z\), and denote \(\theta_{1}:\mathfrak{S}\to\mathfrak{S}^{(2)}\mid z\mapsto y\). We note that there is a commutative diagram (see [10, Corollary 2.4.5]): where \(\iota\) sends \(z\) to \([\pi^{\flat}]\) and \(y\) to \([\varepsilon]\cdot[\pi^{\flat}]\). We regard \(\mathfrak{S}^{(2)}\) and \(\mathfrak{S}^{(2)}_{\operatorname{st}}\) as \(\mathfrak{S}\)-algebras via the diagram unless otherwise stated. **Definition 4.1** ([10, Definition 3.3.2]).: Let \((\mathfrak{M},\varphi_{\mathfrak{M}})\) be a finite free Breuil-Kisin module of finite \(E\)-height. We call \((\mathfrak{M},\varphi_{\mathfrak{M}})\)\((\varphi,\hat{G})\)-module if it satisfies the following conditions. 1. There is a continuous \(\mathfrak{S}^{(2)}_{\operatorname{st}}\)-semi-linear \(\hat{G}\)-action on \(\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}^{(2)}_{\operatorname{st}}\). 2. \(\hat{G}\) commutes with \(\varphi\) on \(\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}^{(2)}_{\operatorname{st}}\). 3. \(\mathfrak{M}\subset(\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}^{(2)}_{ \operatorname{st}})^{H_{K}}\). 4. \(\hat{G}\) acts on \((\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}^{(2)}_{\operatorname{st}}) \otimes_{\mathfrak{S}^{(2)}_{\operatorname{st}}}W(k)\) trivially. _Remark 4.2_.: For a Breuil-Kisin \(G_{K}\)-module \((\mathfrak{M},\varphi_{\mathfrak{M}},G_{K}\curvearrowright\mathfrak{M} \otimes_{\mathfrak{S},\phi}A_{\inf})\), the \(p\)-adic representation \(T_{A_{\inf}}(\mathfrak{M})\) is a \(\mathbb{Z}_{p}\)-lattice of semi-stable representation of non-negative Hodge-Tate weight [11, Theorem 7.1.7]. Du-Liu proved that the sub-modules \(\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}^{(2)}_{\operatorname{st}}\subset \mathfrak{M}\otimes_{\mathfrak{S},\phi}A_{\inf}\subset T_{A_{\inf}}(\mathfrak{M })\otimes_{\mathbb{Z}_{p}}A_{\inf}\) are \(G_{K}\)-stable [10, Theorem 3.3.3] (see also [11, Proposition 3.1.3]). By Definition 3.1 (3), \(G_{L}=\operatorname{Gal}(\overline{K}/L)(\subset G_{K_{\infty}})\) acts on \(\mathfrak{M}\subset\mathfrak{M}\otimes_{\mathfrak{S},\phi}A_{\inf}\) trivially, thus \(G_{K}\)-action on \(\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}_{\mathrm{st}}^{(2)}\) factors through \(\hat{G}\). 
The \(\hat{G}\)-action on \(\mathfrak{M}\otimes_{\mathfrak{S}}\mathfrak{S}_{\mathrm{st}}^{(2)}\) induces the \((\varphi,\hat{G})\)-module structure on \((\mathfrak{M},\varphi_{\mathfrak{M}})\) [12, Theorem 3.3.3].

### The comparison theorem between \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) and \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\)

We regard \(\mathcal{O}_{K}\) as an \(\mathbb{S}[z_{0},z_{1}]\)-algebra via the map \(z_{0},z_{1}\mapsto\pi\). In [11], Liu-Wang revealed the structure of \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\): there is an isomorphism \[\pi_{0}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\mathfrak{S}^{(2)},\] where we identify \(z_{0}=z\) and \(z_{1}=y\) (see [11, Theorem 1.3]). In this section, we will compare \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) with \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) for an \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\). We note that the functor \[\operatorname{TP}(-/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}):\operatorname{Cat}_{\infty}^{\operatorname{perf}}(\mathcal{O}_{K})\overset{\operatorname{THH}(-/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})}{\to}\operatorname{Sp}^{BS^{1}}\overset{(-)^{tS^{1}}}{\to}\operatorname{Sp}\] is lax symmetric monoidal, \(\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) admits an \(\mathbb{E}_{\infty}\)-ring structure, and \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) also admits an \(\mathbb{E}_{\infty}\)-ring structure. For \(i=0,1\), the map of ring spectra \(\mathbb{S}[z]\overset{z\mapsto z_{i}}{\to}\mathbb{S}[z_{0},z_{1}]\) induces a morphism of \(\mathbb{E}_{\infty}\)-ring spectra \[e_{i}:\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\overset{z\mapsto z_{i}}{\to}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}). \tag{4.1}\] For \(\heartsuit\in\{\operatorname{THH},\operatorname{TP},\operatorname{TC}^{-}\}\) and an \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), the left unit \(e_{0,\mathcal{T}}\) and right unit \(e_{1,\mathcal{T}}\) are the maps \[\heartsuit(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\heartsuit(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\] induced by \(z\mapsto z_{0}\) and \(z\mapsto z_{1}\) respectively. Let us denote by \(\operatorname{THH}(\mathbb{S}[z_{0},z_{1}]/_{0}\mathbb{S}[z])\) (resp. \(\operatorname{THH}(\mathbb{S}[z_{0},z_{1}]/_{1}\mathbb{S}[z])\)) the relative topological Hochschild homology of \(\mathbb{S}[z]\to\mathbb{S}[z_{0},z_{1}]\mid z\mapsto z_{0}\) (resp. \(\mathbb{S}[z]\to\mathbb{S}[z_{0},z_{1}]\mid z\mapsto z_{1}\)). There is a commutative diagram in \(\operatorname{Mod}_{\operatorname{THH}(\mathbb{S}[z])}(\operatorname{Sp}^{BS^{1}})\): Since the bottom-left square and the bottom square are pushout squares (see [11, Lemma 2.3]), and since the right square is a pushout square, the top-right square is also a pushout square. Thus we have the following equivalence (transitivity property of relative THH) \[\operatorname{THH}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\operatorname{THH}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\operatorname{THH}(\mathbb{S}[z_{0},z_{1}]/\mathbb{S}[z];\mathbb{Z}_{p})}\mathbb{S}[z_{0},z_{1}]_{p}^{\wedge}.
\tag{4.2}\] We also have the following commutative diagram: Applying the equivalence (4.2) to \(\mathcal{T}=\operatorname{perf}(\mathcal{O}_{K})\), we see that the bottom and total squares are pushout squares. We know that the top square is a pushout square. Thus we have the following \(S^{1}\)-equivariant equivalence \[\operatorname{THH}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\operatorname{THH}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}). \tag{4.3}\] Notice that there is an exact symmetric monoidal \(\infty\)-functor \[-\otimes_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p}),e_{i}}\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\] from \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\) to \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). If an \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\) is smooth and proper, \(\operatorname{THH}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is dualizable and perfect in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\). Since the functor \(-\otimes_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})}\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) is exact symmetric monoidal, we obtain the following lemma: **Lemma 4.3**.: _For a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), \(\operatorname{THH}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) is dualizable and perfect in \(\operatorname{Mod}_{\operatorname{THH}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})}(\operatorname{Sp}^{BS^{1}})\)._ **Proposition 4.4**.: _Let \(\mathcal{T}_{1},\mathcal{T}_{2}\) be smooth proper \(\mathcal{O}_{K}\)-linear categories. There are equivalences:_ \[\operatorname{TC}^{-}(\mathcal{T}_{1}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})}\operatorname{TC}^{-}(\mathcal{T}_{2}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\operatorname{TC}^{-}(\mathcal{T}_{1}\otimes_{\mathcal{O}_{K}}\mathcal{T}_{2}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}),\] \[\operatorname{TP}(\mathcal{T}_{1}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\otimes_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})}\operatorname{TP}(\mathcal{T}_{2}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\operatorname{TP}(\mathcal{T}_{1}\otimes_{\mathcal{O}_{K}}\mathcal{T}_{2}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}).\] Proof.: By Lemma 4.3, one can prove this using the same argument as in the proof of Proposition 2.6. **Proposition 4.5**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category.
There are equivalences_ \[\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p}),e_{i}}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\] \[\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p}),e_{i}}\operatorname{TC}^{-}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\simeq\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\] _for \(i=0,1\)._ Proof.: For \(i=0,1\) and \(\heartsuit\in\{\operatorname{TC}^{-},\operatorname{TP}\}\), there is a commutative diagram: This diagram induces a morphism of \(\heartsuit(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}])\)-module spectra \[\heartsuit(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\heartsuit(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p}),e_{i}}\heartsuit(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\to\heartsuit(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}). \tag{4.4}\] By Proposition 2.6 and Proposition 4.4, both sides of the map (4.4) yield symmetric monoidal functors from \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}(\mathcal{O}_{K})\) to \(\operatorname{Mod}_{\heartsuit(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})}(\operatorname{Sp})\). Since every smooth proper \(\mathcal{O}_{K}\)-linear category is dualizable in \(\operatorname{Cat}_{\infty,\operatorname{sat}}^{\operatorname{perf}}\), by [1, Proposition 4.6] we obtain the claim. There is an equivalence of \(\mathbb{E}_{\infty}\)-ring spectra \(\operatorname{ex}:\mathbb{S}[z_{0},z_{1}]\xrightarrow{\simeq}\mathbb{S}[z_{0},z_{1}]\mid z_{0}\mapsto z_{1},z_{1}\mapsto z_{0}\), and it induces a commutative diagram and an isomorphism of \(\mathfrak{S}\)-algebras (see also [1, section 4]) \[\mathfrak{S}^{(2)}\otimes_{\theta_{0},\mathfrak{S}}\mathfrak{S}\simeq\mathfrak{S}^{(2)}\otimes_{\theta_{1},\mathfrak{S}}\mathfrak{S}. \tag{4.5}\] By [1, Corollary 3.7], we see that \(\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) is \(2\)-periodic and there is an isomorphism \(\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})=\mathfrak{S}^{(2)}[\sigma,\sigma^{-1}]\) by choosing a generator \(\sigma\in\pi_{2}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\). For \(i=0,1\), on homotopy groups, the morphism \[\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z];\mathbb{Z}_{p})\xrightarrow{e_{i}}\pi_{*}\operatorname{TP}(\mathcal{O}_{K}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\] is given by \[\mathfrak{S}[u,u^{-1}]\to\mathfrak{S}^{(2)}[\sigma,\sigma^{-1}] \tag{4.6}\] which is a \(\theta_{i}\)-linear map and sends \(u\) to \(a_{i}\cdot\sigma\) for some units \(a_{i}\in(\mathfrak{S}^{(2)})^{*}\). **Proposition 4.6**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. There is an isomorphism_ \[\pi_{j}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\theta_{i}}\mathfrak{S}^{(2)}\simeq\pi_{j}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}) \tag{4.7}\] _for any \(j\) and \(i=0,1\)._ Proof.: If \(i=0\), by [1, Proposition 2.2.7] the graded-ring morphism (4.6) is flat. We obtain the claim by Proposition 4.5. By the isomorphism (4.5), \(\theta_{1}\) is also flat.
Thus the morphism (4.6) is also flat for \(i=1\).

### The comparison theorem between \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) and \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\)

We regard \(\mathcal{O}_{\mathcal{C}}\) as an \(\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}]\)-algebra via the map \(z_{0}^{1/p^{n}}\mapsto\pi^{1/p^{n}},z_{1}^{1/p^{n}}\mapsto\zeta_{n}\pi^{1/p^{n}}\). We will prove the comparison theorem between \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) and \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\) for a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\). The following lemma is inspired by H. Gao [1]. **Lemma 4.7**.: _The natural map \(\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\) is an equivalence which is compatible with the \(S^{1}\)-action. In particular, the natural map_ \[\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\] _is an equivalence._ Proof.: There is a diagram \[\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\] \[\simeq\operatorname{THH}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\otimes_{\operatorname{THH}(\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})}\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}]_{p}^{\wedge}\] which is compatible with the \(S^{1}\)-action. For \(i=0,1\), we have an \(S^{1}\)-equivariant equivalence \(\operatorname{THH}(\mathbb{S}[z_{i}^{1/p^{\infty}}];\mathbb{Z}_{p})\simeq\mathbb{S}[z_{i}^{1/p^{\infty}}]_{p}^{\wedge}\) [1, Proposition 11.7], and we obtain the claim. **Lemma 4.8**.: _The natural map \(\mathbb{S}[z_{0},z_{1}]\to\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}]\) induces an isomorphism_ \[\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\simeq\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\otimes_{\mathfrak{S}^{(2)},\iota}A_{\inf}\] _for any \(i\)._ Proof.: The morphism \(e_{0}:\mathbb{S}[z]\to\mathbb{S}[z_{0},z_{1}]\) induces an isomorphism \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\theta_{0}}\mathfrak{S}^{(2)}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\), and the composite map \(\mathbb{S}[z]\overset{e_{0}}{\to}\mathbb{S}[z_{0},z_{1}]\to\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}]\) induces an isomorphism \[\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p}).\] Since there is an equation of morphisms \(\phi=\iota\circ\theta_{0}\), we obtain the claim.
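In summary, combining Proposition 4.6 with Lemma 4.8, for a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\), any integer \(j\), and \(i=0,1\), we have isomorphisms \[\pi_{j}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{S},\theta_{i}}\mathfrak{S}^{(2)}\simeq\pi_{j}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p}),\qquad\pi_{j}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\otimes_{\mathfrak{S}^{(2)},\iota}A_{\inf}\simeq\pi_{j}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p}),\] which will be used repeatedly in the next subsection.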
### \(\tau\)-action on \(\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) The construction in this section is inspired by H. Gao [1]. Let \(\tau\in\operatorname{Gal}(L/\bigcup_{n=1}^{\infty}K(\zeta_{n}))\) be a topological generator and \(\tilde{\tau}\in G_{K}\) be the lift of \(\tau\) satisfying \(\tilde{\tau}(\pi^{1/p^{n}})=\zeta_{n}\pi^{1/p^{n}}\). Consider a map \(\eta:\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}]\to\mathcal{O}_{\mathcal{C}}\) which sends \(z_{0}^{1/p^{n}}\) to \(\pi^{1/p^{n}}\) and \(z_{1}^{1/p^{n}}\) to \(\zeta_{n}\pi^{1/p^{n}}\). We will use the following notations associated with a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\). * \(e_{0}^{\infty}:\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S }[z^{1/p^{\infty}}];\mathbb{Z}_{p})\overset{\simeq}{\to}\operatorname{TP}( \mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/ p^{\infty}}];\mathbb{Z}_{p})\) the morphism given by \(z^{1/p^{\infty}}\mapsto z_{0}^{1/p^{\infty}}\) * \(a:\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{TP}( \mathcal{T}_{\mathcal{O}_{C}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\) the morphism given by the commutative diagram * \(b:\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{ TP}(\mathcal{T}_{\mathcal{O}_{C}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{ \infty}}];\mathbb{Z}_{p})\) the morphism given by commutative diagram * \(c:\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\to \operatorname{TP}(\mathcal{T}_{\mathcal{O}_{C}}/\mathbb{S}[z_{0}^{1/p^{\infty }},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\) the morphism given by commutative diagram * \(\tilde{\tau}_{1}:\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{C}}/\mathbb{S}[z ^{1/p^{\infty}}];\mathbb{Z}_{p})\to\operatorname{TP}(\mathcal{T}_{\mathcal{O} _{C}}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\) the morphism given by the commutative diagram (4.8) **Lemma 4.9**.: _The following diagram commutes_ Proof.: The diagram (4.8) is compatible with \(\mathbb{S}\)-ring spectrum structure, we obtain the claim. The following theorem, inspired by H. Gao [1], plays the most important role in the proof of Theorem 1.10. **Proposition 4.10**.: _The following diagram commutes_ (4.9) Proof.: The diagram (4.8) fits into the following commutative diagram: Thus we see that the left square of (4.9) commutes. Look at the following diagram: then we see that the right square of (4.9) commutes. Fix an integer \(i>>0\). By Proposition 2.6, \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) is a finitely generated \(\mathfrak{S}\)-module. Choose a \(\mathfrak{S}\)-generator \(\mathfrak{m}_{1},\mathfrak{m}_{2},....,\mathfrak{m}_{n}\) of \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\). We have an equation \(\overline{\phi}=\phi\circ\varphi:\mathfrak{S}\to A_{\inf}\) (see the diagram (2.2)), combine isomorphism (2.9) with Theorem 2.13 then we see that \(a\) induces an isomorphism \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{ \mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{ \mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\). 
For \(x\in\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\), we write \(\overline{x}\) for the image of \(x\) under the composite map \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\overset{a}{\to}\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/\mathbb{S}[z^{1/p^{\infty}}];\mathbb{Z}_{p})\overset{\simeq}{\leftarrow}\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\). In particular, \(\overline{\mathfrak{m}_{1}},\overline{\mathfrak{m}_{2}},...,\overline{\mathfrak{m}_{n}}\) form a set of \(A_{\inf}\)-generators of \(\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\). Let \(a_{x1},a_{x2},...,a_{xn}\) be elements of \(A_{\inf}\) so that \[\tilde{\tau}(\overline{x})=a_{x1}\overline{\mathfrak{m}_{1}}+a_{x2}\overline{\mathfrak{m}_{2}}+\cdots+a_{xn}\overline{\mathfrak{m}_{n}}.\] **Proposition 4.11**.: _For any \(x\) and any integer \(l\), \(a_{xl}\) is contained in \(\mathfrak{S}^{(2)}\) under the inclusion \(\mathfrak{S}^{(2)}\overset{\iota}{\hookrightarrow}A_{\inf}\)._ Proof.: Combining Proposition 4.10 with Lemma 4.9, we see that the following diagram commutes.
(4.10) We note that \(\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{ p})\to\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}/ \mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}];\mathbb{Z}_{p})\) is \(\pi_{0}\operatorname{TP}(\mathcal{O}_{\mathcal{C}};\mathbb{Z}_{p})\simeq A_{ \inf}\)-linear. The morphism \(c:\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\to \pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{ 1/p^{\infty}}];\mathbb{Z}_{p})\) is given by \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{ \mathfrak{S},\theta_{0}}\mathfrak{S}^{(2)}\overset{\iota\mathrm{d}\otimes \iota}{\to}\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p}) \otimes_{\mathfrak{S},\phi}A_{\inf}\) by Proposition 4.6 and Lemma 4.8. Besides, by Proposition 4.6, the morphism \(e_{0,\mathcal{T}}:\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z} _{p})\to\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\) is given by \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\pi_{i} \operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathfrak{ S},\theta_{0}}\mathfrak{S}^{(2)}\), and \(\{e_{0,\mathcal{T}}(\mathfrak{m}_{1}),e_{0,\mathcal{T}}(\mathfrak{m}_{2}),...,e _{0,\mathcal{T}}(\mathfrak{m}_{n})\}\) is a \(\mathfrak{S}^{(2)}\)-generator of \(\operatorname{TP}(\mathcal{T}/\mathbb{S}[z_{0},z_{1}];\mathbb{Z}_{p})\). Therefore, for \(x\in\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\), there are \(a_{x1}^{\prime},a_{x2}^{\prime},...,a_{xn}^{\prime}\in\mathfrak{S}^{(2)} \overset{\iota}{\hookrightarrow}A_{\inf}\) such that \[e_{1,\mathcal{T}}(x)=\Sigma_{l=1}^{n}a_{xl}^{\prime}e_{0,\mathcal{T}}( \mathfrak{m}_{l}).\] Look at the diagram, then we see an equation \(\tilde{\tau}_{1}(a(x))=\Sigma a_{xl}^{\prime}c(e_{0,\mathcal{T}}(\mathfrak{m} _{l}))\), and we obtain the claim. _Remark 4.12_.: We can apply the same procedure to \(\tilde{\tau}^{-1}\) and we obtain that there are elements \(b_{xj}\) of \(\mathfrak{S}^{(2)}\) so that \(\tilde{\tau}^{-1}(\overline{x})=b_{x1}\overline{\mathfrak{m}_{1}}+b_{x2} \overline{\mathfrak{m}_{2}}+\cdots+b_{xn}\overline{\mathfrak{m}_{n}}\), where instead of \(\eta\) we use a morphism \(\mathbb{S}[z_{0}^{1/p^{\infty}},z_{1}^{1/p^{\infty}}]\to\mathcal{O}_{\mathcal{C}}\) which sends \(z_{0}^{1/p^{n}}\) to \(\pi^{1/p^{n}}\) and \(z_{1}^{1/p^{n}}\) to \(\zeta_{n}^{-1}\pi^{1/p^{n}}\). Let \(\tilde{\tau}^{*}\) denote the dual action of \(\tilde{\tau}\) on \(\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{ p})^{\vee}:=\operatorname{Hom}_{A_{\inf}}(\pi_{i}\operatorname{TP}( \mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p}),A_{\inf})\). Choose a \(\mathfrak{S}\)-basis \(\mathfrak{n}_{1},\mathfrak{n}_{2},...\mathfrak{n}_{d}\) of \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\). Combine (2.9) with Theorem 2.13, there is an isomorphism \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{ \mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{ O}_{\mathcal{C}}};\mathbb{Z}_{p})\). 
For \(x\in\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\), let \(\overline{x}\) denote the image of \(x\) under the inclusion \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\hookrightarrow\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\), and let \(b_{xl}\) be elements of \(A_{\inf}\) so that \(\tilde{\tau}^{*}(\overline{x})=\Sigma_{l}b_{xl}\overline{\mathfrak{m}_{l}}\). Remark 4.12 directly implies the following. **Corollary 4.13**.: _For any \(x\) and \(l\), \(b_{xl}\) is contained in \(\mathfrak{S}^{(2)}\) under the inclusion \(\mathfrak{S}^{(2)}\stackrel{\iota}{\hookrightarrow}A_{\inf}\)._

### \(\tilde{\tau}\)-action and crystalline representations

Let \(\phi_{1}:\mathfrak{S}\to A_{\inf}\) be a \(W\)-linear map which sends \(z\) to \([\varepsilon][\pi^{\flat}]\). The following diagram commutes. Let \(E_{1}\) denote \(\theta_{1}(E)\), \(E_{0}\) denote \(\theta_{0}(E)\) and \(\xi_{1}\) denote \(\tilde{\tau}(\xi)\). We note that \(\mathfrak{S}^{(2)}\) is an integral domain (see [1, Proof of Lemma 2.3.2]), and \(\iota(E_{1})=\xi_{1}\). First, we prove the following lemma. **Lemma 4.14**.: _As sub-rings of \(A_{\inf}[\frac{1}{\xi_{1}}]\), there is an equality_ \[\mathfrak{S}^{(2)}=\mathfrak{S}^{(2)}[\frac{1}{E_{1}}]\cap A_{\inf},\] _where we regard \(\mathfrak{S}^{(2)}[\frac{1}{E_{1}}]\) as a sub-ring of \(A_{\inf}[\frac{1}{\xi_{1}}]\) via \(\iota[\frac{1}{E_{1}}]:\mathfrak{S}^{(2)}[\frac{1}{E_{1}}]\hookrightarrow A_{\inf}[\frac{1}{\xi_{1}}]\)._ Proof.: It suffices to show that \(\mathfrak{S}^{(2)}\) contains \(\mathfrak{S}^{(2)}[\frac{1}{E_{1}}]\cap A_{\inf}\). The map \(\iota\) induces a morphism of \(W\)-algebras \(\overline{\iota}:\mathfrak{S}^{(2)}/(E_{1})\to A_{\inf}/(\xi_{1})\) and \(\theta_{1}\) induces a morphism of \(W\)-algebras \(\overline{\theta_{1}}:\mathfrak{S}/(E)\to\mathfrak{S}^{(2)}/(E_{1})\). The isomorphism (4.5) induces an isomorphism of \(\mathfrak{S}\)-algebras \(\mathfrak{S}\otimes_{\mathfrak{S},\theta_{0}}\mathfrak{S}^{(2)}\simeq\mathfrak{S}\otimes_{\mathfrak{S},\theta_{1}}\mathfrak{S}^{(2)}\) which sends \(E_{0}\) to \(E_{1}\). By [1, Lemma 2.2.8 (2)], \(\theta_{0}\) induces an isomorphism \(\mathcal{O}_{K}\simeq\mathfrak{S}/(E)\simeq\mathfrak{S}^{(2)}/(E_{0})\simeq\mathfrak{S}^{(2)}/(E_{1})\) of \(W\)-algebras. Thus we obtain that \(\overline{\iota}\) is given by the natural \(W\)-algebra map \(\mathcal{O}_{K}\hookrightarrow\mathcal{O}_{\mathcal{C}}\). Assume that there exists an element \(x\) of \(\mathfrak{S}^{(2)}[\frac{1}{E_{1}}]\cap A_{\inf}\subset A_{\inf}[\frac{1}{\xi_{1}}]\) such that \(x\) is not contained in \(\mathfrak{S}^{(2)}\). Since \(x\in\mathfrak{S}^{(2)}[\frac{1}{E_{1}}]\), there is a natural number \(n\geq 1\) such that \(E_{1}^{n}\cdot x\) is in \(\mathfrak{S}^{(2)}\) and \(E_{1}^{n-1}\cdot x\) is not in \(\mathfrak{S}^{(2)}\). Since \(\mathfrak{S}^{(2)}\) is an integral domain, \(E_{1}^{n}\cdot x\not\in(E_{1})\subset\mathfrak{S}^{(2)}\). Thus the class \(\overline{E_{1}^{n}\cdot x}\) is not zero in \(\mathfrak{S}^{(2)}/(E_{1})\). On the other hand, since \(x\in A_{\inf}\), \(E_{1}^{n}\cdot x\in(\xi_{1})\subset A_{\inf}\). Thus the class of \(E_{1}^{n}\cdot x\) is zero in \(A_{\inf}/(\xi_{1})\).
This is contradictory to the fact that \(\overline{\iota}:\mathfrak{S}^{(2)}/(E_{1})\to A_{\inf}/(\xi_{1})\) is injective. Fix a smooth proper \(\mathcal{O}_{K}\)-linear category \(\mathcal{T}\) and \(i>>0\). The \(\mathfrak{S}\)-module \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) has a \((\varphi,\hat{G})\)-module structure via Theorem 3.6 and Remark 4.2. Let us recall how to construct an \(A_{\inf}\)-semi-linear \(G_{K}\) action on \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee} \otimes_{\mathfrak{S},\phi}A_{\inf}\). We showed there is an isomorphism \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee} \otimes_{\mathfrak{S},\phi}A_{\inf}\stackrel{{\eqref{eq:A_{\inf} }}}{{\simeq}}\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}} };\mathbb{Z}_{p})^{\vee}\). After the localization by \(E\), the \(\mathfrak{S}\)-linear morphism \(\operatorname{can}_{\mathcal{T}}:\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/ \mathbb{S}[z];\mathbb{Z}_{p})\to\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S }[z];\mathbb{Z}_{p})\) become a \(\mathfrak{S}[\frac{1}{E}]\)-linear isomorphism \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{ 1}{E}]\stackrel{{\eqref{eq:A_{\inf}}}}{{\simeq}}\pi_{i} \operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})[\frac{1}{E}]\) of \(\mathfrak{S}[\frac{1}{E}]\)-modules. Similarly, after the localization by \(\xi\), the \(G_{K}\)-equivariant \(A_{\inf}\)-linear morphism \(\operatorname{can}_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}:\pi_{i}\operatorname{ TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\to\pi_{i} \operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})\) become a \(A_{\inf}[\frac{1}{\xi}]\)-linear isomorphism \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z }_{p})[\frac{1}{\xi}]\stackrel{{(\ref{eq:A_inf})}}{{\simeq}}\pi_{i} \operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})[ \frac{1}{\xi}]\) of \(A_{\inf}[\frac{1}{\xi}]\)-modules. Choose a \(\mathfrak{S}\)-basis \(\mathfrak{n}_{1},\mathfrak{n}_{2},...,\mathfrak{n}_{d}\) of \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) and a \(\mathfrak{S}\)-basis \(\mathfrak{m}_{1},\mathfrak{m}_{2},...,\mathfrak{m}_{d}\) of \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\), where we use the facts that \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) and \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee} \stackrel{{(\ref{eq:A_inf})}}{{\simeq}}\pi_{i} \operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_ {\mathfrak{S},\varphi}\)\(\mathfrak{S}\) are finite free \(\mathfrak{S}\)-modules and have same rank. Under the \(\mathfrak{S}\)-linear map \(\operatorname{can}_{\mathcal{T}}^{\vee}:\pi_{i}\operatorname{TP}(\mathcal{T}/ \mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\to\pi_{i}\operatorname{TC}^{-}(\mathcal{ T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\), let \(B\) be the matrix with \(\mathfrak{S}\)-coefficients so that \((\mathfrak{m}_{1},\mathfrak{m}_{2},...,\mathfrak{m}_{d})=B(\mathfrak{n}_{1}, \mathfrak{n}_{2},...,\mathfrak{n}_{d})\). 
For \(x\in\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\), let \(\overline{x}\) denote the image of \(x\) under the inclusion \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\hookrightarrow\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\). Similarly, for \(y\in\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\), let \(\overline{y}\) denote the image of \(y\) under the inclusion \(\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\hookrightarrow\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\phi}A_{\inf}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\). By the functoriality of \(\operatorname{can}_{-}\), we have the following commutative diagram. Since the top and bottom morphisms of this diagram are \(\mathfrak{S}\)-linear, the equation \((\overline{\mathfrak{m}}_{1},\overline{\mathfrak{m}}_{2},...,\overline{\mathfrak{m}}_{d})=B(\overline{\mathfrak{n}}_{1},\overline{\mathfrak{n}}_{2},...,\overline{\mathfrak{n}}_{d})\) holds. **Proposition 4.15**.: _For any \(x\in\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\), there are \(a_{1},a_{2},...,a_{d}\in\mathfrak{S}^{(2)}\) so that_ \[\tilde{\tau}^{*}(\overline{x})=a_{1}\overline{\mathfrak{n}}_{1}+a_{2}\overline{\mathfrak{n}}_{2}+\cdots+a_{d}\overline{\mathfrak{n}}_{d}.\] Proof.: Since \(\overline{\mathfrak{n}}_{1},\overline{\mathfrak{n}}_{2},...,\overline{\mathfrak{n}}_{d}\) is an \(A_{\inf}\)-basis of \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}_{\mathcal{O}_{\mathcal{C}}};\mathbb{Z}_{p})^{\vee}\), there are \(a_{1},a_{2},...,a_{d}\in A_{\inf}\) so that \[\tilde{\tau}^{*}(\overline{x})=a_{1}\overline{\mathfrak{n}}_{1}+a_{2}\overline{\mathfrak{n}}_{2}+\cdots+a_{d}\overline{\mathfrak{n}}_{d}.\] Since the \(\mathfrak{S}\)-linear morphism \(\operatorname{can}_{\mathcal{T}}:\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\to\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})\) becomes a \(\mathfrak{S}[\frac{1}{E}]\)-linear isomorphism after the localization by \(E\), for \(x\in\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) there are \(y\in\pi_{i}\operatorname{TP}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) and an integer \(n\) so that \(E^{n}\cdot x=\operatorname{can}_{\mathcal{T}}^{\vee}(y)\) in \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\). Notice the following commutative diagram. We have \(\operatorname{can}_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}^{\vee}(\tilde{\tau}^{*}(\overline{y}))=\tilde{\tau}^{*}(\overline{E^{n}\cdot x})=\tilde{\tau}^{*}(E_{0}^{n}\cdot\overline{x})=E_{1}^{n}\cdot\tilde{\tau}^{*}(\overline{x})\). By Corollary 4.13, there are \(b_{1},b_{2},...,b_{d}\in\mathfrak{S}^{(2)}\) so that \(\tilde{\tau}^{*}(\overline{y})=b_{1}\overline{\mathfrak{m}}_{1}+b_{2}\overline{\mathfrak{m}}_{2}+\cdots+b_{d}\overline{\mathfrak{m}}_{d}\). Since the \(A_{\inf}\)-linear map \(\operatorname{can}_{\mathcal{T}_{\mathcal{O}_{\mathcal{C}}}}^{\vee}\) is represented by the matrix \(B\), we have an equation \((E_{1}^{n}a_{1},E_{1}^{n}a_{2},...,E_{1}^{n}a_{d})=B(b_{1},b_{2},...,b_{d})\). Thus we know that \(E_{1}^{n}a_{i}\) is in \(\mathfrak{S}^{(2)}\).
By Lemma 4.14, we obtain the claim. **Corollary 4.16**.: _The \(p\)-adic representation \(T_{A_{\mathrm{inf}}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\) is a \(\mathbb{Z}_{p}\)-lattice of a crystalline representation._ Proof.: By [1, Corollary 3.3.4], it suffices to show that \(\tilde{\tau}^{*}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\) is contained in \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\otimes_{\mathfrak{S},\theta_{0}}\mathfrak{S}^{(2)}\). By Proposition 4.15, we obtain the claim.

### The proof of the Main Theorems

The following theorem summarizes Section 3 and Section 4. **Theorem 4.17**.: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. Then there is a natural number \(n\) such that the following hold for any \(i\geq n\):_ 1. _The Breuil-Kisin module \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) has a Breuil-Kisin \(G_{K}\)-module structure in the sense of [6]._ 2. _The Breuil-Kisin module \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee}\) has a \((\varphi,\hat{G})\)-module structure in the sense of [1]._ 3. _The \(\mathbb{Z}_{p}[G_{K}]\)-module \(T_{A_{\mathrm{inf}}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\) is a \(\mathbb{Z}_{p}\)-lattice of a crystalline representation._ 4. _If \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization, then there is a \(G_{K}\)-equivariant isomorphism_ \[T_{A_{\mathrm{inf}}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee})\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee}\] _of \(\mathbb{Z}_{p}\)-modules._ Proof.: (1) follows from Theorem 3.6. (2) follows from Remark 4.2. (3) follows from Corollary 4.16. (4) follows from Theorem 3.7. **Theorem 4.18** (Main Theorem).: _Let \(\mathcal{T}\) be a smooth proper \(\mathcal{O}_{K}\)-linear category. If \(\mathcal{T}_{\mathcal{C}}\) admits a geometric realization, then Conjecture 1.2 holds for \(\mathcal{T}\), i.e., there is an isomorphism of \(B_{\mathrm{crys}}\)-modules_ \[\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\otimes_{W}B_{\mathrm{crys}}\simeq\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}B_{\mathrm{crys}}\] _which is compatible with the \(G_{K}\)-action and the Frobenius endomorphism._ Proof.: First, we prove the claim when \(i\) is large enough. By Remark 3.8, we have a \(G_{K}\)-equivariant isomorphism \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee\vee}\simeq T_{A_{\mathrm{inf}}}(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee\vee}),\] and thus we have an isomorphism \[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee\vee}\otimes_{\mathbb{Z}_{p}}B_{\mathrm{crys}}\simeq\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee\vee}\otimes_{\mathfrak{S}}B_{\mathrm{crys}}\] which is \(G_{K}\)-equivariant and compatible with the Frobenius endomorphism, and there is an identification of rational Dieudonné modules \(D_{\mathrm{crys}}(\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})^{\vee\vee}\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p})=\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee\vee}\otimes_{\mathfrak{S},\tilde{\phi}}W[\frac{1}{p}]\) (see [1, Remark 4.5]).
By Theorem 2.14, we have an isomorphism \(\pi_{i}\operatorname{TC}^{-}(\mathcal{T}/\mathbb{S}[z];\mathbb{Z}_{p})^{\vee\vee}\otimes_{\mathfrak{S},\tilde{\phi}}W[\frac{1}{p}]\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\). Thus we obtain the claim. Now we prove the general case. For any \(i\in\mathbb{Z}\), since \(L_{K(1)}K(\mathcal{T})\) is \(2\)-periodic, there is an isomorphism \[\pi_{i}L_{K(1)}K(\mathcal{T})\otimes_{\pi_{0}L_{K(1)}K(\mathcal{C})}\pi_{2}L_{K(1)}K(\mathcal{C})\simeq\pi_{i+2}L_{K(1)}K(\mathcal{T}).\] Since we also have an isomorphism \(\pi_{2}L_{K(1)}K(\mathcal{C})\simeq\mathbb{Z}_{p}(1)\) (see [10]), we obtain an isomorphism \[\pi_{i}L_{K(1)}K(\mathcal{T})(1)\simeq\pi_{i+2}L_{K(1)}K(\mathcal{T}).\] Similarly, there is an isomorphism \[\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}]\otimes_{\pi_{0}\operatorname{TP}(k;\mathbb{Z}_{p})[\frac{1}{p}]}\pi_{2}\operatorname{TP}(k;\mathbb{Z}_{p})[\frac{1}{p}]\simeq\pi_{i+2}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}].\] Since we also have an isomorphism \(\pi_{2}\operatorname{TP}(k;\mathbb{Z}_{p})[\frac{1}{p}]\simeq W[\frac{1}{p}](1)\) (see [1]), where the twist refers to twisting the Frobenius, we obtain an isomorphism \[\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}](1)\simeq\pi_{i+2}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\frac{1}{p}].\] Since we have already proved the claim when \(i\) is large enough, we obtain the claim.
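To make the reduction to large \(i\) explicit: for any \(i\), choose \(m\geq 0\) with \(i+2m\) large enough. Twisting the already-established comparison in degree \(i+2m\) by \((-m)\) and applying the two periodicity isomorphisms above on either side gives
\[\pi_{i}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})\otimes_{\mathbb{Z}_{p}}B_{\mathrm{crys}}\simeq\pi_{i+2m}L_{K(1)}K(\mathcal{T}_{\mathcal{C}})(-m)\otimes_{\mathbb{Z}_{p}}B_{\mathrm{crys}}\simeq\pi_{i+2m}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})[\tfrac{1}{p}](-m)\otimes_{W}B_{\mathrm{crys}}\simeq\pi_{i}\operatorname{TP}(\mathcal{T}_{k};\mathbb{Z}_{p})\otimes_{W}B_{\mathrm{crys}},\]
each step being compatible with the \(G_{K}\)-action and the Frobenius endomorphism because the same twist is applied to both sides of the already-established isomorphism.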
2309.05036
What Is Near?: Room Locality Learning for Enhanced Robot Vision-Language-Navigation in Indoor Living Environments
Humans use their knowledge of common house layouts obtained from previous experiences to predict nearby rooms while navigating in new environments. This greatly helps them navigate previously unseen environments and locate their target room. To provide layout prior knowledge to navigational agents based on common human living spaces, we propose WIN (\textit{W}hat \textit{I}s \textit{N}ear), a commonsense learning model for Vision Language Navigation (VLN) tasks. VLN requires an agent to traverse indoor environments based on descriptive navigational instructions. Unlike existing layout learning works, WIN predicts the local neighborhood map based on prior knowledge of living spaces and current observation, operating on an imagined global map of the entire environment. The model infers neighborhood regions based on visual cues of current observations, navigational history, and layout common sense. We show that local-global planning based on locality knowledge and predicting the indoor layout allows the agent to efficiently select the appropriate action. Specifically, we devised a cross-modal transformer that utilizes this locality prior for decision-making in addition to visual inputs and instructions. Experimental results show that locality learning using WIN provides better generalizability compared to classical VLN agents in unseen environments. Our model performs favorably on standard VLN metrics, with Success Rate 68\% and Success weighted by Path Length 63\% in unseen environments.
Muraleekrishna Gopinathan, Jumana Abu-Khalaf, David Suter, Sidike Paheding, Nathir A. Rawashdeh
2023-09-10T14:15:01Z
http://arxiv.org/abs/2309.05036v1
What Is Near?: Room Locality Learning for Enhanced Robot Vision-Language-Navigation in Indoor Living Environments ###### Abstract Humans use their knowledge of common house layouts obtained from previous experiences to predict nearby rooms while navigating in new environments. This greatly helps them navigate previously unseen environments and locate their target room. To provide layout prior knowledge to navigational agents based on common human living spaces, we propose WIN (What Is Near), a commonsense learning model for Vision Language Navigation (VLN) tasks. VLN requires an agent to traverse indoor environments based on descriptive navigational instructions. Unlike existing layout learning works, WIN predicts the local neighborhood map based on prior knowledge of living spaces and current observation, operating on an imagined global map of the entire environment. The model infers neighborhood regions based on visual cues of current observations, navigational history, and layout common sense. We show that local-global planning based on locality knowledge and predicting the indoor layout allows the agent to efficiently select the appropriate action. Specifically, we devised a cross-modal transformer that utilizes this locality prior for decision-making in addition to visual inputs and instructions. Experimental results show that locality learning using WIN provides better generalizability compared to classical VLN agents in unseen environments. Our model performs favorably on standard VLN metrics, with Success Rate 68% and Success weighted by Path Length 63% in unseen environments. _Index Terms:_ Embodied Agents, Mapping, Vision-Language Navigation ## I Introduction Vision-Language Navigation (VLN) requires an agent to traverse through indoor environments based on descriptive navigational instructions. This task has garnered significant interest from both the computer vision and natural language processing (NLP) research communities, due to its practical applications in domestic settings. A VLN agent must learn to align visual inputs and instructions to execute a series of actions in order to reach a target location or object [1]. This is different from classical goal navigation problems, as task success is measured by how well agent's trajectory conforms to the instruction. This task is inherently complex, as both natural language and visual understanding are challenging on their own. To perform this task, the agent must have a Natural Language (NL) model to _understand_ instructions, a vision model to extract visual features, and a model to learn the visual-language correspondence in order to determine the best action at each step. The agent must also keep track of its navigational history to estimate its progress. Existing VLN agents utilize SoTA vision and language models to encode visual cues and language instructions, and perform cross-modal encoding to learn their relationship. However, due to the variability in the appearance of home environments, these models perform poorly in previously unseen environments compared to seen ones. Previous works have attempted to improve performance through improving vision-language cross-modal learning [2, 3], training data augmentation [4, 5, 6], and applying mixture of training methods [7, 8], but there remains a significant gap in navigational success between agents in seen and unseen environments. These methods overlook the inherent patterns in human living spaces, such as the proximity of bathrooms to bedrooms. 
An agent can more effectively navigate an environment if it knows the relative locations of rooms, or the room-to-room relationships. To address this, we propose a new model that learns these layout patterns during training and predicts nearby room categories during validation. These predictions, along with the agent's understanding of the instructions, are used to select the next action. Some recent works have used ego-centric maps for short-distance navigation by predicting a semantic map of visible regions [9][10]. However, these methods tend to have redundant information in their egocentric occupancy maps and encapsulate limited information about the surrounding area. This is evident from their limited navigational success even when ground truth maps are provided. Additionally, these methods have not been tested against long-horizon tasks, such as the Room-to-Room (R2R) [1] and REVERIE [11] tasks, which involve complex instructions and span multiple locations.

Fig. 1: Using ego-centric layout knowledge for navigational reasoning. The agent grounds the token _bathroom_ to the current location, predicts neighboring scenes, and decides the best action based on the rest of the instruction.

To address these limitations, we propose the **WIN**: 'What **Is** Near' approach for vision-language navigation. The WIN model is trained on neighborhood adjacency data extracted from large real-world house environments, such as Habitat 3D [12]. Given a panoramic image of a room, the WIN agent predicts the categories (classes or _types_) and relative locations of surrounding regions based on the visual cues from the current view (Fig. 1). The navigational agent then uses this local knowledge and language understanding to select the next action from the available directions in the view. We hypothesize that incorporating layout information will significantly reduce the number of probable action decisions obtained using only vision and language modalities. The compact representation of the local neighborhood used by the model also simplifies training and validation. We evaluate the performance of our WIN model on the Room-to-Room (R2R) [1] and REVERIE [11] datasets. Experimental results show that learning a layout prior using WIN improves the generalization ability of baseline agents in unseen environments. The model performs favorably on standard VLN metrics: Success Rate (SR), which measures how well the agent reaches the target location, is 68%, and Success weighted by Path Length (SPL), which measures how well the agent reaches the target using shortest paths, is 63% on unseen environments. We successfully demonstrate that layout prior knowledge can reduce the performance margin between seen and unseen environments. Our contributions in this paper are as follows:

* We propose a novel next-action-reasoning model for VLN based on _locality_ prior knowledge, conditioned on the current view and navigational history (Sec. IV).
* We develop a mechanism to transform agent-centric locality knowledge to a global map for trajectory prediction (Sec. III-F).
* We demonstrate that locality knowledge improves the performance of simple and classical VLN agents, without adding significant complexity (Sec. V-C).

## II Related Work

### _Data Augmentation_

Recent studies in VLN have developed additional training data to improve generalization. Early attempts synthesized additional training data by back-translating instructions from trajectories [13, 14], by mixing parts of existing paths [5, 15], and by generating images and instructions from the web [16].
Generating visual variations in the environment were studied in [4, 6] to improve the navigational performance in unseen environments. A recent study [17], automatically generated navigational graphs and instructions from un-annotated large scale Habitat-Matterport 3D (HM3D) [18] dataset aiming to provide more training examples to VLN agents. This dataset has multiple viewpoints in the same room and focuses on navigability among them. In our work, we develop our _Locality Knowledge_ from un-annotated HM3D dataset but extract inter room-visibility and relative room locations. Unlike [17], we select one viewpoint per room for room type inference and cleverly deduce visibility (occlusion) and locality (shares a wall) among rooms. ### _Environment Map Learning_ Learning to perform navigation in indoor spaces requires the agent to build and update a locality map representation of the environment. Generalization of navigational experiences to complex and unseen environments motivated recent works in end-to-end learning-based approaches [9, 19, 20]. Recently, a transformer-based topological planner [21] that uses graph neural network features to encapsulate the relationship between location connectivity and NL instructions was introduced. The limitation of this method is that the agent has to pre-explore the environment and build a map before attempting the VLN task. Semantic mapping for visual, vision-and-language, and point goal navigation tasks has been studied for continuous environments (VLN-CE). Existing approaches in the Point-Nav task [12], which is a short-horizon task, use Knowledge Graphs to represent room-object relationships in an environment during training and match the learned relationships to the test environments during inference [22]. For language-based navigational tasks, semantic map generation and environment layout learning are still unexplored. Our work is analogous to [19] in that they predict object locations and direction in an ego-centric map. Similar studies failed to generalize well in unseen environments as they map the visible regions only and not the occluded neighborhoods [23], have limited region classes [24] and suffer from room-object contextual-bias in the object-centric methods [24] (i.e. Tables can be in different rooms). Another work [9] that uses similar indoor environment priors, which is applied in the PointNav problem cannot be re-purposed for the VLN task because of its limited region categories and difference in task complexity. As the VLN task has a long-horizon trajectory, instruction, and intermediate goals, it is pertinent that the neighborhood map contains a larger number of room categories, conforms to the instruction, and accentuates the action decision at each navigational step. In this work, we propose to learn room connectivity information (including both visible regions and regions that are occluded but adjacent to the current room) directly from the environment during training and apply this knowledge to improve action decisions. ## III Method In this section, we introduce the Vision-language Navigation problem, our methods and delineate on building the locality knowledge dataset. ### _Problem Setup_ In VLN, an agent is placed in a discrete indoor environment in which each location represents a node of a predefined connectivity graph. 
Given a natural language instruction describing a trajectory from the start location to a target location, an agent at every time step perceives a panoramic RGB-D view of its surrounding and chooses the next viewpoint through an action direction from candidate viewpoints (navigable or unobstructed directions in a panoramic viewpoint). The agent executes a sequence of actions to complete the instruction until it decides to stop, ideally at the goal location, completing the episode. ### _Overview of the WIN Model_ Learning room layout patterns in navigational spaces can support robots to efficiently reach goal locations. During the training phase, the agent learns to extract connectivity information between rooms types (i.e. _bathroom, bedroom, toilet_) and their relative orientations from _Locality Knowledge_ (Sec. III-D) w.r.t to the agent heading. This knowledge encodes different room-to-room connections (such as through doors, hallways or walls), room types, their relative locations, and distances to room centres. The agent learns visual-language correspondence along with topological relations that exist in the environment at each step of the navigational episode. Later, in the navigation phase, the agent uses the knowledge associated with each viewpoint to evaluate the action choices. Hence, the final action decision will be based on the agent's instruction-understanding, visual grounding of the current observation, and locality knowledge. The WIN model is detailed in Sec. IV. We adopt a modular approach to provide the locality knowledge to the VLN agent. Specifically, we build a simple WIN model such that it can be added to existing VLN models to improve navigational success. ### _Model Inputs_ #### Iii-C1 Language Encoding An instruction of \(n\) words \(X=\langle x_{1},x_{2},...,x_{n}\rangle\) is given to the agent at the start of a navigational episode. This instruction is tokenized and applied to a language encoder to obtain a language feature. #### Iii-C2 Vision Embedding A single trajectory consists of a sequence of \(K\) panoramic views (steps) \(V=\langle V_{1},V_{2},...,V_{K}\rangle\), each comprising of 36 single views in 3 camera elevations (up, horizon and down). At each time step \(t\), we extract RGB-D visual feature \(I_{i}\) and depth feature \(D_{t}\) as visual context. Also, we add the relative heading \(\theta\) and elevation \(\phi\) angles of each view with respect to the agent's current orientation to retain view directions \(R_{t}=[\cos\theta,\sin\theta,\cos\phi,\sin\phi]\). ### _Locality Knowledge_ The core of locality prediction includes the representation of the local neighbourhood, predicting the ego-centric locality map based on common patterns and improving the action probability using the predicted map. To predict room layouts, we trained a our _Locality Predictor_ using HM3D dataset which has 900 houses with room panoramas, camera poses, and floor plans. The HM3D dataset has realistic 3D indoor scenes but lacks the navigability graphs, or region labels or room boundaries. Hence, to build the _Locality Knowledge_ base, we need 1) one summary viewpoint per room, 2) its type and 3) geodesic distance and orientation between neighboring rooms. The summary viewpoints are collected by sampling equidistant viewpoints from navigable regions of the scene and eliminating points based on following conditions. 
To obtain a single viewpoint per room: (1) candidate viewpoints should be at least 2 meters apart from each other; (2) the panoramic images from viewpoints cannot have significant matching ORB descriptors [25], to eliminate candidates from the same room; (3) farther viewpoints with matching descriptors are considered navigable from each other; and (4) close viewpoints with few to no matches are considered neighboring but occluded from each other. A ResNet [26] model trained on the MP3D [27] dataset was used to obtain the room types. The best thresholds for ORB descriptor matching are selected for the maximum coverage of the scene. The distances are based on average room sizes and room-to-room distances in the HM3D dataset. The room adjacency matrix (Fig. 2) shows common room neighborhoods, connectivity and adjacency. The resulting _Locality Map_ \(M^{GT}\) is a metric-semantic map of the region surrounding the agent. That is, the area around the agent is divided into a fixed-size \(g\times g\) grid with cell side \(s\) to represent the region of interest. Each cell encodes the location, orientation and type of the rooms with respect to the heading of the agent.

Fig. 2: Room adjacency matrix of the Habitat-Matterport 3D (HM3D) dataset. Each cell represents connectivity (share a wall), navigability (direct access) or visibility (line-of-sight) between room types. Brighter colours show large co-occurrence of room types in their neighbourhood.

### _Locality Predictor_

The locality predictor (Fig. 3) uses horizon visual features and predicts the room class (type) of the grid area surrounding the agent. This module uses the agent's panoramic observation to produce a probability distribution of region categories for each cell in the locality map. The predictor contains two functions, namely (1) an egocentric mapper and (2) a neighborhood predictor. The former is an affine transformation and inverse projective mapping of the semantic visual features to the ground plane to obtain the current map using the camera parameters and visual inputs. To suppress feature collision during ground projection, we follow MapNet [28] and take the maximum of values height-wise. Evidently, this map only includes the regions that are visible to the agent. The neighborhood predictor network is trained to extend this projected map to invisible regions using supervision. The _Map Decoder_ concatenates the current \(M_{t}\) and previous \(M_{t-1}\) maps using a trainable network with parameter \(W_{M}\) to obtain the map feature \(m_{t}\), \[m_{t}=[M_{t-1};M_{t}]W_{M} \tag{1}\] and applies \(m_{t}\) to an LSTM to track the map evolution due to the previous agent action in the hidden feature \(h_{t}\): \[h_{t}=LSTM([m_{t};\hat{a}_{t-1}],h_{t-1}) \tag{2}\] where \(\hat{a}\) is the action embedding. The updated map is the probability distribution of room types for each direction, \[p_{M,t}=softmax(f_{V}W_{M}h_{t}) \tag{3}\]
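A minimal PyTorch sketch of the map-decoder recurrence in Eqs. (1)–(3) is given below. The class name, the flattening of the \(g\times g\) grid into a vector, the number of room categories, and all dimensions are illustrative assumptions rather than the exact implementation; Eq. (3) additionally multiplies by the visual feature \(f_{V}\), which is folded into the output head here for brevity.

```python
import torch
import torch.nn as nn

class LocalityMapDecoder(nn.Module):
    """Sketch of Eqs. (1)-(3): fuse consecutive ego-centric maps, track them with an
    LSTM conditioned on the previous action, and emit a room-type distribution per cell."""

    def __init__(self, grid=5, num_rooms=25, feat_dim=512, act_dim=128, hidden=512):
        super().__init__()
        map_dim = grid * grid * num_rooms                    # flattened g x g map of room scores
        self.fuse = nn.Linear(2 * map_dim, feat_dim)         # W_M in Eq. (1)
        self.rnn = nn.LSTMCell(feat_dim + act_dim, hidden)   # LSTM in Eq. (2)
        self.head = nn.Linear(hidden, map_dim)               # stands in for f_V W_M h_t in Eq. (3)
        self.grid, self.num_rooms = grid, num_rooms

    def forward(self, prev_map, cur_map, prev_action, state):
        # Eq. (1): m_t = [M_{t-1}; M_t] W_M
        m_t = self.fuse(torch.cat([prev_map, cur_map], dim=-1))
        # Eq. (2): h_t = LSTM([m_t; a_{t-1}], h_{t-1})
        h_t, c_t = self.rnn(torch.cat([m_t, prev_action], dim=-1), state)
        # Eq. (3): per-cell softmax over room categories
        logits = self.head(h_t).view(-1, self.grid * self.grid, self.num_rooms)
        return torch.softmax(logits, dim=-1), (h_t, c_t)

# Example usage with a zero-initialised recurrent state (batch of 2).
decoder = LocalityMapDecoder()
B, D = 2, 5 * 5 * 25
state = (torch.zeros(B, 512), torch.zeros(B, 512))
p_map, state = decoder(torch.zeros(B, D), torch.zeros(B, D), torch.zeros(B, 128), state)
```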
These target tokens are applied to the structured transformer to generate updated target tokens which is recursively applied to the transformer in the next time step. ## IV What Is Near (**WIN**) Model The What Is Near (WIN) model (Fig. 5) is composed of a cross-modal transformer that is adapted from [30]. At each time step \(t\), the model takes in inputs from previous state \(s_{t-1}\), language encoding \(X\), vision embedding \(V\), history tokens \(h_{t-1}\), and a neighborhood encoding \(N_{t}\) which captures the agent's knowledge of the current scene. The language encoding is time-independent, while the state and history tokens come from previous time steps, and the visual and neighborhood tokens are obtained for the current scene. The WIN model processes the entire language instruction and performs self-attention with the help of a BERT language encoder, generating an initial state token \(s_{0}\) and language embedding. A vision encoder \(f_{V}\) is used to encode the panoramic scene and produce visual features. The cross-modal transformer then performs cross-attention on the language, observation, history, and target tokens to learn their correspondence, and the [CLS] embedding of the transformer is used to predict the action. The WIN model leverages the panoramic image visual features to predict the local neighborhood, locations, and action probabilities by considering prior knowledge and visual cues from the scene. The action predictor operates based on the visual-language-locality correspondence learned by the cross-modal transformer and produces a probability distribution over each candidate direction. We extend the Structure Transformer Predictor (STP) mechanism from [29] to model the overall space of the environment. We adapt this model for our task as a region category prediction task. The history token of the STP is composed of, \[H_{t}=f_{V}(I_{t})+f_{R}(R_{T})+f_{T}(t)+f_{P}(I_{t}) \tag{4}\] where \(f_{R},f_{T},f_{P}\) are trainable encoders for motion direction, navigation step, and agent global position respectively. \(f_{V}\) is the encoder for panoramic view \(I_{t}\). To provide the locality map to the transformer, we need to extent the local map to the global coordinate system. For this we transform the locality prediction from agent-centric coordinate system to global map using rigid-body transformation. This transformation also maps locality map to the global grid. We reuse IST of the STP model and generate target tokens \(c_{i}\) for each global grid cell such that, \[c_{i}=f_{P}(I_{t})*x_{0}*f_{M}(I_{t},D_{t},M_{t-1}) \tag{5}\] where \(f_{P}(I_{t})\) is the same position encoder as in (4), \(x_{0}\) is the instruction embedding, and \(f_{M}\) is the local-global Fig. 4: Neighbourhood encoding utilizes view embedding, orientation embedding, room category feature and position Fig. 3: Locality predictor produce an ego-centric locality map \(M_{t}\) based on RGB-D input and past action \(a_{t}-1\). The visual feature \(f_{V}(I_{t})\) is projected to map feature space using \(D_{t}\) and orientation \(R\). The locality Map decoder uses LSTM to integrate robot motion and predicts new locality maps based on the current \(M_{t}\) and previous \(M_{t-1}\) ego-centric maps. transformation. _Locality predictor_\(f_{M}\) has recurrent memory model based on LSTM trained on locality information. For details of STP, we encourage readers to refer to [29]. 
Finally, to produce action probabilities based on the learned locality knowledge, we formulate action prediction as a classification problem. An MLP is applied to the vision-language representation to predict an action probability distribution over navigable viewpoints as in (6). \[p_{t}(I_{i}^{p})=\frac{\exp{f_{A}(I_{i}^{p}\odot^{VL}c_{t})}}{\sum_{j}\exp{f_{A} (I_{j}^{p}\odot^{VL}c_{t})}} \tag{6}\] where \(\odot\) represents element-wise multiplication and \({}^{VL}c_{t}\) is the vision-language fused representation. As in existing works that use pretrained vision-language models, we use the embedded [CLS] token that is a fused representation of vision-language modalities [31] as the state representation. In all, the complete model is (7), \[s_{t},p_{t}^{a}=\text{WIN}(h_{t-1},X,I_{t}^{c},P_{t},c_{t}) \tag{7}\] where \(s_{t}\) is the state vector and \(p_{t}^{a}\) is the action probability. ### _Training_ In this section, we describe the training procedure for the WIN model. Our model is trained in 2 parts: Locality Predictor module training and end-to-end training for VLN. #### Iii-A1 Locality Predictor The Locality prediction \(f_{T}\) module is trained using the Locality Knowledge (see Sec. III-D. The model is trained by providing the panoramic observation at different agent orientations for each scene and comparing the prediction with the ground truth. The objective is to minimize (8), \[\mathcal{L}_{locality}=\sum_{k\in\mathbb{R}}CrossEntropyLoss(M_{t,k}^{pred},M_ {t,k}^{GT}) \tag{8}\] #### Iii-A2 Action Prediction We adopt a combination of Reinforcement Learning (RL) and Imitation Learning (IL) for training our agent. Imitation Learning is applied to train the agent while providing the ground truth action or _teacher action_ at each time step, and minimizing the cross-entropy loss defined by (8). For RL, we use Advantage Actor Critic (A2C) [32] to learn actions that maximize rewards from reducing the distance to the goal location at each time step and arriving within 3m of the target, at the end of a navigational episode. During navigational training, the target encoding \(N_{t}\) from the frozen _Locality Predictor_ is used as an input to the cross-modal transformer along with history \(H_{t}\) and vision-language inputs. Following [29] we include _history teacher loss_ to accommodate the change in action space with visited locations. The final loss aims to minimize the negative likelihood of the target view \(I_{s,j}:\mathcal{L}_{A}=-\log{p_{t}(I_{i}^{p})}\) and the _history teacher loss_. Formally we minimize for all steps \(T\), \[\mathcal{L}_{IL}=-\sum_{t=1}^{T}\pi\log(a_{G}^{t};\Theta)-\sum_{t=1}^{T}\log{ p_{t}(I_{i}^{p})} \tag{9}\] where \(a_{G}\) is the global action towards the goal, \(\pi\) is the navigational policy parameterized by \(\Theta\). Another MLP is used to decode the global target from the semantic target tokens for the global map prediction. The agent samples action \(a_{t}^{*}\) from action probability \(p_{a_{t}}^{*}\) from the **WIN** model. We found that a combination of IL and RL balances exploration-exploitation strategies effectively: defined as, \[\mathcal{L}_{RL+IL}=-\sum_{t=1}^{T}a_{t}^{*}\log(p_{a_{t}}^{*})A_{t}+\lambda_ {IL}\mathcal{L}_{IL} \tag{10}\] where \(\lambda_{IL}\) is the IL training coefficient and \(A_{t}\) is the advantage calculated by the A2C algorithm [32]. Fig. 5: The proposed What is Near (WIN) model (left) and representation of egocentric locality map (right). 
As illustrated in Fig. 5, the cross-modal structured transformer accepts language, vision, history and target tokens to predict action probabilities and a semantic map of the global space. A hidden state vector \(s\) encodes the state and history of the navigation episode, while the location encoder combines the global target tokens with the locality embedding. This locality-map (ego-centric) to global-map transformation provides the agent with long-term planning capability.

## V Experiments

In this section, we elaborate on our research questions, experiments and our baseline agents. From our experiments we aim to understand the following: _How does locality knowledge affect VLN agent performance?_ Neighborhood knowledge learned by the agent must include both the metric and semantic layout of the locality. On one end, an agent may use information such as whether a direction leads outdoors or indoors; at the other end, it may utilize the complete room type information. To measure the expressiveness of the locality map, we test the agent on different map types, including one with random map prediction and one with the ground-truth map provided. _How does the performance of our WIN model compare to existing work in VLN?_ As existing works ignore regions beyond the view of the agent, the current SoTA agents can benefit from locality knowledge for next-action prediction. We compare our model performance against SoTA VLN agents.

### _Baseline Agents_

We select two robust but computationally simple agents as our baselines to show how locality knowledge can affect their environment awareness and, eventually, their navigational success.

#### V-A1 VLN\(\circ\)BERT

For the first baseline, we use a simple Recurrent VLN-BERT (VLN\(\circ\)BERT) [31] agent with a basic history representation. VLN\(\circ\)BERT uses the [CLS] token of the transformer in a recurrent fashion as navigational history. Our model incorporates locality knowledge in this baseline by multiplying the global map prediction with the action probabilities.

#### V-A2 TD-STP

TD-STP [29] is a method for enabling action reasoning by providing encoded global grid positions to a pretrained cross-modal encoder and predicting a _target_ cell based on the given instruction. For this, TD-STP initially imagines a discrete global grid over the entire floor area, and updates the target location at each time step. We extend the Structured Transformer Planner (STP) mechanism in our WIN model for global grid semantic mapping.

### _Setup_

#### V-B1 Dataset

We evaluate WIN using validation splits of the Room-to-Room (R2R) [1] and REVERIE [11] datasets. The R2R dataset consists of 7k trajectories from 90 houses split into _Train_ (61 houses), _ValSeen_ (houses from train seen), _ValUnseen_ (11 houses not included in the train seen split) and _TestUnseen_ (18 houses not part of other splits). Each trajectory has 3 fine-grained English instructions. The test unseen split trajectories are submitted to an online system for evaluation1. The online server reports all metrics used for our evaluation. The REVERIE dataset contains high-level instructions and uses the same split for training and evaluation2. Footnote 2: REVERIE leader board: [https://eval.ai/web/challenges/challenge-page/606/leaderboard/1683](https://eval.ai/web/challenges/challenge-page/606/leaderboard/1683)

#### V-B2 Evaluation Metrics

We use the standard metrics for evaluating the agent's performance on the R2R and REVERIE datasets.
In R2R, the standard metrics such as Trajectory Length (TL), Navigation Error (NE), Oracle Success Rate (OSR), Success Rate (SR) and Success Rate weighted by Path Length (SPL) are reported [1, 35]. In addition to these, REVERIE [11] also evaluates Remote Grounding Success (RGS) to measure the success rate of locating the remote object and Remote Grounding Success weighted by Path Length (RGSPL) which rewards shorter path lengths. #### V-B3 Implementation details The model is built on PyTorch and experiments are performed on an NVIDIA A6000 GPU. Our model is trained for 100k iterations with early stopping applied at the highest SPL to prevent over-fitting. The final results are reported for grid size \(g\) 10 and cell size \(s\) 0.5m. The batch size is set to 8 and the learning rate is \(1e-5\). The dropout is set to 0.5 and AdamW optimizer is used for training. We develop two baselines using publicly available source codes and hyper-parameters are set as per the original models. ### _Results_ #### V-C1 Results on the R2R dataset Our WIN+STP agent improved performance over the TD-STP baseline by a large margin (Table I). The TD-STP baseline uses visual features and instructions for predicting the global action space which is essentially an occupancy tracking. Instead, the WIN model predicts the room layout which is a useful for local action selection and reducing the overall path length. This shows WIN has comparatively better local action selection due to the layout understanding. The overall reduction in the navigation error (3.73m \(\rightarrow\) 3.61m) compared to the baselines suggests that the agent is being directed to takes better actions based on the locality knowledge. #### V-C2 Results on the REVERIE dataset WIN also shows better SR in _TestUnseen_ split (Table II) compared to the baselines. The agent could utilize descriptive instructions and select correct navigational actions at each step leveraging the locality knowledge. This improved instruction and layout understanding, lead to a higher navigational success rate (35.89%\(\rightarrow\) 42.19%) and SPL (27.51%\(\rightarrow\)31.06%). ## VI Discussion We compare our results with larger and more complex SoTA models to show that our relatively simpler model performs competently using locality knowledge. Our WIN model has a better SPL (63%) than DUET [34] (SPL: 59%) and same SR as EnvEdit [4](68%) on the R2R _TestUnseen_ split. Both these agents are trained using augmented datasets; EnvEdit is trained on changed visual appearances of MP3D scenes and DUET is trained on multiple auxiliary tasks to learn local and global topology encoding. Compared to these methods which are computationally complex, our model training is simpler as we make use of simple methods to extract neighborhood knowledge. This computational advantage makes our model's performance gain more significant. WIN performs better than the baseline model on REVERIE task on navigational metrics. The object grounding scores, RGS and RGSPL, are not improved because WIN model only considers the room-to-room relations and not object-to-room relations. Overall, the locality knowledge in our WIN method is advantageous for navigational agents. ### _Effect of varying mapping area_ Here we compare the SR of our WIN agent with various locality map resolutions (Table III). We see lower SR for extreme grid sizes and highest SR for 5x5 grid. This can be explained by the average room sizes (3.16\(m^{2}\)) in the MP3D dataset [18]. 
The average distance between viewpoints in the R2R dataset is 2.25\(\pm\)0.57m with one or more viewpoints of them occupying the same room. Hence the largest grid size that WIN can predict well is about 2 average sized rooms. As the map size increases beyond two rooms, the prediction accuracy drops and the agent may be misguided. ### _Impact of locality knowledge_ To measure the lower and upper bounds of WIN map prediction, we compare the SR with different types of maps provided to the agent in Table IV. We test four types of maps with grid size 5x5: Method _#1_ for random room types (Rand. type) and directions (Rand. dir.), _#2_ with random room location with room types from the locality knowledge ground-truth (GT type), _#3_ with map prediction from the Map Predictor module (Pred. type) and _#4_ with full ground-truth locality knowledge (LK GT). Method _#1_ represents the lower bound performance of the WIN model resulting in lowest SR because the direction and room types in the map do not correlate with the environment. Also, a large SR-SPL margin of _#2_ shows that the agent can still deduce the neighborhood but chooses wrong directions and takes longer trajectories. In the upper-bound scenario, _#4_, the agent utilizes the actual room type and direction and obtains the highest SR and SPL. ### _Limitations_ We observe that the performance of our WIN model degrades on trajectories with uncommon room types _i.e._ uncorrelated classes shown by dark colours (Fig. 2). In certain failure cases the agent loses confidence in vision-language based action predictions and gets stuck in some location when the locality suggests diametrically opposite actions. This could be tackled by using locality knowledge from large house plan datasets with various room-to-room relationship examples. In future work, we plan to explore large-scale locality learning from real house plans. ## VII Conclusion We present a novel approach for VLN based on using room locality knowledge to predict neighboring rooms in the indoor environments. Our modular WIN model demonstrates a significant performance gain in unseen environments compared to the SoTA baselines. In this study, we encode layout patterns commonly found in indoor environments using a locality prediction model and use this knowledge to assist Vision-language navigation agents in making action decisions at each time step. Navigational results on the R2R and REVERIE tasks show that the WIN method outperforms both baseline methods while reducing the success rate margin between seen and unseen environments. A potential extension of this work is to learn general topological relationships from large-scale house plan datasets such as CubiCasa5k [36] and ZInD [37].
2309.15231
Spatiotemporal patterns of Io's bright transient eruptions, 1978-2022
This study analyzes Io's thermally detected volcanic outbursts and mini-outbursts, generally called bright transient eruptions. We examine their evolving characteristics over the history of outburst observations between the Voyager flybys in 1978 and 2022. We catalog, compare, and interpret the data of these bright transient eruptions from several spacecraft flybys and numerous ground-based observation campaigns. To test the spatiotemporal behavior of these events, we compare them to a population of randomly spaced, stochastic events with an equal likelihood of occurrence anywhere on Io's surface. We find that the aggregate of all outbursts is consistent with a random distribution across Io, whereas mini-outbursts strongly prefer the trailing hemisphere (180 to 360 W). On shorter timescales, however, outbursts show a significant change in spatiotemporal behavior before and after the year 2012. Outbursts from 1995 to 2007 favor the northern leading hemisphere, while outbursts from 2013 to 2021 favor the southern trailing hemisphere. These temporally separated clusters of outbursts are remarkably similar to Io's two primary mountainous regions, indicating that outbursts may be related to mountain-forming activity. These trends show how bright transient eruptions are distinct from Io's other forms of volcanism. These could be essential constraints to assess models of Io's interior heat transport between tidal generation and volcanic distribution.
Christian D. Tate, Julie A. Rathbun, Alexander G. Hayes, Rosaly M. C. Lopes, Madeline Pettine
2023-09-26T19:48:03Z
http://arxiv.org/abs/2309.15231v1
# Spatiotemporal patterns of Io's bright transient eruptions, 1978-2022 ###### Abstract This study analyzes Io's thermally detected volcanic outbursts and mini-outbursts, generally called bright transient eruptions. We examine their evolving characteristics over the history of outburst observations between the Voyager flybys in 1978 and 2022. We catalog, compare, and interpret the data of these bright transient eruptions from several spacecraft flybys and numerous ground-based observation campaigns. To test the spatiotemporal behavior of these events, we compare them to a population of randomly spaced, stochastic events with an equal likelihood of occurrence anywhere on Io's surface. We find that the aggregate of all outbursts is consistent with a random distribution across Io, whereas mini-outbursts strongly prefer the trailing hemisphere (180 to 360 W). On shorter timescales, however, outbursts show a significant change in spatiotemporal behavior before and after the year 2012. Outbursts from 1995 to 2007 favor the northern leading hemisphere, while outbursts from 2013 to 2021 favor the southern trailing hemisphere. These temporally separated clusters of outbursts are remarkably similar to Io's two primary mountainous regions, indicating that outbursts may be related to mountain-forming activity. These trends show how bright transient eruptions are distinct from Io's other forms of volcanism. These could be essential constraints to assess models of Io's interior heat transport between tidal generation and volcanic distribution. ## 1 Introduction Jupiter's innermost Galilean moon, Io, is the most volcanically active object in the solar system. With far more energy to release than Earth and other bodies and less surface area to dissipate it, Io is in a class of its own. Its most intense volcanic eruptions, called outbursts, occur about monthly (Spencer & Schneider, 1996), whereas Earth's subaerial "large eruptions" (that produce \(\geq\)0.1 km\({}^{3}\) tephra) occur once or twice yearly (Siebert et al., 2015). Because Io's geological clock runs several orders of magnitude faster than Earth's, Io's liveliness lets us observe volcanism at both larger spatial scales and on faster timescales. Although Io's current data are relatively coarse in spatiotemporal resolutions, the persistent inquiry of scientists and advances in observational techniques over the last fifty years have produced a compelling, complex, and exciting portrait of Io. Nevertheless, a theoretical understanding of Io's volcanic mechanisms remains elusive. While other studies have summarized the significant achievements and limitations of the current knowledge about Io (de Kleer & Rathbun, 2023; McEwen et al., 2023), it appears that much more data and analysis are necessary to understand what makes Io tick. It is difficult to characterize Io's steady state behavior and to constrain its stochastic and periodic departures from that hypothetical steady state. We explore these gaps in knowledge by studying the historical record of Io's bright transient eruptions and how their behavior has changed over the last fifty years. Previous studies have analyzed portions of Io's outburst dataset. Vecder et al. (2012) examined 14 of the first outbursts observed by the Voyager and Galileo spacecraft and ground-based infrared photometry starting in the late 1970s. This study paid close attention to the Galileo dataset, which remains the source of Io's best spatial and spectral resolution even today. While Veeder et al. 
(2012) examined outbursts up to the year 2001 from the perspective of Io's total heat budget, Cantrall et al. (2018) investigated the geological context of 5 outbursts and 17 mini-outbursts that occurred from 2001 to 2015. More recently, de Kleer and Rathbun (2023) compiled 33 outburst and "sub-outburst" events from 1978 to 2018, and a recent IRTF observational campaign found seven new outbursts between 2017 and 2021 (Tate et al., 2023). Our review identifies 66 likely outburst and mini-outburst observed between 1978 and 2022. This list includes every event with some evidence of it being in the class of a bright transient eruption. We carefully filter the less confident events from this inclusive list and apply our statistical analyses to a more robust subset of events. ## 2 Data Campaigns using NASA's Infrared Telescope Facility (IRTF) started in the 1990s and remain active even after next-generation adaptive optics (AO) telescope systems came online in the late 1990s. These IRTF observation campaigns of Io's volcanism supply valuable long-term data with updated instruments over the course of three decades (Rathbun et al. 2010; Tate et al., 2023). In contrast to earlier observations, many of the outburst detections made after the year 2000 were primarily made with adaptive optics system (AO) telescopes including Keck, Gemini, and the European Southern Observatory (Marchis et al., 2002; Marchis et al., 2004; de Pater et al., 2016; de Pater et al., 2016; de Kleer et al. 2019). It is difficult to overestimate the importance of ground-based techniques in the years following the Galileo mission. Nearly two decades of consistent, high-resolution surveys of Io have created an amazing dataset with which to study the character of Io's volcanism over decadal timescales. We gather all published data about large-scale infrared observation campaigns capable of detecting outbursts on Io. The number of events used in our various analyses are listed in Table 1, and Table 2 is the inclusive list of Io's bright transient eruptions. The major campaigns and spacecraft missions are summarized in Table 3. Of the 66 events, 30 are outbursts, and 36 are mini-outbursts. The rationale for classifying these events is explained below. Although the instruments and techniques of these campaigns vary greatly, we compile plausible outbursts and mini-outbursts discovered in the major infrared detection campaigns. We generally accept the designation of an outburst or mini-outburst if the primary reference and subsequent publications give compelling evidence. For consistency across the whole dataset, however, we also devise and filter by a more uniform classification metric. ### Criteria used for including outbursts and mini-outbursts in this analysis This study aims to characterize the spatial, temporal, and thermal properties of bright transient events (e.g., outbursts and mini-outbursts). We use the definitions of outburst and mini-outbursts described by Tate et al. (2023), which require evidence that the eruption is sufficiently bright and confined to a small region on Io. Although not considered here, outbursts can also be inferable from enhancements in Io's plasma torus (Brown & Bouchez, 1997; Morgenthaler et al., 2019 and 2022). Even for direct detections, however, the use of the term "volcanic outburst" is not entirely consistent in the Io literature (see Tate et al., 2023 for a discussion of this point). 
To clarify and standardize the criteria necessary for our analyses of outbursts or mini-outbursts, we identify three primary criteria: 1. **A bright transient eruption event has a large infrared output.** We use the threshold in the intensity in the 3.8 \(\upmu\)m Lp-band, for which an outburst emits \(\rm I_{3.8\upmu m}>150\) GW/sr/\(\upmu\)m and a mini-outburst emits \(\rm I_{3.8\upmu m}>30\) GW/sr/\(\upmu\)m (de Kleer & de Pater, 2016a; Tate et al., 2023). Note that this intensity should be corrected for reflected sunlight, emission angle, and other photometric effects. 2. **An event is spatially constrained** if it is confined to a small region on Io. We adopt a maximum localization uncertainty of 15 degrees or \(\sim\)500 km in north-south and west-east directions. This localization threshold is near the best achievable with NASA's IRTF. 3. **An event is thermally constrained** if several 1-10 \(\upmu\)m intensity measurements can constrain its effective temperature, area, and total power. Four or more bands are preferred if the event is thoroughly non-uniform and is not well characterized with a 1-temperature fit. For our study, an outburst or mini-outburst is confirmed and spatially characterized if its data satisfies the first and second criteria. An event is thermally characterized if it satisfies the first and third criteria (assuming that care is taken to mitigate spectral contamination from other hot spots and reflected sunlight). Finally, an event is fully characterized if it satisfies all three criteria. Further constraints are possible, such as time-resolved changes in intensity and modeling the lava flow and cooling rates (Davies et al., 2000, 2005, 2006, 2010, 2014). When the 3.8 \(\upmu\)m intensity is not directly measured, criterion (1) can be satisfied by interpolating or extrapolating several wavelength measurements to 3.8 \(\upmu\)m or by evaluating the 3.8 \(\upmu\)m of a black-body fit. This is common for ground-based and New Horizons instruments that take 1.0-2.2 \(\upmu\)m spectra (Marchis et al., 2002; Tsang, 2014; Spencer et al., 2007). Intensities at wavelengths less than 1 micron, such as the \(<\)1-micron wavelength filters of the Hubble Space Telescope, have not by themselves been successful at constraining Io's eruption temperatures (Marchis et al., 2002; Milazzo et al., 2005). In the case of noisy intensity measurements with abnormally large uncertainties, such as full-disk sunlit imaging with the IRTF (Tate et al., 2023), the lower limit of the peak intensity should exceed the outburst or mini-outburst threshold. Not all confirmed outbursts need to satisfy criterion (2) if they have a sufficiently high 3.8 \(\upmu\)m intensities, often more than twice the intensity threshold \(\rm I_{3.8\upmu m}>300\) GW/sr/\(\upmu\)m. This exception is essential for early disk-integrated techniques and the sunlit IRTF campaign (Tate et al., 2023). Several early detections fall into this category, such as the colossal 1978 and 1986 outbursts (Witteborn et al., 1979; Veeder et al., 1994; Blaney et al., 1995), neither of which are named or localized. Some confirmed outbursts are thermally constrained if they have high-quality multi-spectral intensity measurements that control for spectral contamination from reflected sunlight and other hot spots. 
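To make the three criteria easier to apply at a glance, the following is a minimal Python sketch of the screening logic described above. The function and argument names are illustrative (not from the paper), and the minimum band count used for the thermal constraint is an assumption, since the text only requires "several" 1-10 \(\upmu\)m measurements with four or more preferred.

```python
def classify_transient(peak_intensity_lp, loc_unc_deg=None, n_bands=0):
    """Screen a detection against the bright-transient criteria of section 2.1.

    peak_intensity_lp : photometrically corrected 3.8-um (Lp-band) intensity in GW/sr/um.
    loc_unc_deg       : localization uncertainty in degrees (None if not localized).
    n_bands           : number of independent 1-10 um intensity measurements.
    """
    if peak_intensity_lp > 150:
        label = "outburst"
    elif peak_intensity_lp > 30:
        label = "mini-outburst"
    else:
        return None  # below the bright-transient intensity thresholds (criterion 1)

    spatially_constrained = loc_unc_deg is not None and loc_unc_deg <= 15   # criterion 2
    thermally_constrained = n_bands >= 2   # assumed minimum; >= 4 bands preferred (criterion 3)

    # Very bright events (taken here as twice the outburst threshold) are
    # accepted as confirmed even without localization.
    confirmed = spatially_constrained or peak_intensity_lp > 300
    return {
        "class": label,
        "confirmed": confirmed,
        "spatially_constrained": spatially_constrained,
        "thermally_constrained": thermally_constrained,
        "fully_characterized": spatially_constrained and thermally_constrained,
    }

# Example: a 200 GW/sr/um detection localized to within 10 degrees in three bands.
print(classify_transient(200, loc_unc_deg=10, n_bands=3))
```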
For instance, we consider 202010A a thermally (but not spatially) constrained outburst because its high-quality multi-spectral intensity measurements were taken in Jovian eclipse and occultation, which removes contamination from reflected sunlight and isolates the hot spot along one direction (Tate et al., 2023). For these reasons, the collections of spatially and thermally characterized events given in Tables 1 and 2 are overlapping but not identical. \begin{table} \begin{tabular}{|l|c|c|c|} \hline & **Outbursts** & **Mini-outbursts** & **Both** \\ \hline Total number of events & 30 & 36 & 66 \\ \hline Localized events & 17 & 27 & 44 \\ \hline Loc. 1995-2011 & 10 & 14 & 24 \\ \hline Loc. 2013-2022 & 7 & 13 & 20 \\ \hline Unique locations & 13 & 16 & 29 \\ \hline Events at Tvashtar Paterae & 6 & 0 & 6 \\ \hline Events at Pillan Patera & 2 & 4 & 6 \\ \hline Events at Loki Patera & 0 & \textgreater{}10 & \textgreater{}10 \\ \hline Thermally constrained events & 18 & 16 & 34 \\ \hline Therm. and loc. constrained & 13 & 15 & 28 \\ \hline Median effective temperature & 1240 K & 905 K & 1150 K \\ \hline Median effective total power & 7.8 TW & 1.2 TW & 3.7 TW \\ \hline \end{tabular} \end{table} Table 1: The total number of large transient volcanic events used in this study. [Table 2, the inclusive list of Io's bright transient eruptions, does not survive extraction intact; only fragmentary rows remain, e.g., the 1999/06/22 events 9906A and 9908A (Gish Bar?) and the 1999/11/25 Loki Patera mini-outburst.] Notes to Table 2: 1. 
(2014), Spencer et al., (2007). On 2007 January 18 IRTF observed a bright feature near Tvashtar (Rathbun & Spencer, 2009), and a short time later on 2007 March 1 the New Horizons byby observed an outburst and a Pele-type plume at Tvashtar. 20. de Pater et al. (2016a) reported mini-outbursts at Pillan in 2007, 2010, and 2015. Lellouch et al. (2015) detected another mini-outburst in 2008 at 4 micron intensity. Pillan was also active on 2015/03/31. The 2010 Pillan event is missing from Cantrall et al. (2018). 21. de Pater et al. (2016a) detected two mini-outbursts at Kanehekili Fluctus in 2010. A similar detection on 2010/08/21 comes close to a mini-outburst. Galileo saw two Prometheus-style plumes in 1997/5/6 and 1997/11/8. 22. de Pater et al. (2014a), de Kleer and de Pater (2016a), de Kleer et al (2019b). Heno qualified as a mini-outburst on 2013/08/20, 22, and 29. Rarog was also outbursting on 2013/08/22 and mini-outbursting 2013/08/15 to 2013/09/13. 23. de Kleer et al. (2014), de Kleer and de Pater (2016a) de Kleer et al (2019b). Juno did not detect a hot spot at the 201308C (JR159: 28.9 N, 228.8 W) location on orbit 10 (2017-12-16) but did on orbits 25 (2020-02-17) and (2021-02-21). * de Kleer and de Pater (2016a), Cantrall et al. (2018), de Kleer et al (2019b). Chors Patera was mini-outburst two days after its maximum intensity on 2014/10/22. Kurdalagon also approached mini-outburst levels on 2015/04/17. * de Kleer and Rathbun (2023), Tate et al. (2023). * Tate et al. (2023). 201801A (JR143: 20.3, 217 W) was detected on Juno's orbits 16 (2018-10-29), 24 (2019-12-26), 25 (2020-02-17), 32 (2021-02-21), and 33 (2021-04-15). It was not detected on orbits 10 (2017-12-16), 20 (2019-05-29), or 27 (2020-06-02). Pillan Patera (JR079) was detected on Juno's orbits 16 (2018-10-29), 17 (2018-12-21), and (2019-02-12). It was not detected on orbits 27 (2020-06-02) or 37 (2021-10-16). 202108D (JR102: 6.1 S, 186.4 W) was detected on Juno's orbits 25 (2020-02-17) and 32 (2021-02-21). It was not detected on orbits 33 (2021-04-15) or 37 (2021-10-16). * UP 254W was reclassified as an outburst. The max 3.8-micron intensity comes from Tate et al. (2023), and the effective temperature and power come from de Kleer et al. (2019). * Tate et al. (2023). IRTF localized 201906A within the Acala Fluces area. Acala (JR121: 8.3, 334.5 W) was detected on Juno's orbits 18 (2019-02-12), 20 (2019-05-29), 26 (2020-04-10) and 32 (2021-02-21). It was not detected on orbits 24 (2019-12-26) or 27 (2020-06-02). * Zambon et al. (2023), Patine et al., (2023). Intensity is at 4.7 microns. Laki-oi (JR205) was detected on Juno's orbits 10 (2017-12-16), 11 (2018-02-07), 25 (2020-02-17). It was not detected on orbits 20 (2019-05-29) or 37 (2021-10-16). * de Pater et al. (2023). Kanehekili (JR080) was detected on Juno's orbits 20 (2019-05-29), 24 (2019-12-26), and 25 (2020-02-17). It was not detected on orbit 37 (2021-10-16). \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline **Years** & **Observatory** & **Observations** & **Outbursts** & **Frequency** & **Mini-outbursts** & **Frequency** & **References** \\ \hline 1978 & LPL & 14 & 1 & 7.1\% & – & – & Witteborn et al. (1979) \\ \hline 1979 & Voyagers & 2 & 0 & 0.0\% & – & – & McEwen and Soderblom, (1983) \\ \hline 1979 - 1981 & IRTF, UMRT, UH & 27 & 1 & 3.7\% & – & – & Sinton et al. (1983) \\ \hline 1982 - 1983 & UKIRT & 42 & 0 & 0.0\% & – & – & Titlemore \& Sinton (1989) \\ \hline 1983 - 1993 & IRTF & 55 & 2 & 3.6\% & – & – & Veeder et al. (1994), Blaney et al. 
(1995) \\ \hline 1989 - 1992 & WIRO & 96 & 0 & 0.0\% & – & – & Howell \& Klassen (1995) \\ \hline 1995 - 1997 & IRTF, Lowell & 56 & 3 & 5.4\% & – & – & Spencer et al. (1997) \\ \hline 1996 - 2001 & Galileo & 7 & 1 & 14\% & 2 & 28.6\% & Rathbun et al (2004), Lopes et al. (2004) \\ \hline 1999 - 2000 & IRTF, WIRO, ESO & 28 & 4 & 14.3\% & – & – & Howell et al. (2001) \\ \hline 2001 - 2007 & IRTF, WIRO & 33 & 0 & 0.0\% & – & – & Rathbun \& Spencer (2010) \\ \hline 2001 & Keck, ESO & 13 & 3 & 23.1\% & 0 & 0.0\% & Marchis et al. (2002), (2005) \\ \hline 2003 - 2007 & Keck & 12 & 2 & 16.7\% & 6 & 50.0\% & de Pater (2014a), (2014b), (2017) \\ \hline 2007 & New Horizons & 2 & 1 & 50\% & 1 & 50.0\% & Laver et al. (2007), Tsang et al. (2014) \\ \hline 2008 - 2009 & Keck & 11 & 0 & 0.0\% & 2 & 18.2\% & de Pater (2014a), (2014b), (2017) \\ \hline 2010 - 2012 & Keck, Gemini & 18 & 0 & 0.0\% & 4 & 22.2\% & de Pater (2014b), (2017) \\ \hline \hline \end{tabular} \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Years**} & \multirow{2}{*}{**Observations**} & \multirow{2}{*}{**Outburst**} & \multirow{2}{*}{**Outburst**} & \multirow{2}{*}{**Min-outburst**} & \multirow{2}{*}{**Min-outburst**} & \multirow{2}{*}{**Time-averaged total**} \\ & & & & & & & \\ \hline **1978 - 1993** & **236** & **4** & \(3.4\pm 1.7\%\) & **–** & **–** & **–** \\ \hline **1995 - 2012** & **180** & **14** & \(16\pm 4\%\) & **15** & \(48\pm 12\%\) & **1.78 \(\pm\) 0.47 TW** \\ \hline **2013 - 2022** & **472** & **10** & \(4.3\pm 1.4\%\) & **15** & \(10\pm 3\%\) & **0.45 \(\pm\) 0.14 TW** \\ \hline **All, 1978 - 2022** & **888** & **28** & \(6.2\pm 1.2\%\) & **30** & \(17\pm 3\%\) & **0.69 \(\pm\) 0.13 TW** \\ \hline \end{tabular} \end{table} Table 4: Summary of the combined Io observation campaigns sensitive to the infrared signature of outbursts and mini-outbursts. Note that the activity in 1995-2012 was about four times higher than before or after. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**2013 – 2018**} & \multirow{2}{*}{**Keck, Gemini**} & \multirow{2}{*}{**271**} & \multirow{2}{*}{**3**} & \multirow{2}{*}{**1.1\%**} & \multirow{2}{*}{**13**} & \multirow{2}{*}{**4.8\%**} & \multirow{2}{*}{**de Kleer et al. (2019a)**} \\ \hline **2016 – 2022** & \multirow{2}{*}{**IRTF (eclipse)**} & \multirow{2}{*}{**77**} & \multirow{2}{*}{**3**} & \multirow{2}{*}{**3.9\%**} & \multirow{2}{*}{**–**} & \multirow{2}{*}{**–**} & \multirow{2}{*}{**Tate et al. (2023)**} \\ \hline **2016 – 2022** & \multirow{2}{*}{**IRTF (sunlit)**} & \multirow{2}{*}{**99**} & \multirow{2}{*}{**4**} & \multirow{2}{*}{**4.0\%**} & \multirow{2}{*}{**–**} & \multirow{2}{*}{**–**} & \multirow{2}{*}{**Tate et al. (2023)**} \\ \hline **2017 – 2022** & \multirow{2}{*}{**Juno**} & \multirow{2}{*}{**20**} & \multirow{2}{*}{**0**} & \multirow{2}{*}{**0.0\%**} & \multirow{2}{*}{**1**} & \multirow{2}{*}{**5.0\%**} & \multirow{2}{*}{**Pettine et al. 
(_2023_)**} \\ \hline **2022** & \multirow{2}{*}{**Keck, Gemini**} & \multirow{2}{*}{**4**} & \multirow{2}{*}{**0**} & \multirow{2}{*}{**0.0\%**} & \multirow{2}{*}{**0**} & \multirow{2}{*}{**0.0\%**} & \multirow{2}{*}{**de Pater et al., (2023)**} \\ \hline **2022** & \multirow{2}{*}{**JWST**} & \multirow{2}{*}{**1**} & \multirow{2}{*}{**0**} & \multirow{2}{*}{**0.0\%**} & \multirow{2}{*}{**1**} & \multirow{2}{*}{**100.0\%**} & \multirow{2}{*}{**de Pater et al., (2023)**} \\ \hline **All: 1978 – 2022** & **–** & **888** & **28** & \(6.2\pm 1.2\%\) & **30** & \(17\pm 3\%\) & \\ \hline \end{tabular} \end{table} Table 3: List of the major Io observation campaigns that are sensitive to the infrared signature of outbursts. Only the high sensitivity campaigns are used to measure the frequency of mini-outbursts. Figure 1: Mini-outbursts (top in red) and outbursts (bottom in blue) for the spatially constant events from 1995 to 2022. The global mosaic of Io is from Becker and Geissler (2005) and Williams et al. (2011) made from Galileo’s high-resolution visible-NIR images. Note how mini-outbursts erupt primarily between longitudes 180W and 320W, outside of which they are confined to equatorial region. Outbursts by contrast appear more uniform. Figure 2: Same as Figure 1, except that outburst locations (blue) and mini-outburst locations (red) are in two time periods: 1995-2012 (top) and 2013-2022 (bottom). Note how outburst eruptions dramatically changed their global patterns after 2012. Figure 4: **(A)** Effective surface area versus the high-temperature fits on a log-log plot of the thermally constrained outbursts (squares) and mini-outbursts (circles) points. The black lines show the Weins-law power of the area-temperature space, and the red lines show the range of areas and temperatures that emit 30 and 150 GW/sr/\(\upmu\)m at 3.8 \(\upmu\)m (Lp-band). The histograms are the (**B**) effective temperatures, (**C**) effective areas, and (**D**) total powers of the eruptions. The vertical lines are the median value of each population. Figure 3: Histograms of the spatially constrained outbursts (blue) and mini-outbursts (orange) binned in longitude (left) and latitude (middle). The top row is for the years 1995-2022, which spans the dataset of localized events. The middle row is 1995-2012, and the bottom is 2013-2022. The right plots show the number of outbursts and mini-outbursts discovered each year from 1978 to 2022 (middle right) and the estimated frequency percentage of event occurrence for three periods (right). Note that the event detections include some that are not spatially constrained. See Table 5 for spatiotemporal statistics and Table 4 for frequency constraints for these periods. ### 2.2 High-level trends After compiling a comprehensive record of Io's published outburst activity over the last five decades, several spatial and temporal patterns emerge. Importantly, many of the trends have become apparent only after a multi-decade baseline emerges to disentangle long term processes in Io's complex volcanic behavior. Many of these trends are presented here for the first time because they are difficult to discern on the sub-decadal timescales of previous studies. #### 2.2.1 Location trends The 17 localized outbursts in this study originate from 13 unique volcanic locations. The only confirmed repeating outburst locations occurred at Tvashtar Patera and Pillan Patera for 4 and 2 localized events, respectively. There is evidence that Acala Fluctus is also a repeating outburst location. 
Besides these, however, the other 11 outburst locations are "one-off" outburst events. Of these unique volcanic locations, over half (or 7 of the 13) are not clearly associated with any of the emission centers observed by Galileo. The event names in Table 2 are the location names of the associated hot spots, and events without previously known hot spots have alphanumeric names assigned by the discoverer (e.g., the event names "0002A", "UP 254W" and "202010A"). By contrast, the 27 localized mini-outbursts originate from 16 unique volcanic locations, most associated with known or named hot spots. Since a disproportionate number of outbursts occur without an apparent correlation with previously observed hot spots, the volcanic mechanism responsible for the brightest transient events might not require known hot spots. Instead, a subset of outbursts could reactivate dormant volcanoes or be "one-off events" unassociated with any previous hot spot. Figure 1 shows a relatively uniform scatter of outburst locations on the global scale. On a finer scale, however, several regions do not show outburst activity in this dataset (see Figures 1, 4, and 7). The sub-Jovian and anti-Jovian longitudes from 30W to 340W and 180W to 125W seem devoid of outbursts. No events occur south of 56S, where the 2013 event at Heno Patera took place. The spatial distributions of outbursts and mini-outbursts could be decoupled. Except for Pillan Patera, where several of both have occurred, outbursts and mini-outbursts seem to dominate different regions. #### 2.2.2 Changing location trends Although Figure 1 shows that outbursts have a roughly uniform global distribution over the four decades of this dataset, this pattern does not persist for shorter time periods. In fact, Figure 2 shows a stark change in outburst locations between the 1995-2012 and 2013-2022 periods, which favor the leading and trailing hemispheres, respectively. Mini-outbursts, by contrast, consistently prefer the trailing hemisphere (180W to 0W) for the entire dataset. The statistical significance of these trends is explored in sections 3 and 4, and we discuss their scientific significance in section 5. These changing trends designate three periods in the dataset. The first period, between 1979 and 1994, was before many modern infrared observational capabilities existed. The data are sparse, and we cannot say much about outburst behavior in that early period. What followed was a period when outbursts were primarily found in the leading hemisphere. The first was 9503A, detected with IRTF (Spencer et al. 1997). This trend continued through the Galileo mission and afterward until the 2007 New Horizons flyby detected the last outburst at Tvashtar (Tsang 2014; Spencer et al., 2007). No outbursts were detected anywhere on Io between 2008 and 2012. Outbursts were observed again in 2013 but were notably in the trailing hemisphere, and that trend continued at least until late 2021. #### 2.2.3 Temporal clustering on decadal timescales How often do outbursts occur on Io, and does this frequency change over time? Table 3 summarizes the number of infrared observations and outburst detections for each of the major ground-based observation campaigns and spacecraft missions surveyed. Some small ground-based campaigns are clumped for brevity, but every effort was made to verify the total number of observations in each 1-6 year period. Table 4 further summarizes these observations into three time periods: 1978-1993, 1995-2012, and 2013-2022. 
The number of outbursts in Tables 3 and 4 is 28 instead of the 30 recorded in Tables 1 and 2 because we excluded the unnamed January 1978 event and the late 2000 Tvashtar detection made with the Hubble Space Telescope (HST) at non-thermal wavelengths (less than 1 \(\upmu\)m). Outbursts are rare in the sense that extensive, multi-year observation campaigns often find only a handful. While estimating the frequency from individual campaigns is prone to the uncertainties of small number statistics, the aggregate information in Table 4 is considerably more robust. Several results are surprising: first, the average outburst frequency between 1978 and 2022 was \(23\pm 4\) per year, and second, outbursts were significantly more frequent in the years 1995 to 2012 than before or after. We justify these claims below. We define frequency as either the number of events in a period of time or, equivalently, as the probability of detecting an event in one observation that sees half of Io's surface. If \(n\) stochastic events are detected in \(N\) unbiased observations, then each observation has a probability of about \((n\pm\sqrt{n})/N\) of detecting an event, and roughly twice this probability applies to an event happening somewhere on Io. The frequency is then about \(365\times 2(n\pm\sqrt{n})/N\) events per year. This estimation assumes that outbursts (1) persist in a detectable fashion for about 24 hours, (2) are detectable everywhere on the visible hemisphere, and (3) are equally probable on the visible and invisible hemispheres. We use these assumptions mainly for consistency with previous studies (Spencer and Schneider 1996), and other sets of assumptions do not greatly affect the relative changes in frequency that we detect throughout the dataset. However, if outbursts generally do not sustain \(\rm I_{Lp}>150\) GW/sr/\(\upmu\)m brightness for 24 hours, or if they are not equally likely on the visible and invisible hemispheres, then these biases will skew the frequency estimate. We justify the 24-hour assumption with high-cadence measurements of three outbursts in 2013 that had decay half-lives of about 0.8-1.8 days (de Kleer and de Pater, 2016a). This definition also includes several observational assumptions: that (1) events detected at the same location within 24 hours count as one outburst, (2) events at the same location more than 24 hours apart count as two outbursts, and (3) events are equally detectable anywhere on the hemisphere visible to the observer. The first two assumptions are negligible because high-cadence campaigns rarely observe Io on consecutive nights. The latter bias, however, is more serious because nearly all observation techniques are less sensitive to events on Io's limb, especially for lower-power mini-outbursts and for high-latitude events. Due to these limitations, our estimates generally underestimate the true frequency. An early estimation of Io's outburst frequency was based on the five events found from 1978 to 1995 and gave a \(3.3\pm 1.5\)% probability, or \(12\pm 5\) outbursts per year (Spencer and Schneider 1996). A more recent study found an \(8\pm 3\)% frequency based on a 2016-2022 IRTF campaign (Tate et al., 2023). For the whole dataset from 1978 to 2022 summarized in Table 2, the frequency is \(6.2\pm 1.2\)% or \(23\pm 4\) outbursts per year. Although these values appear to be relatively persistent, outbursts are not constant on decadal or yearly timescales. 
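The frequency definition above can be written out explicitly. The following sketch (the helper name is illustrative) reproduces, to rounding, the \(6.2\pm1.2\%\) per-observation probability and \(23\pm4\) outbursts per year quoted for the full dataset, given 28 detections in 888 observations.

```python
from math import sqrt

def event_frequency(n_events, n_observations):
    """Per-observation probability and yearly rate under the stated assumptions:
    ~24-hour detectability, each observation sees half of Io, and events are
    equally likely on the visible and invisible hemispheres."""
    p = n_events / n_observations              # detection probability per observation
    dp = sqrt(n_events) / n_observations       # sqrt-n (Poisson) uncertainty
    return 2 * p, 2 * dp, 365 * 2 * p, 365 * 2 * dp

p, dp, rate, drate = event_frequency(28, 888)   # all outbursts, 1978-2022
print(f"{100 * p:.1f} +/- {100 * dp:.1f} % chance per observation")  # ~6.3 +/- 1.2 %
print(f"{rate:.0f} +/- {drate:.0f} outbursts per year")              # ~23 +/- 4 per year
```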
Table 4 gives frequencies of \(3.4\pm 1.7\%\), \(16\pm 4\%\), and \(4.3\pm 1.4\%\) for the periods 1978-1993, 1995-2012, and 2013-2022, respectively. The frequency for 1995-2012 is a factor of 3.7 higher than that for 2013-2022 and separated by more than two sigma, which we interpret as a significant change in outburst behavior. Observationally, the campaigns in 1995-2012 and 2013-2022 used similar techniques and many of the same telescopes (e.g., IRTF, Gemini N, and Keck II). We therefore cannot identify any major experimental biases that could systematically skew these results. Mini-outbursts require more sensitivity, and we estimate their frequency from only the spacecraft flybys and ground-based AO campaigns (see Table 3). Mini-outbursts between 1996 and 2022 occurred at a higher rate of about \(61\pm 11\) events per year, or \(17\pm 3\%\) of the time. This is 2.7 times more frequent than outbursts, and intriguingly this ratio is roughly constant as outburst frequency rises after 1995 and decreases after 2012. The mini-outburst frequency decreased from \(48\pm 12\%\) to \(10\pm 3\%\) between 1996-2011 and 2013-2022, respectively. The dataset is not sensitive to mini-outbursts before Galileo's 1996 arrival. The campaigns that are most sensitive to mini-outbursts are even less susceptible to experimental changes than the larger dataset used for outbursts. In summary, both outbursts and mini-outbursts are significantly more frequent in the years 1995/6-2012 than 2013-2022, and mini-outbursts are consistently 2-3 times more frequent than outbursts. There are also years-long periods of this dataset when no outbursts were detected. These periods of low outburst activity are 2002-2005, 2008-2012, and 2014-2017. The absence of detections could be because of the low observational cadence over these periods, especially the first two (Cantrall et al., 2018). However, this does not explain why the intermittent ground-based observation campaigns observing Io at these times succeeded in detecting a large number of mini-outburst events (see Figure 3). If this is not an observational bias, then mini-outburst activity seems more consistent from year to year, whereas outburst activity seems to fluctuate greatly and even cease for 3-5 year periods. #### 2.2.4 Temporal clustering on short timescales If frequency changes on decadal timescales, how constant is it on shorter periods? While this question is more difficult to test with our dataset, there is circumstantial evidence that outbursts and mini-outbursts are also clustered in time over 1-20 day timescales. Two examples of this phenomenon are: the three rapid outbursts at Amirani, Tvashtar, and Surt observed within a span of 3 days in 2001 (Marchis et al., 2002); and the three outbursts at Heno, Rarog and 201308C, and one mini-outburst at Loki, observed within 15 days in 2013 (de Kleer and de Pater, 2016a). The trio of outbursts in 2013 is especially compelling because, despite frequent observations, no other outbursts were detected in the five years before (2008-2012) or the four years after (2014-2018). One way to quantify clustering in time is to count the fraction of events that occur within a time interval of at least one other event, as sketched below. We arbitrarily chose an interval of 10 days. This clustering metric does not account for irregular schedules that do not always observe Io more than once in any given 10-day window. 
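A minimal sketch of this 10-day clustering metric follows; the dates in the example are illustrative stand-ins rather than the actual catalog.

```python
from datetime import date

def clustered_fraction(event_dates, window_days=10):
    """Fraction of events that occur within `window_days` of at least one other event."""
    clustered = 0
    for i, d in enumerate(event_dates):
        if any(abs((d - other).days) <= window_days
               for j, other in enumerate(event_dates) if j != i):
            clustered += 1
    return clustered / len(event_dates)

# Three closely spaced events and one isolated event: 3 of 4 are clustered.
dates = [date(2013, 8, 15), date(2013, 8, 22), date(2013, 8, 29), date(2014, 10, 22)]
print(clustered_fraction(dates))   # 0.75
```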
We find that 26 of the 60 events from 1995-2022 are clustered in time, or that \(43\pm 9\%\) of outburst and mini-outburst events happen within 10 days of another outburst or mini-outburst. The expected clustering of independent events with randomly timed observations is no greater than 10%. Comparing this to the measured \(43\pm 9\%\), we conclude that outbursts and mini-outbursts are significantly clustered at 10-day timescales. The results of this analysis are largely the same for the subsets of the data, including only the outbursts, only the mini-outbursts, both for 1995-2011, and both for 2013-2022. In each case, 30% to 50% of the events are clustered in 10-day intervals. This shows that bright transient events are more likely to occur close in time to other such events than expected by chance. #### 2.2.5 Temperature trends Outbursts are detected when relatively large quantities of high-temperature lava suddenly appear on Io's surface. Consequently, these events contain the hottest materials measured on the surface of Io (excluding the atmosphere and interior). Whether outbursts are, in fact, the hottest form of volcanism on Io or just the most visible source of high-temperature activity remains an open question. While outbursts are transiently the brightest hot spots on Io, they are not the only hot spots that reach the temperatures necessary for non-sulfurous volcanism. High-resolution measurements from Galileo's NIMS instrument show that many other hot spots reach comparable temperatures above 600K (McEwen et al., 1998; Williams and Howell 2007). Nevertheless, outbursts and mini-outbursts are significant - if not the primary - mechanisms for transporting silicate magma to the surface and resurfacing Io with non-sulfurous materials. The effective temperatures of the outbursts in this study range from about 700K to \(\sim\)1900K. Figure 4 shows the temperature-area behavior of the 33 constrained events. Some values are constrained with two-temperature fits, for which we take the hottest temperature. Mini-outbursts generally have lower effective temperatures, which could be an observational bias. Both populations have a maximum density of around 1200 K, consistent with silicate volcanism (\(<\)1475 K) and too hot for sulfur volcanism (\(<\)600 K) (Carr, 1986; Schneider and Spencer, 2023). Four of the 18 outburst events (\(22\pm 11\)%) have ultra-high temperatures greater than 1475K. These have sparked debate over possible ultramafic volcanism on Io (Howell et al. 1997). The first of these events was detected in 1986 at 4.8-8.7 \(\upmu\)m (Veeder et al. 1994), a wavelength range that is not ideal for constraining high temperatures. The other ultra-high temperature events are better constrained and remain the best evidence of possible ultramafic volcanism. The 9610A outburst at \(\sim\)1500 K (Stansberry et al. 1997) and the 1997 Pillan Patera outburst at \(\sim\)1600 K (Davies et al., 2001; Howell et al., 2001) occurred in the late 1990s. Subsequent surveys by Galileo and ground-based AO telescopes did not detect activity with temperatures \(>\)1475 K (Lopes et al. 2007). An exception to this is the 201308C event, which has the highest estimated temperature at 1900K. This event has a spectrum that is difficult to interpret, and different fitting strategies give different temperatures that range between 1270K and 1900K, with the latter being preferred (de Kleer et al., 2014). 
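For reference, the red threshold curves in Figure 4 follow from black-body emission: a single-temperature source must expose enough area to reach the 3.8 \(\upmu\)m intensity thresholds. The sketch below uses the standard Planck function; the helper names are illustrative, and this is not the authors' fitting code.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_radiance_per_um(wavelength_um, temp_k):
    """Black-body spectral radiance in W / m^2 / sr / um."""
    lam = wavelength_um * 1e-6
    b = 2 * H * C**2 / lam**5 / (math.exp(H * C / (lam * KB * temp_k)) - 1.0)
    return b * 1e-6   # per metre of wavelength -> per micron

def area_for_intensity(intensity_gw_sr_um, temp_k, wavelength_um=3.8):
    """Emitting area (km^2) needed to reach the given spectral intensity (GW/sr/um)."""
    area_m2 = intensity_gw_sr_um * 1e9 / planck_radiance_per_um(wavelength_um, temp_k)
    return area_m2 / 1e6

# Area needed to reach the 150 GW/sr/um outburst threshold at 1200 K: ~22 km^2.
print(f"{area_for_intensity(150, 1200):.0f} km^2")
```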
Although ultramafic volcanism remains controversial in the community, the data suggest that some ultra-high temperatures do occur, but they are rare and only appear as outbursts (since no mini-outbursts or persistent hot spots reach such high temperatures). These ultra-high-temperature outbursts are likely exposing relatively large areas of hot materials. #### 2.2.6 Power and location trends We do not find any strong correlations between outburst temperature and location. Nevertheless, location may correlate with total outburst power. There are 8 events with power greater than 10 TW, 6 of which occurred in the northern hemisphere. The significance of this trend is not yet quantified, however. ### 2.3 Repeating locations While most outbursts and mini-outbursts occur at one-off locations, there are several notable exceptions. The only confirmed repeating outbursts took place at Tvashtar and Pillan Patera, with the possibility of a third at Acala Fluctus. The repeating mini-outburst locations are at Pillan Patera, Loki Patera, and Kanehekili Fluctus. #### 2.3.1 Tvashtar Paterae Six outbursts occurred at Tvashtar Paterae between late 1999 and May 2007. Tvashtar Paterae was closely monitored during and after the Galileo mission. Galileo's SSI captured the first high-resolution observation of fire fountains or lava curtains there (Keszthelyi et al., 2001). After this period, ground-based AO campaigns did not detect further outbursts at Tvashtar until just before the New Horizons flyby in early 2007 (de Pater, 2014; de Kleer and de Pater, 2016a; de Kleer et al., 2019a). Although Tvashtar dominates the dataset in the time period 1999-2007, its location is consistent with other outbursts during the 1995-2007 interval. #### 2.3.2 Pillan Patera The Pillan Patera area is an active site for persistent and transient emission. Pillan was classified as a persistent hot spot in the Galileo era (Lopes et al., 2007). After 2001, however, AO observations detected it less than half the time (Cantrall et al., 2018). Throughout both periods, Pillan had two outbursts and two mini-outbursts as well as several events below the 30 GW/sr/\(\upmu\)m mini-outburst threshold (Marchis et al., 2000; Howell et al., 2001; Davies et al., 2001; de Kleer & de Pater, 2016a; de Kleer et al. 2019a; Tate et al. 2023). Pillan's 1997 outburst was likely due to lava fire fountains (Keszthelyi et al., 2001; Davies et al., 2001; Veeder et al., 2012). Regarding the number of events, Pillan Patera is second only to Tvashtar Paterae, which is notably on the opposite side of Io. Tvashtar and Pillan are at 120\({}^{\circ}\)W, 62\({}^{\circ}\)N and 244\({}^{\circ}\)W, 12\({}^{\circ}\)S, respectively, and are separated by a great-circle distance of about 120\({}^{\circ}\). In addition to Pillan's 1997 ultra-high-temperature outburst, Galileo detected an abnormally low-dust "stealth plume" at that location (Geissler & McMillan, 2008). Two Prometheus-style plumes were observed at Pillan on 6/28/1997 and 11/8/1997 with Galileo SSI limb imaging in the violet band. While plumes and outbursts are not causally linked, Pillan's ultra-high-temperature outburst could be associated with its subsequent outgassing. #### 2.3.3 Kanehekili Fluctus Two Prometheus-style plumes were observed over Kanehekili Fluctus in 1997 and corroborated by later surface color and morphology changes (Geissler, 2008). Kanehekili's plume detections suggest that it was active in the first half of the Galileo mission. 
Kanehekili had low activity for over a decade, with only a low-intensity Keck detection in 2001 (Marchis et al., 2002). In late 2010, Kanehekili erupted with two mini-outbursts and a sustained output of \(\rm I_{Lp}\) = 20-40 GW/sr/\(\upmu\)m for three months (de Pater et al., 2014a; Cantrall et al., 2018; Davies et al., 2012). Keck observed Kanehekili three times between August and November 2010 with a steady temperature of T \(\sim\) 520K. Since Kanehekili likely maintained this medium-temperature and high-intensity state between detections (possibly longer before and after the observations), its 2010 event is the longest-lived transient eruption. If mini-outbursts can last much longer than an average of 24 hours, then this would affect the mini-outburst frequency estimate of section 2.2.3 and likely underestimate their contribution to Io's total heat budget. Kanehekili was active again in late 2022, when Keck measured \(\rm I_{Lp}\) = 10 GW/sr/\(\upmu\)m in September, and three months later, in November, JWST measured \(\rm I_{Lp}\sim\) 35 GW/sr/\(\upmu\)m with a dominant temperature around 600K (de Pater et al., 2023). JWST measured the 1.7-5.3 \(\upmu\)m spectral intensity and found a 1.707-\(\upmu\)m SO enhancement centered on Kanehekili, evidence for plume activity. The combined evidence of plume and volcanic activity in 1997 and 2022, together with the medium-temperature mini-outbursts in 2010 and 2022, suggests that each eruption was remarkably similar. If so, then Kanehekili Fluctus could have a \(\sim\)12-year cycle, with the next mini-outburst and plume activity expected in the mid-2030s. #### 2.3.4 Loki Patera Loki is the most active hot spot on Io and is responsible for 10% of Io's total heat output, about 20% of the combined thermal output of all volcanic processes on Io (Veeder et al., 2012). The decades of monitoring Loki Patera illustrate the value of long-term observation campaigns and the surprising insights that such a dataset makes possible (Rathbun et al., 2002, 2006; de Pater et al., 2016; de Kleer et al., 2017; Bartolic et al., 2022; de Kleer & Rathbun, 2023). Loki's highly periodic emission reaches the mini-outburst threshold of \(30<\rm I_{3.8\upmu m}<150\) GW/sr/\(\upmu\)m every 400-550 days. This transient and predictable activity at Loki Patera is often treated as a special case, distinct from all other hot spots (Rathbun et al., 2002; Rathbun and Spencer, 2006; Davies et al. 2015). However, similarly semi-regular activity could be more common among other hot spots. As the temporal baseline grows and probes longer timescales, we may find that Loki is merely the most energetic example of a population of periodic hot spots, which could be due to the large size of the patera. Even though Loki is the most frequent mini-outburst hot spot in our list, including or excluding it does not change the significant temporal and spatial patterns we observe. For simplicity, we include Loki observations of \(\rm I_{3.8\upmu m}>40\) GW/sr/\(\upmu\)m (primarily from the de Pater et al., 2017 dataset). We note that there has never been a confirmed outburst observed at Loki. Although an outburst was detected in 1990 (Veeder et al., 1994), that event was not precisely localized in latitude. This event was likely not at Loki Patera because Loki has not had an outburst in the following thirty years of careful observation (Veeder et al., 2012; de Kleer & Rathbun, 2023). 
Given the subsequent detections of several outbursts at longitudes near Loki (such as Acala Fluctus), the 1990 outburst likely originated from some other location. #### 2.3.5 Acala Fluctus Area Acala Fluctus is a large volcanic region west of Loki Patera and the closest outburst location to it. Acala showed little thermal activity before its two 2019 outbursts (Tate et al., 2023). These took place in May and June 2019, a month and a half apart, with lower-intensity (\(\sim\)2 GW/sr/\(\upmu\)m) detections before, between, and after the eruptions. ## 3 Statistical Tests We establish statistical tests to ascertain whether large transient events are randomly distributed on the surface of Io. Prior investigations have mainly employed two tests, mean absolute latitude and mean pairwise spacing, to accept or reject a null hypothesis stating that the population is randomly distributed on a sphere (de Kleer and de Pater 2016b; Cantrall et al., 2018). In addition to applying these heritage techniques, we also investigate the spatial distribution of outbursts and mini-outbursts relative to the axes of a three-dimensional Cartesian coordinate system where the X-axis goes through the anti-/sub-Jovian points and the Z-axis goes through Io's poles. This coordinate system is convenient to assess hemispherical dichotomies (e.g., leading/trailing hemisphere) and quantify the confidence that any observed trends are real and not the result of a random distribution. We employ Kolmogorov-Smirnov (K-S) tests to compare the observed outburst and mini-outburst populations to each other, as well as to simulated populations from defined probability distributions (e.g., random). Compared to previous studies, this analysis uses a larger dataset that spans a greater range of time. Note that we did not apply the nearest-neighbor distance analysis used by Hamilton et al. (2013) and Tyler et al. (2015), as those studies analyzed hundreds of hot spots and patera locations and the method would not be appropriate for the smaller sample size of our dataset (with \(n\) between 5 and 44). ### Mean Absolute Latitude The mean absolute latitude \(\underline{|\varphi|}\) tests if a set of points has a polar or equatorial preference compared to a random collection. To know if \(\underline{|\varphi|}\) is consistent with a random population, it is compared with a collection of \(n\) randomly selected points. Random points have a mean absolute latitude of \(\underline{|\varphi|}\simeq 32.7\pm 32.7/\sqrt{n}\) degrees, with this uncertainty valid only for large \(n\). We compute the n-dependent uncertainty value for each set directly from the standard deviation of \(N\)=10,000 simulations of \(n\) random points on a sphere. The result of this statistical test is consistent with the null hypothesis if it falls between the 5th and 95th percentiles. The test provides evidence for a polar preference above the 95th percentile and for an equatorial preference below the 5th percentile. For example, to quantify the likelihood that the outburst and mini-outburst events (n=44) have a polar preference, we compute the actual or observed mean absolute latitude \(\underline{|\varphi|}=31.6^{\circ}\) and a distribution of expected \(\underline{|\varphi|}=32.7\pm 5.4^{\circ}\) for \(N\)=10,000 random sets of n=44 points each. Figure 5 shows the distribution of random simulations, which is nearly Gaussian for large \(n\) values. The actual value for \(\underline{|\varphi|}\) is in the 38th percentile of the \(\underline{|\varphi|}\) distribution of simulations. 
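The Monte Carlo comparison used in this example can be sketched as follows. The function names are illustrative, and details such as the random seed and number of simulations may differ from the authors' implementation; the same percentile comparison applies to the Cartesian projections discussed in the next subsection after converting each (latitude, longitude) pair to (x, y, z).

```python
import numpy as np

def random_latitudes(n, rng):
    # Uniform points on a sphere: sin(latitude) is uniform on [-1, 1].
    return np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, size=n)))

def mean_abs_latitude_percentile(lat_deg, n_sims=10_000, seed=0):
    """Percentile of the observed mean |latitude| against random simulations."""
    rng = np.random.default_rng(seed)
    n = len(lat_deg)
    observed = np.mean(np.abs(lat_deg))
    sims = np.array([np.mean(np.abs(random_latitudes(n, rng))) for _ in range(n_sims)])
    return 100.0 * np.mean(sims < observed), sims.mean(), sims.std()

# A synthetic random sample of 44 latitudes usually falls between the 5th and
# 95th percentiles; the simulated mean is close to the expected 32.7 degrees.
fake_lats = random_latitudes(44, np.random.default_rng(1))
pct, mu, sigma = mean_abs_latitude_percentile(fake_lats)
print(f"percentile = {pct:.0f}, simulated mean = {mu:.1f} +/- {sigma:.1f} deg")
```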
The mean absolute latitude statistic for these values is consistent with random locations. Therefore, the n=44 outbursts and mini-outbursts do not have a significant polar preference. We repeat this analysis for each subset of \(n\) points specified in the columns of Table 5 and record the percentiles in the row labeled "mean absolute latitude." We then repeat this for the other statistical tests that are listed in the rows of Table 5 and explained in sections 3.2 and 3.3. We call these "percentile of the mean" tests. Because the percentile is sample-size-independent, it is more informative than the calculated mean value. For this reason, Table 5 only reports the percentile values so that the statistical results of various sample sizes can be more easily juxtaposed. ### Comparison to Io-centered fixed body coordinate system To generalize the mean absolute latitude test to the other directions, we analyze points on the globe of Io in Cartesian coordinates (see Figure 3). The X, Y, and Z axes extend from the anti-Jovian (0 W) to the sub-Jovian point (180 W), from the leading (90 W) to the trailing (270 W) point, and from the south pole to the north pole, respectively. We also compute the mean absolute values along these axes, \(|\)x\(|\), \(|\)y\(|\), and \(|\)z\(|\), to measure the symmetry about the 90th-meridian, prime-meridian, and equator planes, respectively. For a set of longitude and latitude coordinates on Io, we project each point into \(\mathbf{x}_{i}=(x_{i},y_{i},z_{i})\) coordinates (Figure 2) before calculating the actual mean and standard deviation values for each direction. As explained above, we compare this measured mean to the averages of a large number of randomly generated simulations and calculate the percentile of the measured mean relative to those simulations. The expected means of the values \(\underline{X}\), \(\underline{Y}\), and \(\underline{Z}\) are 0.0 R\({}_{\text{Io}}\), or 0.0\({}^{\circ}\) from the plane of symmetry (equator, prime meridian, or 90th meridian). Conversely, the expected means of the _absolute_ values \(|\underline{X}|\), \(|\underline{Y}|\), and \(|\underline{Z}|\) are 0.5 R\({}_{\text{Io}}\), or 30.0\({}^{\circ}\) from the plane of symmetry. To continue the above example for mean absolute latitude, we can similarly quantify the polar preference of the n=44 outbursts and mini-outbursts by taking the mean absolute projection along these axes. This value is \(|\underline{z}|=0.489\) R\({}_{\text{Io}}\), which we compare to the distribution of random simulations \(|\underline{Z}|=0.500\pm 0.064\) R\({}_{\text{Io}}\). This puts \(|\underline{z}|\) in the 40th percentile of the simulated \(|\underline{Z}|\) values, meaning that the n=44 events do not have a significant polar preference. Since the \(|\underline{\varphi}|\) and \(|\underline{z}|\) tests both measure polar preference, the percentile values are approximately equal, \(|\underline{z}|\approx|\underline{\varphi}|\). Note that while the expected mean projection along the Z-axis corresponds to \(|\underline{Z}|=30^{\circ}\) latitude, this is slightly smaller than the \(|\underline{\varphi}|=32.7^{\circ}\) expected for the mean absolute latitude (compare the percentile values for \(|\underline{z}|\) and \(|\underline{\varphi}|\) in Table 5). ### Mean Pairwise Spacing Another test for randomness is the mean pairwise spacing, \(\underline{d}\). 
Instead of projecting each point along a global axis, this method calculates the great-circle distance between every pair of points, \(d(\mathbf{x}_{i},\mathbf{x}_{i^{\prime}})\) for \(i\neq i^{\prime}\). As above, this mean value is compared to the distribution of simulated values. This test measures any clustering or repelling nature of the points. As above, a set of points is inconsistent with randomness if its mean \(\underline{d}\) value is far from the expected value (the 50th percentile). We consider the set clustered for small percentiles (<5%ile) and repelled for large percentiles (>95%ile). ### Kolmogorov-Smirnov Tests We also use the Kolmogorov-Smirnov (K-S) test (Massey, 1951; Virtanen et al., 2020) to determine the goodness of fit, or the confidence that the set of points is consistent with a set drawn from a random population. We also compare two sets of points to determine the confidence that they come from the same population (Hodges, 1958; Virtanen et al., 2020). This statistical test is similar to, but somewhat more robust than, the percentile-of-the-mean statistics used above. We present both results because they measure different properties of the dataset. Since the K-S test is one-dimensional, we calculate the confidence in each of the X, Y, and Z directions, from which we take the largest value to be the lower limit of the combined confidence in all three dimensions. 
Figure 5: Io's spatial distributions of mini-outbursts (left) and outbursts (middle). The mean absolute latitude (top) and mean pairwise distance (bottom) show the statistical behavior of each relative to the histograms of ten thousand random simulations. 
Figure 6: Statistical tests to determine the probability that the observed distributions of outbursts and mini-outbursts are random. The blue histogram in each subplot represents results from 10,000 random simulations, and the solid black vertical line is the mean value of the outbursts (bottom) or mini-outbursts (top). The dashed black lines are the averages for all known hot spots. 
Figure 7: The outbursts of 1990-2011 (red) and 2013-2022 (blue) are different populations. **A** shows the outburst locations projected on the Y-Z plane (north is up, south down, leading left, trailing right). The ovals are solid when the outburst is in the sub-Jovian hemisphere and dashed for the anti-Jovian hemisphere. Subplots **B**, **C**, and **D** show the cumulative histograms in the X, Y, and Z directions. The K-S p-value indicates that the outbursts before and after 2012 are unlikely to be samples from the same population. 
Figure 8: Plot of outburst statistics for the years 1995-2011 (top) and 2012-2022 (bottom). 
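A matching sketch of the mean pairwise spacing statistic (great-circle separations between all event pairs) is given below; the coordinates in the example are the Tvashtar and Pillan locations quoted in section 2.3.2, and all names are illustrative.

```python
import numpy as np

def great_circle_deg(lat1, lon1, lat2, lon2):
    """Angular separation (degrees) between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon1 - lon2)
    cosd = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

def mean_pairwise_distance(lat_deg, lon_deg):
    """Mean great-circle separation over all unordered pairs (expected ~90 deg
    for random points; compare the hot-spot value of 89.6 in Table 5)."""
    n = len(lat_deg)
    dists = [great_circle_deg(lat_deg[i], lon_deg[i], lat_deg[j], lon_deg[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

# Tvashtar (62 N, 120 W) and Pillan (12 S, 244 W): ~116 degrees apart,
# close to the ~120 degrees quoted in section 2.3.2.
print(great_circle_deg(62, 120, -12, 244))
```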
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{**Outbursts**} & \multicolumn{3}{c|}{**Mini-outbursts**} & \multicolumn{3}{c|}{**Both**} & \multicolumn{1}{c|}{**Hot spots**} \\ \hline Time range & **1995-2011** & **2013-2022** & **all** & **1995-2011** & **2013-2022** & **all** & **1995-2011** & **2013-2022** & **all** & \\ \hline Sample \(n\) & 10 & 7 & 17 & 14 & 13 & 27 & 24 & 20 & 44 & 275 \\ \hline Hemi pref X & 34\% & 36\% & 28\% & **96\%** & 20\% & 75\% & 85\% & 18\% & 57\% & 4.8\({}^{*}\) \\ \hline Hemi pref Y & **4.3\%** & **99.7** & 62\% & **99.9\%** & 91\% & **99.9\%** & 89\% & **99.4\%** & **99.6\%** & -2.2\({}^{*}\) \\ \hline Hemi pref Z & **99.7\%** & 12\% & 90\% & 14\% & 11\% & **4.8\%** & 82\% & **4\%** & 30\% & -2.3\({}^{*}\) \\ \hline Hemi sym [X] & 2\% & 70\% & 11\% & 89\% & 58\% & 85\% & 36\% & 68\% & 51\% & 34.1\({}^{*}\) \\ \hline Hemi sym [Y] & 69\% & 68\% & 74\% & 95\% & 38\% & 82\% & 94\% & 51\% & 88\% & 30.1\({}^{*}\) \\ \hline Hemi sym [Z] & **97\%** & 28\% & 86\% & **0.4\%** & 86\% & 13\% & 21\% & 70\% & 40\% & 26.6\({}^{*}\) \\ \hline Mean abs lat \(|\theta|\) & 96\% & 26\% & 84\% & **0.3\%** & 83\% & 12\% & 22\% & 65\% & 38\% & 28.5\({}^{*}\) \\ \hline Mean pair dist & **1.0\%** & **4.0\%** & 63\% & **1.0\%** & 28\% & **2.6\%** & 52\% & 1.6\% & 20\% & 89.6\({}^{*}\) \\ \hline Mean pair dist\({}^{*}\) & 6.3\% & **4.0\%** & **99.9\%** & **2.0\%** & 35\% & **4.3\%** & **99.9\%** & **1.9\%** & 53\% & 89.6\({}^{*}\) \\ \hline \multirow{3}{*}{Summary} & \begin{tabular}{c} _leading,_ \\ _nonterm,_ \\ _pdeaneral pref.,_ \\ _clustered_ \\ \end{tabular} & \begin{tabular}{c} _training and_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ _nemi,_ \\ 
## 4 Interpretations and Results

Table 5 gives the percentile scores for the spatial statistical tests described above. These tests are evaluated for three time intervals. One is the whole dataset of localized events 1995-2022, and the others divide the dataset before and after the year 2012. The choice for these dates is discussed in section 4.3.

### Global trends over the whole dataset

The most statistically robust result of these tests is the mini-outburst 1996-2022 preference for the trailing hemisphere (\(\bar{\rm y}\)=99.9%ile, n=27) (see Figures 1, 4 and 6, and Table 5). Secondarily, mini-outbursts have a southern preference (\(\bar{\rm z}\)=4.8%ile, n=27). Outbursts on the other hand do not show a leading-trailing (\(\bar{\rm y}\)=62%ile, n=17) or a significant north-south preference (\(\bar{\rm z}\)=90%ile, n=17). Section 4.3 explores the significant temporal change of outburst preference along the leading-trailing axis.

Figure 9: Same as Figure 2 plotted with the density of mountains from the l=2 spherical harmonics coefficients from Kirchoff et al. (2011). The light areas have significantly more mountains than the dark areas. Outburst locations (blue) and mini-outburst locations (red) for 1995-2011 (top) and 2013-2022 (bottom).

### Polar trends and the lack thereof

Previous studies found that the outbursts and mini-outbursts of 2013-2018 have spatial distributions that are clustered at high absolute latitudes (de Kleer and de Pater, 2016b; Cantrall et al., 2018). While we confirm these results for the specific AO observations of 2013-2018, this trend is not persistent over longer time periods but is instead more generally consistent with random locations. Considering the entire dataset over multiple decades, outbursts and mini-outbursts do not significantly occur at high absolute latitudes (or closer to the poles). Nevertheless, it is interesting to note the weak but opposite polar behavior of outbursts (\(|\bar{\rm z}|\) = 86%ile, n=17) and the equatorial behavior of mini-outbursts (\(|\bar{\rm z}|\) = 13%ile, n=27). Although we do not find a poleward trend in the total n=44 dataset, this analysis treats each event the same without accounting for each event's peak temperature and power.
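The percentile scores in Table 5 compare an observed spatial statistic against the same statistic computed for randomly placed events. The sketch below is a minimal illustration of such a Monte Carlo percentile test, here for the mean absolute latitude \(|\theta|\) of n events against n locations drawn uniformly over a sphere; it is not the pipeline used in this work, and the number of trials and the example latitudes are placeholder assumptions.

```python
# Illustrative Monte Carlo percentile test (not the pipeline used in this work):
# compare the mean absolute latitude of n observed events against n locations
# drawn uniformly over the sphere.
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_latitude(lat_deg):
    # Mean absolute latitude |theta| in degrees.
    return np.mean(np.abs(lat_deg))

def random_latitudes(n, rng):
    # Latitudes of n points distributed uniformly over a sphere:
    # sin(latitude) is uniform on [-1, 1].
    return np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, size=n)))

def percentile_score(observed_lat_deg, n_trials=20_000, rng=rng):
    # Percentage of random trials whose statistic falls below the observed value.
    n = len(observed_lat_deg)
    observed = mean_abs_latitude(observed_lat_deg)
    null = np.array([mean_abs_latitude(random_latitudes(n, rng))
                     for _ in range(n_trials)])
    return 100.0 * np.mean(null < observed)

# Placeholder latitudes for 27 events (illustrative only, not measured values).
example_lats = rng.normal(loc=0.0, scale=20.0, size=27)
print(f"mean |lat| percentile: {percentile_score(example_lats):.1f}%ile")
```

A score near 50% indicates behavior consistent with random locations, while scores near 0% or 100% indicate equatorial or polar concentration, respectively.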
Previous studies hypothesized that events near the poles could be more violent with higher temperatures because they are driven by deep-mantle heating, and the opposite for volcanoes near the equator (Hamilton et al., 2013; Tyler et al., 2015; de Kleer & de Pater, 2016b). The available data do not significantly confirm this hypothesis because many powerful, high-temperature outbursts occurred near the equator. Although low-temperature events are generally rare at high absolute latitudes, we do not find any significant correlation between temperature and latitude. A tentative link between outbursts and deep mantle heating is that outbursts have high \(\lvert\bar{\rm z}\rvert\) (86%ile, n=17) and mini-outbursts have low \(\lvert\bar{\rm z}\rvert\) values (13%ile, n=27). Although these percentiles are statistically consistent with random, the K-S test gives a 96% confidence that these are different populations.

### Temporal trends and periods of distinct behavior

The spatiotemporal patterns of bright transient eruptions seem to change between three time ranges: 1978-1994, 1995-2012, and 2013-2022. We propose these periods because they appear coincident with changes (see Figure 3), even though their precise start and end dates are limited by the intermittent observational campaigns.

**1978-1994** (16 years)
\(\bullet\) Outbursts (n=5) had _no clear hemispherical preference_. None were detected in the years 1980-1985 and 1991-1994.
\(\bullet\) Mini-outbursts were not characterized due to the low sensitivity.

**1995-2012** (18 years)
\(\bullet\) Outbursts (n=15) were more frequent and preferred the _northern and leading hemispheres_. None were detected in 2001-2005 or 2008-2012.
\(\bullet\) Mini-outbursts (n=17) were more frequent and preferred the _trailing hemisphere and equatorial region_.

**2013-2022** (10 years)
\(\bullet\) Outbursts (n=10) preferred the _trailing hemisphere_ (and weakly the southern). None were detected in 2014-2017.
\(\bullet\) Mini-outbursts (n=19) weakly preferred the _trailing hemisphere_.

The outburst behavior underwent a significant transition between 1995-2007 and 2013-2021. During the first period, outbursts predominantly took place in the northern and leading hemispheres of Io, while during the second period, they favored the trailing southern hemisphere. This transition happened sometime between 2006-2013, as constrained by the 2006-2007 outbursts at Tvashtar Patera and the 2013 outbursts at Heno Patera and Rarog Patera. However, the exact time of the transition cannot be specified more precisely due to a multi-year absence in outburst activity from 2008-2012. Interestingly, mini-outbursts during this time continued at approximately the same frequency and were more likely to repeat from locations near the equator.

A spatiotemporal trend can be observed in the outburst behavior. Outbursts before 2012 favored the leading hemisphere while those observed after 2012 favored the trailing hemisphere. The outbursts of 1995-2007 were found on the northern and leading hemispheres while those of 2013-2022 were found on the southern and trailing hemispheres. The locations of outbursts and mini-outbursts during these two periods appear to be strongly decoupled. The Kolmogorov-Smirnov (K-S) test was used to rigorously demonstrate that outbursts form at least two distinct populations separated around the year 2012, with a 99.5% confidence to reject the null hypothesis that outbursts before and after 2012 are drawn from the same population.
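A two-sample K-S comparison of this kind can be run directly on per-event coordinates. The sketch below is a minimal illustration using scipy, with placeholder longitudes rather than the measured outburst locations; it ignores the circularity of longitude, which a careful treatment would handle, for example by comparing Cartesian projections of the event locations separately.

```python
# Minimal two-sample Kolmogorov-Smirnov sketch for the before/after-2012
# comparison described above.  The longitudes below are placeholders, not the
# measured outburst locations, and circularity of longitude is ignored here.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical West longitudes (degrees) for the two epochs.
lon_1995_2011 = rng.uniform(30.0, 180.0, size=10)    # leading-hemisphere heavy
lon_2013_2022 = rng.uniform(200.0, 340.0, size=7)    # trailing-hemisphere heavy

stat, p_value = ks_2samp(lon_1995_2011, lon_2013_2022)
print(f"K-S statistic = {stat:.2f}")
print(f"confidence to reject the same-population null = {100 * (1 - p_value):.1f}%")
```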
The distributions of outbursts before and after 2012 are most different along the leading-trailing axis and most similar towards and away from Jupiter. In addition to the outburst behavior on Io's surface, the behavior of mini-outbursts was also analyzed. The Kolmogorov-Smirnov (K-S) test was used to determine if mini-outbursts changed spatial patterns over the same time periods as outbursts. The maximum confidence of the mini-outburst K-S statistics was 86%, which is insufficient to reject the null hypothesis. Therefore, it was found that mini-outbursts do not change spatial patterns over the same time periods as outbursts. The difference in behavior between outbursts and mini-outbursts could be due to rheological differences in the upwelling magma or due to thickness and strength differences in the overlying lithosphere through which the magma must penetrate. This suggests that there could be a barrier preventing the less powerful mini-outbursts from breaking the surface.

## 5 Discussion

### How homogeneous are transient eruptions in time and space?

Over what timescales are the global patterns of outbursts and mini-outbursts statistically unchanging? Given the significant variations of outburst behavior throughout the historical record from 1978 to 2022, the lower limit of this timescale is about 30 years. At the high end, this could be much longer than the 45-year observational baseline if this dataset does not capture Io's primary rhythms. The 30-year lower limit is estimated from the years between 1995 and 2012, during which outburst spatial and frequency behavior significantly differs from the later 10 years, 2013 to 2022.

Both mini-outbursts and outbursts were about 4 times more frequent in the years 1995 to 2011 than in the decades before and after. Taken in light of the changing outburst location trends, the 1995-2011 outbursts preferred the leading hemisphere and were 3-4 times more frequent than the 2013-2021 outbursts that preferred the trailing hemisphere. Although mini-outbursts were similarly more frequent during the same period, they did not switch hemispheres. This could imply that the cause of the greater frequency in 1995-2011 is unrelated to the cause of the changing outburst locations. Alternatively, instead of the dichotomy between the frequently outbursting leading hemisphere in 1995-2011 and the moderately outbursting trailing hemisphere in 2013-2021, it is equally valid to interpret the data as semi-constant activity in the trailing hemisphere for both periods 1995-2021 and a highly active episode in the leading hemisphere 1995-2011. Support for this alternative interpretation is that 2 of the 10 outbursts 1995-2011 are in the trailing hemisphere (Pillan 1997 and Surt 2001), counter to the overall outburst trend at that time. The top map of Figure 2 shows these locations. With the 1995-2011 outbursts being 3.7 times more active than 2013-2021, the two outbursts at Pillan and Surt could represent a similar rate of outbursts in the trailing hemisphere occurring throughout the dataset. An early observation campaign in 1979-1981 noticed that the trailing hemisphere was generally more active at thermal wavelengths and contained six of eight possible outburst detections (Sinton et al., 1983). This is consistent with the May 5 and July 9, 1979, detections that were observed by Voyagers 1 and 2.
Although the Voyager instrument suite was not sensitive to outbursts, these flybys provided visual evidence of high activity in the trailing hemisphere from longitudes 320 W to about 110 W (McEwen & Soderblom, 1983). If this activity was due to bright transient eruptions, this longitudinal distribution was likely more similar to the 2013-2022 era than the intermediate Galileo era. Since the 1995-2011 period differed from what came before and after, Io's bright transient eruptions could alternate between the leading and trailing hemispheres on a \(\sim\)30-year timescale. Although the evidence for a global oscillation of Io's outburst behavior is presently weak, this hypothesis predicts that outbursts will transition to the leading hemisphere sometime in the 2030s. Even if something like this cycle is occurring on Io, however, it is difficult to form a physical hypothesis to explain this behavior (see Section 5.2).

Despite the uncertainty of how these findings constrain Io's geology, they have strong implications for interpreting short-duration observation campaigns or spacecraft missions. For instance, the detailed portrait of Io's volcanism captured by the Galileo mission from 1996-2001 might not represent Io's long-term behavior. This is especially true for \(>\)50-year variations. Several persistent hot spots observed during the Galileo era have disappeared and new ones have appeared elsewhere (de Kleer & Rathbun, 2023), suggesting that the evolution of persistent hot spots is slow but observable. Although hot spots fluctuate, their global distribution appears constant (de Kleer et al., 2019a). What makes outbursts unique is their variability on every timescale that we can measure.

### What could cause the leading-trailing change?

Although outbursts appear homogeneously (or mutually repelled) distributed over Io's surface during the 1978-2022 observational baseline, they switch from a leading hemisphere preference in 1995-2011 to trailing in 2013-2022. This is in contrast with mini-outbursts, which always prefer the trailing hemisphere. From a theoretical perspective, this dichotomy between the leading and trailing hemispheres is unexpected. Two prevalent surface heat flux models - derived from the deep-mantle and asthenospheric heating models - do not have leading-trailing asymmetries to explain this (Hamilton et al., 2013; Tyler et al., 2015). The only surface heat flux asymmetries in these models come from second-order effects along the sub-Jovian to anti-Jovian axis. However, since neither Io's hot spots nor dormant paterae follow these ideal heat flux models (Hamilton et al., 2013; Tyler et al., 2015), it is of little surprise that large transient eruptions are also different.

One of Io's asymmetries between the leading and trailing hemispheres involves the Io plasma torus. A 53-57 km/s tailwind bombards Io's trailing hemisphere and causes a slight atmospheric density enhancement in the leading hemisphere (Walker et al., 2010; Blocker et al., 2018; Bagenal & Dols, 2020). Although Io's atmosphere is complexly nonuniform with larger asymmetries between day and night, sub-Jovian and anti-Jovian, and pole and equator, the plasma headwind could differentiate the geology of these hemispheres (de Pater et al., 2021, 2023). Perhaps this affects the nature of volatile deposition on Io's surface. However, the SO2 ice distribution does not show a strong leading-trailing asymmetry (Trumbo et al., 2022; de Pater et al., 2023).
Another hypothesis is that outburst activity is related to Io's mountain-building regions. This idea was briefly proposed by Cantrall et al. (2018) to describe the trailing bias of the 2001-2016 bright transient events. The present discoveries make this theory more plausible. The bimodal (k=2) clustering of Io's mountains is remarkably similar to the outburst spatiotemporal patterns described above. Kirchoff et al. (2011) used spherical harmonic fitting to analyze the global pattern of mountain locations. They discovered that the mountains are highly concentrated around two main locations, one on the leading hemisphere at 20N 80W and one on the trailing hemisphere at 15S 260W, and that these locations are anti-correlated with hot spot locations (see Fig. 3 of Kirchoff et al. (2011) and Fig. 4.10 of Keane (2023)). These findings are confirmed by further analysis of Io's mountain distributions (White et al. 2014; Ahern et al. 2017; Keane et al., 2023). Importantly, the global clusters of mountains are near the mean outburst locations during the periods 1995-2011 and 2013-2022, respectively. The cluster around 20N 80W is denser, which could explain why the 1995-2011 outbursts were 4 times more frequent.

The high tectonic stresses in the mountainous regions can cause powerful seismic shocks that might trigger outburst eruptions. Galileo found that Io's mountains are surprisingly tall, so much so that they set stringent constraints on Io's lithospheric composition, temperature, and thickness (Turtle et al. 2007; White et al. 2014; Bland and McKinnon, 2016; Keszthelyi et al. 2023 and references therein). We hypothesize that slowly ascending magma builds pressure and energy in the upper lithosphere until a nearby seismic event ruptures the magma chamber and provides a path for its rapid ascent, although we recognize that future modeling efforts will be required to substantiate this possibility. The conditions needed for an outburst include the formation of high-pressure magma near the surface and the trigger that quickly releases this magmatic energy by transporting large volumes of high-temperature lava to Io's surface. We can call these two conditions "outburst potential" and "outburst trigger", respectively. The triggering mechanism could be io-quakes or landslides generated in Io's mountain-forming regions. After these conditions are met, there is an "outburst signature", namely the sudden and short-lived infrared enhancement produced by the high-temperature lava. In this hypothesis, therefore, mountains are somehow critical for creating the conditions necessary for outbursts.

Judging from the distribution of outbursts, the pockets of high outburst potential would accumulate over decades in a fairly uniform pattern over the globe of Io. Both the outburst potential and the triggering energy would need to reach a critical threshold before an outburst takes place. The sizes of these pockets would vary to account for the two orders of magnitude range of power between mini-outbursts and the largest outbursts. The magma scavenging effect (Hamilton et al. 2013) and the time required to produce these pockets would cause them to repel each other. Once outburst potential accumulates, however, a common trigger or series of triggers would act on all nearby pockets. Exploring this parameter space would be a valuable line of future research. The apparent scarcity of powerful triggers would counteract the magma-scavenging effect over long distances.
This balance of two opposing effects could also explain why bright transient events are both temporally clustered in \(\sim\)10-day intervals and spatially repelled over \(\sim\)20-year intervals. Assuming that mountains are associated with outbursts, we can infer that the northern-leaning cluster of mountains near 20N 80W was highly active in the years 1995-2011, followed by a dormant period from 2013-2022. At this time, the mountains in the trailing southern hemisphere centered at 15S 260W became more active. This would imply that seismic triggers act over large distances, \(\sim\)2000 km. To explain mini-outbursts with the same mechanism, the more spread-out cluster of mountains in the trailing hemisphere must be more amenable to lower energy events than the dense cluster of mountains in the leading hemisphere. Terrestrial mountains are associated with both the dampening of small earthquakes and the amplification of powerful earthquakes (Meunier et al., 2008; van der Elst et al., 2016; Weber et al., 2022; Li et al. 2019), and Io's mountains in the leading hemisphere may preferentially make the conditions for only large outbursts. The dense mountains would require higher outburst potential before the triggering threshold is met. If true, this means that outburst activity provides an indirect measure of Io's seismic activity, which appears to alternate between leading and trailing hemispheres on \(\sim\)30-year timescales.

### How are outbursts associated with persistent hot spots?

Like outbursts, persistent hot spots also changed behavior between the Galileo era (1996-2001) and the AO era (2001-2016) (Cantrall et al. 2018). Unlike outbursts, however, this transition seemed to start earlier and last longer. If mountain-forming is not causally related to outbursts, then perhaps correlated changes in other volcanic activity can inform what a common mechanism would look like. Cantrall et al. (2018) defined a persistent hot spot as something detected in more than half of the observations capable of detecting it. They found 18 such locations: 5 on the leading and 13 on the trailing hemisphere. An examination of the Galileo detections shows approximately 13 locations that meet this criterion: 5 in the leading and 8 in the trailing (Lopes et al. 2007). This is a net gain of five (or a 43 \(\pm\) 17% increase in) persistent hot spots in the trailing hemisphere over the course of \(\sim\)10 years. Although the transitional time frame varies for each hot spot, the first changes were visible around 2000 near the end of the Galileo era and the latest emerging hot spots established their full brightness around the year 2013 (see Fig. 12 of Cantrall et al. 2018).

Loki Patera is the most extravagant example of this transition. Loki's time series (see Fig 6.10 of de Kleer and Rathbun, 2023) shows a significant difference between a 540-day cycle before 2002 and a \(\sim\)480-day cycle after 2013. Between 2002 and 2009, Loki was in a low-brightness transitional state, followed by erratic episodes until 2013 when it returned to a periodic output even brighter than before. Loki follows the trend of the persistent hot spot population by dimming after 2000 and ramping back up to higher and more predictable levels in the early 2010s. The increase of persistent hot spots in the trailing hemisphere could be related to a similar switch in the outburst hemispheres.
For 2000-2013, the combination of fewer persistent hot spots in the trailing hemisphere and four times as many outbursts in the leading hemisphere might have a common cause. To compound this trend, Loki emitted significantly less energy from the trailing hemisphere in the 2003-2009 timeframe. This asymmetry in Io's heat flux might be measurable if we took a closer look at the entire dataset. Although inferences of Io's total power do not show a significant heat flux difference between the leading and trailing hemispheres, these are indirect measurements based on the number of discernable hot spots and paterae distributed over Io (Veeder et al. 2012; Cantrall et al. 2018). A more detailed, time-dependent heat budget solution would likely reveal global changes in total volcanic radiation that coincide with changes in outburst patterns. If true, this could lead to a common mechanism for decadal variations in Io's volcanism irrespective of volcanic style.

### Future analysis

There are several aspects of this dataset that remain to be explored. A rigorous correlation between transient and persistent hot spot locations would be of great value, as would a comparison between transient hot spots and mountains (analogous to Kirchoff et al., 2011). We did not explore the 2-5 \(\upmu\)m ratio of thermal emission, which could expand on previous discoveries of how outbursts differ from other volcanic styles (Davies et al. 2010). Future investigations can also expand the time clustering analysis in three ways: first, by factoring in the specific observation dates and detection dates with their respective observer geometries for each campaign to control for irregular sampling rates and hemispherical preferences; second, by evaluating the significance of time clustering in a broad, semi-continuous range of timescales between decades and days; and third, by correlating this temporal clustering with nearest-neighbor clustering to evaluate if bright transient events trigger each other or are responding to an independent mechanism.

As this dataset grows and data science advances, future analysis will require more advanced techniques. The application of machine learning (ML) will become more necessary. ML's pattern-finding ability will likely reveal surprising correlations in Io's behavior. Whether or not these new methods transform our knowledge of Io, both these and conventional methods would greatly benefit from larger datasets with a longer observational baseline. Therefore, continued monitoring of Io is crucial.

### The importance of more data

The most critical future work is to maintain and improve a regular cadence of high-quality observations of Io. Because Io's volcanism is highly variable and cannot yet be explained theoretically, more data is paramount to future discoveries. Our survey of the past 45 years emphasizes the critical importance of continuous, state-of-the-art monitoring of Io's volcanic activity to achieve a more comprehensive understanding. We demonstrate that volcanic activity varies significantly across decades, and the underlying processes are likely far more complex than a square-wave pattern with a \(\sim\)30-year period. Io's volcanic timescales are several orders of magnitude faster than those of terrestrial planets. This attribute - namely regular volcanic events - likely holds a wealth of insights for volcanism in general. Io's surface exhibits a level of activity unparalleled by any other object in our solar system.
The frequency of Io's volcanic events is comparable to the large atmospheric events on other planets (e.g. Earth's large tropical storms), emphasizing the unique opportunity to study geological processes at an accelerated pace. If Io serves as a geological analog for terrestrial volcanism, then a millennium of Earth's volcanic activity can be witnessed in a single year on Io. But it is important to remember that Io's geological timescales are long relative to those of spacecraft missions, observational campaigns, or even human lifespans. Despite the accelerated pace of Io's volcanism, decades of high-resolution observations are required to discern the primary cycle of this dynamic moon. Io changes on a wide range of timescales: from the decadal (or 5000-day) global trends of outbursts, to the \(\sim\)500-day periodicity of Loki Patera's brightness, to the 10-day temporal clustering in large transient eruptions. The complex interplay of these various timescales underscores the necessity of long-term, high-cadence, and high-quality monitoring campaigns.

## 6 Conclusions

This work comprehensively explores the behavior of Io's directly detected volcanic outbursts. The dataset we compiled shows that the localized outbursts appear uniformly distributed on Io's surface. However, this uniformity does not hold for temporal subsets of the dataset. We identified a significant change in the outburst locations before and after 2012. Random spatiotemporal distributions do not accurately characterize Io's powerful transient volcanic eruptions. Instead, outbursts cluster in specific regions at certain times. This change is a clue to the underlying processes causing Io's outbursts, which might be correlated with dense clusters of mountains. Mini-outbursts by contrast have a more constant spatial distribution that differs from outbursts - mini-outbursts are significantly clustered in the trailing hemisphere where mountains are less frequent. More data and analysis are necessary to understand whether different mechanisms are responsible for mini-outbursts. Locations like Pillan Patera show that outbursts, mini-outbursts, and semi-persistent hot spots can occur at the same location, but this is rare. Most outbursts and mini-outbursts belong to separate locations.

Furthermore, these findings highlight the critical importance of continuous, state-of-the-art monitoring of Io's volcanic activity: Io's volcanism is highly variable and cannot yet be explained theoretically, and more data is paramount to future discoveries. We hope these observations and trends can inform future studies when more examples of outbursts enable more rigorous analyses.

## 7 Acknowledgements

Special thanks to John R. Spencer for his valuable comments on the manuscript. Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
2309.13898
Extrinsic vs Intrinsic Criticality in Systems with Many Components
Biological systems with many components often exhibit seemingly critical behaviors, characterized by atypically large correlated fluctuations. Yet the underlying causes remain unclear. Here we define and examine two types of criticality. Intrinsic criticality arises from interactions within the system which are fine-tuned to a critical point. Extrinsic criticality, in contrast, emerges without fine tuning when observable degrees of freedom are coupled to unobserved fluctuating variables. We unify both types of criticality using the language of learning and information theory. We show that critical correlations, intrinsic or extrinsic, lead to diverging mutual information between two halves of the system, and are a feature of learning problems, in which the unobserved fluctuations are inferred from the observable degrees of freedom. We argue that extrinsic criticality is equivalent to standard inference, whereas intrinsic criticality describes fractional learning, in which the amount to be learned depends on the system size. We show further that both types of criticality are on the same continuum, connected by a smooth crossover. In addition, we investigate the observability of Zipf's law, a power-law rank-frequency distribution often used as an empirical signature of criticality. We find that Zipf's law is a robust feature of extrinsic criticality but can be nontrivial to observe for some intrinsically critical systems, including critical mean-field models. We further demonstrate that models with global dynamics, such as oscillatory models, can produce observable Zipf's law without relying on either external fluctuations or fine tuning. Our findings suggest that while possible in theory, fine tuning is not the only, nor the most likely, explanation for the apparent ubiquity of criticality in biological systems with many components.
Vudtiwat Ngampruetikorn, Ilya Nemenman, David J. Schwab
2023-09-25T06:30:23Z
http://arxiv.org/abs/2309.13898v1
# Extrinsic vs Intrinsic Criticality in Systems with Many Components

###### Abstract

Biological systems with many components often exhibit seemingly critical behaviors, characterized by atypically large correlated fluctuations. Yet the underlying causes remain unclear. Here we define and examine two types of criticality. _Intrinsic_ criticality arises from interactions within the system which are fine-tuned to a critical point. _Extrinsic_ criticality, in contrast, emerges without fine tuning when observable degrees of freedom are coupled to unobserved fluctuating variables. We unify both types of criticality using the language of learning and information theory. We show that critical correlations, intrinsic or extrinsic, lead to diverging mutual information between two halves of the system, and are a feature of learning problems, in which the unobserved fluctuations are inferred from the observable degrees of freedom. We argue that extrinsic criticality is equivalent to standard inference, whereas intrinsic criticality describes _fractional learning_, in which the amount to be learned depends on the system size. We show further that both types of criticality are on the same continuum, connected by a smooth crossover. In addition, we investigate the observability of Zipf's law, a power-law rank-frequency distribution often used as an empirical signature of criticality. We find that Zipf's law is a robust feature of extrinsic criticality but can be nontrivial to observe for some intrinsically critical systems, including critical mean-field models. We further demonstrate that models with global dynamics, such as oscillatory models, can produce observable Zipf's law without relying on either external fluctuations or fine tuning. Our findings suggest that while possible in theory, fine tuning is not the only, nor the most likely, explanation for the apparent ubiquity of criticality in biological systems with many components. Our work offers an alternative interpretation in which criticality, specifically extrinsic criticality, results from the adaptation of collective behavior to external stimuli.

Life emerges from an intricate interplay among a large number of components, yet how it achieves such exquisite organization remains unexplained. Several aspects of this question fall within the domain of statistical physics, which studies the emergence of collective behaviors from the interaction between microscopic degrees of freedom. Perhaps the greatest success of statistical physics is in describing spontaneous transitions between two phases of matter such as liquid and gas. For a class of phase transitions, such as between ferromagnetic and paramagnetic states, the critical point, which separates the two phases, displays unique properties, absent from either phase. Many of these properties are relevant to biological function: scale invariance allows scaling up without the need for redesign [1; 2; 3], insensitivity to microscopic details can form a basis for robust behaviors [4; 5], and strong correlations between components appear useful for effective information propagation [6; 7; 8; 9]. A tantalizing question arises: whether biology operates near a critical point [10; 11]. This idea has a long history, see, e.g., Ref. [12]. However, it is not until recently that high-precision, simultaneous measurements of hundreds to thousands of components in biological systems allow quantitative empirical tests of the criticality hypothesis.
Modern quantitative biology experiments have indeed observed seemingly critical behaviors in many systems across scales, from amino acid sequences [13] to spatiotemporal dynamics of gene expressions [4; 14] to firing patterns of neurons [15; 16; 17; 18; 19; 20] to velocity fluctuations in bird flocks [21; 22; 23]. In these systems, correlations among the components and susceptibilities to perturbations often appear to diverge with the system size. Yet, this ubiquity is somewhat surprising, not least because equilibrium statistical physics tells us that criticality requires a hard-to-achieve fine tuning of models to a special point in their parameter space. While biology may well be capable of fine tuning [24; 25], alternative explanations for the observed criticality exist [26; 27; 28]. Some signatures of criticality arise without fine tuning when observable degrees of freedom are coupled to an unobserved fluctuating variable or variables [26; 27; 29; 30]. Provided that the number of fluctuating variables is relatively small and their fluctuations are sufficiently large, this latent fluctuation need not depend on the specifics of the observable degrees of freedom such as the system size. In this case, criticality results from an _extrinsic_ effect. Although this mechanism seems to differ from the fine-tuning explanation, they are not entirely unrelated; interacting systems at criticality also generate large fluctuations. In fact, the usual definition of criticality describes not its mechanisms, but rather the behavior of the observable degrees of freedom such as diverging correlation length, scale invariance and nonanalytic thermodynamic functions (see, e.g., Refs. [31; 32; 33]). Such properties can emerge intrinsically from interactions between components when model parameters are carefully chosen, as is often the case in statistical physics. However, this _intrinsic_ mechanism is by no means the only one, nor is it a defining feature of criticality.

Here we introduce a new definition of criticality that spans both intrinsic and extrinsic mechanisms. Using the languages of learning and information theory, we show that a unifying feature of both types of criticality is a divergence of mutual information between two halves of the system.1 The rates of divergence depend on how the fluctuations scale with the system size and generally differ between intrinsic and extrinsic criticality. We show further that critical systems are equivalent to the problem of learning parameters from _iid_ samples, with the fluctuating fields playing the role of the parameters and system components that of _iid_ samples. This learning problem is characterized by diverging information between the parameters and samples. Through the learning-theoretic lens, we interpret intrinsic criticality as a _fractional_ learning problem, in which only a fraction of parameters is available for learning due to a sharpening of the _a priori_ distribution as the system size grows. In contrast, extrinsic criticality has a fixed _a priori_ distribution whose entropy, i.e., information available to be learned, does not shrink with the system size.
Footnote 1: We use the notation \(\mathbf{x}=\{x_{1},x_{2},\ldots,x_{N_{A}}\}\) ...

... that can be inferred from \(\mathbf{x}\).3
Importantly, the information between the two halves exists only to the extent that both depend on \(\phi\). In fact, this information is nearly the same as the information one has about the parameter \(\phi\) having observed \(\mathbf{x}\), i.e., \(I(\mathbf{x}^{A};\mathbf{x}^{B})\approx I(\mathbf{x};\phi)\) [35] (see Fig. 1C). We emphasize that this logarithmic divergence arises _without_ fine tuning and for a broad range of assumptions about \(P(\phi)\) and \(P(x_{i}\mid\phi)\).

Footnote 3: We assume that the number of parameters \(k\) is much smaller than the sample size \(N\). For \(1\ll N<k\), the information can grow _linearly_ with \(N\) [46].

**Extrinsic criticality is equivalent to statistical learning.** We can interpret the information divergence in inference problems as a signature of criticality, imposed extrinsically by the unknown, fluctuating variable \(\phi\). We consider a physical manifestation of an extrinsically critical system [26, 27] and write down the joint probability of \(N\) noninteracting, identical binary spins in an external magnetic field \(\phi\), \[P(\mathbf{\sigma}\mid\phi)=\prod\nolimits_{i=1}^{N}\frac{e^{\phi\sigma_{i}}}{2\cosh\phi} \tag{4}\] where \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\ldots,\sigma_{N})\) and \(\sigma_{i}\in\{\pm 1\}\). If the field is not constant, but a random variable that fluctuates for different realizations of the system, then marginalizing over this field yields \[P(\mathbf{\sigma})=\int\!d\phi\,P(\phi)\prod\nolimits_{i=1}^{N}\frac{e^{\phi\sigma_{i}}}{2\cosh\phi}, \tag{5}\] where \(P(\phi)\) is the marginal distribution of the fluctuating field. This equation takes the exact same form as Eq. (2), with the magnetic field playing the role of the model parameter and the spins that of _iid_ samples. As a result, it follows immediately that \(I(\mathbf{\sigma};\phi)\approx I(\mathbf{\sigma}^{A};\mathbf{\sigma}^{B})\approx\frac{1}{2}\log_{2}N\) [35], where \(\mathbf{\sigma}^{A}\) and \(\mathbf{\sigma}^{B}\) denote two halves of the system, see Fig. 1B. In fact, we can derive this result by noticing that the _a posteriori_ distribution, \(P(\phi\mid\mathbf{\sigma})\sim P(\phi)\exp(\phi\sum_{i}\sigma_{i}-N\ln\cosh\phi)\), sharpens as \(N\) grows, with the asymptotic variance decreasing as \(\operatorname{var}(\phi\mid\mathbf{\sigma})\sim 1/N\). In other words, as our knowledge of the parameter improves with \(N\), its _a posteriori_ differential entropy decreases as \(S(\phi\mid\mathbf{\sigma})\approx-\frac{1}{2}\log_{2}N\). Given that the _a priori_ entropy \(S(\phi)\) is finite and does not change with \(N\), we obtain \[I(\mathbf{\sigma}^{A};\mathbf{\sigma}^{B})\approx I(\mathbf{\sigma};\phi)=S(\phi)-S(\phi\mid\mathbf{\sigma})\approx\frac{1}{2}\log_{2}N, \tag{6}\] where the approximations hide the terms of order \(O(N^{0})\).

We can glean additional insights from a more traditional statistical mechanics argument. First we recast Eq. (5) as \[P(\mathbf{\sigma})=2^{-N}\int\!d\phi\,P(\phi)e^{N(\phi m-\ln\cosh\phi)}, \tag{7}\] where \(m=m(\mathbf{\sigma})=\sum_{i}\sigma_{i}/N\) is the magnetization. For large \(N\), we evaluate the above integral, using the saddle point approximation and assuming that \(P(\phi)\) satisfies certain technical conditions [26], \[P(\mathbf{\sigma})\approx 2^{-N}\sqrt{\frac{2\pi}{N(1-m^{2})}}P(\phi^{*})e^{N(\phi^{*}m-\ln\cosh\phi^{*})}, \tag{8}\] where \(\phi^{*}=\phi^{*}(m)=\tanh^{-1}m\) denotes the saddle point.
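The \(\frac{1}{2}\log_{2}N\) growth in Eq. (6) is easy to verify numerically for the model of Eq. (5). The sketch below is an illustration we add for concreteness (it is not the authors' code): because the number of up spins is a sufficient statistic for \(\phi\), \(I(\mathbf{\sigma};\phi)\) can be evaluated by quadrature over a Gaussian field; the grid resolution, field variance, and system sizes are arbitrary choices, and the computed information differs from \(\frac{1}{2}\log_{2}N\) only by an \(N\)-independent offset.

```python
# Numerical check (ours, not the paper's code) that I(sigma; phi) grows as
# (1/2) log2 N for N conditionally independent spins in a Gaussian field,
# Eq. (5).  The number of up spins k is a sufficient statistic for phi, so
# I(sigma; phi) = I(k; phi), which we evaluate on a grid of phi values.
import numpy as np
from scipy.stats import binom

def xlog2x(p):
    # Elementwise p * log2(p) with the convention 0 * log2(0) = 0.
    logp = np.log2(p, out=np.zeros_like(p), where=p > 0)
    return p * logp

def spin_field_information(N, sigma_phi=1.0, n_grid=2001, span=6.0):
    phi = np.linspace(-span * sigma_phi, span * sigma_phi, n_grid)
    prior = np.exp(-0.5 * (phi / sigma_phi) ** 2)
    prior /= prior.sum()                          # discretized Gaussian prior P(phi)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * phi))       # P(sigma_i = +1 | phi), from Eq. (4)
    k = np.arange(N + 1)
    pk_given_phi = binom.pmf(k[None, :], N, p_up[:, None])   # shape (n_grid, N+1)
    pk = prior @ pk_given_phi                     # marginal P(k)
    H_k = -xlog2x(pk).sum()
    H_k_given_phi = -(prior[:, None] * xlog2x(pk_given_phi)).sum()
    return H_k - H_k_given_phi                    # I(k; phi) = I(sigma; phi), in bits

for N in (64, 256, 1024):
    print(f"N={N:5d}  I={spin_field_information(N):.2f} bits"
          f"  0.5*log2(N)={0.5 * np.log2(N):.2f} bits")
```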
Defining the energy function as \(E(\mathbf{\sigma})=-\ln P(\mathbf{\sigma})\) and recalling that the thermodynamic entropy is the logarithm of the density of states--i.e., \(\mathcal{S}(m)=\ln[\frac{N}{2}\left(\begin{smallmatrix}N\\ N_{*}\end{smallmatrix}\right)]\) with \(N_{*}=N(1+m)/2\)--we obtain to the leading order in \(N\) [26] \[E(m)-\mathcal{S}(m)=\ln(1-m^{2})-\ln P(\phi^{*})+O(N^{-1}), \tag{9}\] which does not grow with \(N\). This equivalence between the energy and entropy, at all orders that grow with \(N\), signifies a very strong form of criticality [10]. To compute the mutual information, we write down the entropy of the system, \[S(\mathbf{\sigma})=-\sum_{\mathbf{\sigma}}P(\mathbf{\sigma})\ln P(\mathbf{\sigma})=\int_{-1}^{1}dm\,e^{\mathcal{S}(m)}P(m)E(m),\] where \(P(m)\) denotes the probability of a single state with magnetization \(m\).

Figure 1: **Divergent information signifies criticality.** A: Intrinsic criticality in equilibrium generally requires fine tuning. We depict the mutual information between two equal halves of the system, \(I_{1/2}(\mathbf{\sigma})=I(\mathbf{\sigma}^{A};\mathbf{\sigma}^{B})\) with \(N_{A}=N_{B}=N/2\), for the fully connected Ising model as a function of temperature for a range of system sizes \(N\) (see legend). We see that the information diverges with \(N\) only at the critical temperature \(T=T_{c}\). This divergence is logarithmic, with the asymptotic information given approximately by \(\frac{1}{4}\log_{2}N\) (inset). B: Extrinsic criticality, on the other hand, emerges without fine tuning. For a system of noninteracting spins, coupled to a common Gaussian fluctuating field \(\phi\) (inset), the information \(I_{1/2}(\mathbf{\sigma})\) always diverges with \(N\). In contrast to the fully connected Ising model at criticality, the asymptotic information grows faster with \(N\), \(I_{1/2}(\mathbf{\sigma})\approx\frac{1}{2}\log_{2}N\). C: The scaling exponent of the critical fluctuation, \(\operatorname{var}(\phi)\sim N^{-\gamma}\), controls the information divergence rate. We illustrate the information between two halves of the system \(I_{1/2}(\mathbf{\sigma})\) (filled circles), and between the system and the latent field \(I(\mathbf{\sigma};\phi)\) (empty circles) for conditionally independent identical spins, Eq. (5), under a Gaussian fluctuating field \(\phi\sim\mathcal{N}(\mu=0,s^{2}=N^{-\gamma})\) for various values of \(\gamma\) (see legend). The asymptotic scaling of the information is in good agreement with the expected logarithmic divergence \(\frac{1-\gamma}{2}\log_{2}N\) (lines), Eq. (19). We also see that \(I_{1/2}(\mathbf{\sigma})\leq I(\mathbf{\sigma};\phi)\), as expected from the data processing inequality for the Markov chain \(\sigma^{A}-\phi-\mathbf{\sigma}^{B}\).
Observing some of the spins provides information about the specific realization of \(\phi\), and hence about the other spins. However, the entropy of intrinsically induced critical fluctuations decreases with \(N\) quite generally, \(0\!<\!\gamma\!<\!1\), resulting effectively in only a _fraction_ of a parameter being available for learning, thereby a decrease in the information from its maximum possible of \(\frac{1}{2}\log_{2}N\). In Fig. 1C, we see that Eq. (19) agrees well with the asymptotic behavior of mutual information for a range of \(\gamma\). Thus, we have shown that, at least for a simple model, intrinsic criticality can be viewed as a learning problem, where the underlying large fluctuations in the order parameter leave sufficient freedom to learn its specific realization from observations of the system state. The expression of the information in terms of the difference of the _a priori_ and the _a posteriori_ entropy, Eq. (6), shows that these results will generalize to other critical systems: critical exponents will govern the _a priori_ differential entropy of the order parameter, while the _a posteriori_ differential entropy remains approximately \(-\frac{1}{2}\log_{2}N\). Similarly, for multi-dimensional order parameters, each dimension will contribute to the mutual information essentially independently. For correlated fluctuating parameters, the logarithmic divergence rate provides a measure of effective dimensions of the parameters [47]. Finally, extrinsic and intrinsic criticality will add up as well so that each extrinsically or intrinsically critical field (i.e., with \(N\)-independent or \(N\)-dependent fluctuations) will contribute \(1/2\) or a smaller amount to the coefficient in front of \(\log_{2}N\) in the information.

## II Signature of criticality in finite systems

In reality, we can only observe a finite number of components, and the analysis of asymptotic behaviors, while instructive, becomes less precise. To this end, we now turn to the observability of criticality in finite systems. Criticality admits a number of potentially observable signatures. We focus on the properties of the empirical joint distribution of the system components, which can be constructed directly from observational data and thus is readily usable in the context of living systems. A critical system is expected to exhibit Zipf's law, i.e., an inverse relationship between ranks and frequencies of the system states [10].

First we recap how Zipf's behavior emerges from large fluctuations [26; 27] in the asymptotic limit. Consider the joint distribution of \(N\) conditionally independent spins, \[P(\mathbf{\sigma})=\int\!\!d\phi\,P(\phi)\prod_{i=1}^{N}P(\sigma_{i}\mid\phi). \tag{20}\] We see that \(\sum_{i}\ln P(\sigma_{i}\mid\phi)\sim N\), and thus \(P(\mathbf{\sigma}\mid\phi)\) becomes a sharper function of \(\phi\) as \(N\) increases, with a characteristic width that scales as \(1/\sqrt{N}\). This scaling sets a threshold above which fluctuations are critical--that is, critical fluctuations are characterized by \(\mathrm{var}(\phi)\sim N^{-\gamma}\) with \(\gamma<1\). As \(N\to\infty\), a critical prior, \(P(\phi)\), appears flat with respect to \(P(\mathbf{\sigma}\mid\phi)\) which becomes infinitely sharp.
Therefore, we can make the approximation, \[P(\mathbf{\sigma}\mid\phi)\approx e^{-\mathcal{S}(\phi)}\delta(\phi-\phi_{\mathbf{\sigma}}^{*}), \tag{21}\] where \(\mathcal{S}(\phi)\!\equiv\!\ln\sum_{\sigma}\delta(\phi-\phi_{\mathbf{\sigma}}^{*})\) plays the role of the thermodynamic entropy and \(\phi_{\mathbf{\sigma}}^{*}\) is the maximum of \(P(\mathbf{\sigma}\mid\phi)\), assuming only one exists. Substituting the above approximation into Eq. (20) yields \[P(\mathbf{\sigma})\approx P(\phi_{\mathbf{\sigma}}^{*})e^{-\mathcal{S}(\phi_{\mathbf{\sigma}}^{*})}\equiv e^{-\mathcal{E}(\phi_{\mathbf{\sigma}}^{*})}, \tag{22}\] which illustrates that the energy function depends on \(\mathbf{\sigma}\) only through \(\phi_{\sigma}^{*}\), i.e., \(E(\mathbf{\sigma})=-\ln P(\mathbf{\sigma})=\mathcal{E}(\phi_{\sigma}^{*})\). Importantly, the above equation signifies Zipf's law via the equivalence between the extensive parts of the entropy and energy [10], \[\lim_{N\to\infty}\left(\mathcal{E}(\phi)-\mathcal{S}(\phi)\right)/N=\lim_{N\to\infty}-N^{-1}\ln P(\phi)=0. \tag{23}\] For mean-field criticality, \(\ln P(\phi)\sim N\phi^{4}\) [Eq. (17)] and the above cancellation holds when \(\phi\) is adequately small. This condition is guaranteed for a typical realization of \(\phi\) since \(\mathrm{var}(\phi)\sim 1/\sqrt{N}\). For extrinsic criticality, \(\ln P(\phi)\sim O(N^{0})\) and the entropy-energy equivalence need not rely on \(\phi\) being small.

In Fig. 2, we depict exact (infinite samples) rank-frequency plots for the fully connected Ising model at a range of temperatures. We see a clear deviation from the power-law behavior at all temperatures. The rank-frequency plot at \(T_{c}\) approaches Zipf's scaling, but only in the tail region and for an adequately large system (Fig. 2B). In a smaller system, Zipf's law can appear more accurate at \(T<T_{c}\) (Fig. 2A).

Figure 2: **Zipf’s law is an inaccurate description of critical mean-field systems.** We depict exact rank-ordered distributions for the fully connected Ising model [Eq. (13)] at various temperatures \(T\) (see legend) for the system size \(N\!=\!2^{6}\) and \(2^{10}\) (A and B, respectively). Note that the axes correspond to log-rank and log-frequency. The slope of the smoothed log-log plot illustrates how close the models are to Zipf’s law (dashed). Here we obtain the approximate slope (bottom row) by fitting a cubic polynomial to the ‘knees’ of the rank-frequency log-log plot. We see that the rank-frequency plots deviate from the power-law behavior at all temperatures, and this deviation does not appear to improve as the system grows. Importantly, the critical model (\(T\!=\!T_{c}\)) exhibits Zipf’s scaling (dashed) only for the least frequent states in the tail region of the rank-ordered distributions and only when the system is large enough (B). In fact, in a smaller system, the rank-frequency plot can appear more Zipf-like at \(T\!<\!T_{c}\) (A), see also Fig. 6.

Figure 3: **Zipf’s law emerges robustly for adequately large extrinsic fluctuations.** We show exact rank-ordered distributions for independent spins under a Gaussian fluctuating field of varying variance (see legend). We see that Zipf’s law becomes more accurate as the variance of the fluctuations and the system size increase. However, the rank-ordered distributions approach Zipf’s scaling over the entire range only when the system is adequately large (B). Here we obtain the approximate slope (bottom row) using the same method as in Fig. 2.
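Exact rank-ordered distributions of this kind are inexpensive to compute for models in this family, because all states with the same number of up spins share one probability. The sketch below is an illustration we add here (not the paper's code) for identical spins under a Gaussian field; the grid quadrature, the field variance, and the crude global slope fit (the figures instead fit the 'knees' of the curve) are our own assumptions.

```python
# Illustrative sketch (not the paper's code): exact rank-frequency curve for
# N identical spins coupled to a Gaussian field phi, as in Eq. (5).  States
# with the same number of up spins k share one probability, so only N+1
# distinct values and their multiplicities C(N, k) are needed.  For much
# larger N, work with log-probabilities to avoid underflow.
import numpy as np
from scipy.special import gammaln

def rank_frequency(N, sigma_phi=1.0, n_grid=4001, span=8.0):
    phi = np.linspace(-span * sigma_phi, span * sigma_phi, n_grid)
    prior = np.exp(-0.5 * (phi / sigma_phi) ** 2)
    prior /= prior.sum()                           # discretized Gaussian prior
    k = np.arange(N + 1)
    # log-probability of a single state with k up spins, marginalized over phi
    log_state = phi[:, None] * (2 * k[None, :] - N) - N * np.log(2.0 * np.cosh(phi))[:, None]
    state_prob = prior @ np.exp(log_state)
    multiplicity = np.exp(gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1))
    order = np.argsort(state_prob)[::-1]           # most probable states first
    ranks = np.cumsum(multiplicity[order])         # rank of the last state in each group
    return ranks, state_prob[order]

ranks, probs = rank_frequency(N=64)
# Zipf's law corresponds to slope -1 on a log-log plot of frequency vs rank;
# a single global fit is only a crude summary of how close the curve is to Zipf.
slope = np.polyfit(np.log(ranks), np.log(probs), 1)[0]
print(f"approximate log-log slope: {slope:.2f} (Zipf: -1)")
```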
This disconnect between the Zipf behavior and criticality in mean-field models is likely to be more visible under finite samples, which can only probe parts of the exact rank-frequency plots (see also Fig. 6). We emphasize here that the number of observations required to resolve the tail of this rank-frequency plot would be experimentally impractical, \(2^{N}\sim 10^{20}\) for \(N=2^{6}\) and \(2^{N}\sim 10^{300}\) for \(N=2^{10}\) (Fig. 2A&B). On the other hand, Fig. 3 shows that the rank-ordered distributions of identical spins under extrinsic fluctuations become more Zipf-like over the _entire_ range of frequencies and ranks as the fluctuation variance increases and as the system grows. However, the rank-frequency plots approach Zipf's law only when the system is sufficiently large (Fig. 3B). Although mean-field criticality results in a Zipf-like rank-frequency plot only in the hard-to-observe tail region, intrinsic criticality with a more general critical exponent--i.e., \(\mathrm{var}(\phi)\sim N^{-\gamma}\) with \(\gamma\neq 1/2\)--can generate rank-ordered distributions that are much closer to Zipf's law. In Fig. 4, we compare the system-size dependence of the rank-frequency plots for the fully connected Ising model at \(T_{c}\) (\(\gamma=1/2\)) to several models with larger fluctuations, including those potentially induced by intrinsic criticality of non-mean-field models (\(0<\gamma<1/2\)) as well as the limiting case of extrinsic criticality (\(\gamma=0\)). Figure 4A shows that the critical mean-field model exhibits Zipf scaling only in the tail region of the rank-ordered distribution (see also Fig. 2) and the agreement with Zipf's law does not improve, nor degrade, as the system grows. On the other hand, we see that if the fluctuation variance decreases more slowly with \(N\), i.e., \(\gamma<1/2\), Zipf's law gradually becomes more accurate as \(N\) increases, see Fig. 4B-F. This behavior implies that empirically observed Zipf behavior is likely to indicate either extrinsic criticality or intrinsic criticality of non-mean-field type, characterized by critical fluctuations that scale only weakly with the system size. ## III Zipf's law as a signature of criticality under finite samples Experimental measurements are finite in not only the number of observable degrees of freedom but also the number of observations. We now turn to examine the behavior of rank-frequency plots constructed from finite samples. In the following, we generalize our conditionally independent model, Eq. (5), to describe nonidentical spins, \[P(\mathbf{\sigma}\mid\phi)=\prod_{i}\frac{e^{\sigma_{i}w_{i}\phi}}{2\cosh w_{i} \phi}. \tag{24}\] Here \(w_{i}\sim O(N^{0})\) is the coupling strength between the spin \(\sigma_{i}\) and the field \(\phi\), which can differ from one spin to another. For convenience, we also define \[\Delta\equiv 1/\sqrt{\sum_{i}w_{i}^{2}}, \tag{25}\] Figure 4: **Rank-ordered distributions exhibit a smooth crossover between intrinsic and extrinsic criticality.** Normalized exact rank-frequency plots for a range of system sizes \(N\) (see legend in F) illustrate a gradual crossover from the fully connected Ising model at criticality (A) to spins under an extrinsic fluctuating field (F). This crossover is induced by varying the scaling exponent of the variance of fluctuations \(\mathrm{var}(\phi)\sim N^{-\gamma}\), from \(\gamma=1/2\) for the mean-field criticality in A [see Eq. 
(15)] to \(\gamma<1/2\) for non-mean-field intrinsic criticality in B-E (see label) and \(\gamma=0\) for extrinsic criticality in F. We see that the rank-frequency plots approach Zipf’s law (dashed) as the system grows, and at a faster rate for a smaller scaling exponent \(\gamma\). For mean-field criticality, the agreement with Zipf scaling neither improves, nor degrades, with increasing \(N\) (Panel A). In B-E, the prior over fluctuations takes the form \(P(\phi)\sim e^{-c\phi^{4}}\) where \(c\propto N^{2\gamma}\) [cf. Eq. (17)]. In all panels, the fluctuation variance at \(N=64\) (the smallest \(N\) shown) is the same and equal to that of the critical mean-field case. That is, in B-F, we have \(\mathrm{var}(\phi)=\lambda_{0}(64/N)^{\gamma}\) with \(\lambda_{0}\) denoting the variance of critical mean-field fluctuation at \(N=64\). which is the characteristic width of \(P(\mathbf{\sigma}\,|\,\phi)\) at large \(N\), i.e., \(P(\mathbf{\sigma}\,|\,\phi)\sim e^{-(\phi-\phi_{\sigma}^{*})^{2}/2\Delta^{2}}\) with \(\phi_{\sigma}^{*}\!=\!\sum_{i}w_{i}\sigma_{i}/\!\sum_{i}w_{i}^{2}\). Following the argument in the preceding section, we expect Zipf behavior when \(\operatorname{var}(\phi)\!\gg\!\Delta^{2}\!\sim\!O(N^{-1})\). Similarly, for mean-field criticality, we consider the rank-one Ising model, which generalizes the fully connected Ising model to nonidentical spins and is defined by the energy function, \[E(\mathbf{\sigma})=-\frac{1}{4N}\sum_{i\!j}w_{i}w_{j}\sigma_{i}\sigma_{j}=-\frac{1 }{4N}\left(\sum_{i\!i}w_{i}\sigma_{i}\right)^{2}, \tag{26}\] where \(w_{i}\!\sim\!O(N^{0})\) and \(w_{i}w_{j}\) describes the pairwise interaction between spins \(i\) and \(j\). This model can be recast as a conditionally independent model, with the conditional distribution of the spins given by Eq. (24) and an _a priori_ distribution that depends on both \(N\) and \(\{w_{i}\}\) (see Appendix B), \[P_{N}(\phi)=\frac{2^{N}}{Z}\sqrt{\frac{N}{\pi\beta}}e^{-\frac{N}{2}\phi^{2}+ \sum_{i\!\text{ in cosh }w_{i}\phi}}, \tag{27}\] where \(Z\!=\!\sum_{\sigma}e^{-\beta E(\mathbf{\sigma})}\). The thermodynamic critical temperature, \(\beta_{c}\!=\!2N/\sum_{i}w_{i}^{2}\), marks the point at which this distribution changes from unimodal to bimodal. Figure 5 illustrates that critical mean-field fluctuations are too small to generate experimentally observable Zipf's law. We consider a system of 60 conditionally independent spins under a number of _a priori_ distributions, \(P(\phi)\) (see inset), including that induced by a rank-one Ising model at \(T_{c}\) [Eq. (27)]. For each _a priori_ distribution, we draw \(10^{8}\)_iid_ realizations of the system and construct an empirical rank-frequency plot. In Fig. 5A, we see that the rank-one Ising model at \(T_{c}\) does not produce Zipf's law. Yet, if we make the fluctuation larger while fixing the shape (standardized moments) of the fluctuation prior, the resulting rank-frequency plot edges closer to Zipf scaling. However, the fluctuation variance is not the only factor that controls the behavior of the rank-frequency plot. In Fig. 5B, we see that at a fixed variance, an _a priori_ distribution with thicker tails (larger standardized moments) produces a more Zipf-like rank-order plot. We emphasize that the resolution of these plots, especially in the tails, is limited by the number of samples. While we do not rule out the possibility that mean-field criticality may exhibit Zipf behavior in the tail region (see also Fig. 
2), observing such behavior would require orders of magnitude more samples than \(10^{8}\) and would therefore be experimentally impractical. Extrapolating the asymptotic critical temperature to finite systems is, of course, somewhat dubious. In Fig. 6, we consider another frequently used empirical definition of criticality which identifies the critical point with the maximum in the specific heat or equivalently the energy variance. In the asymptotic limit \(N\!\to\!\infty\), this definition is identical to the thermodynamic critical temperature \(T_{c}\). For finite systems, however, the specific heat maximum occurs at a lower temperature \(T^{*}\!<\!T_{c}\) (Fig. 6B). This temperature also coincides roughly with another possible empirical definition of criticality, namely the maximum of the mutual information \(I(\mathbf{\sigma};\phi)\) which indicates maximum correlations and learnability (Fig. 6B). In Fig. 6A, we see that lowering the temperature of the rank-one Ising model from \(T_{c}\) to \(T^{*}\) makes the system closer to, but still visibly different from, Zipf's law. In fact, the closest agreement to Zipf's law occurs at an even lower temperature. Two factors contribute to this intriguing temperature dependence. First, for mean-field criticality, Zipf scaling is expected only in the tail of the rank-frequency plot (see Fig. 2B), which requires a very large number of samples to resolve. Second, the correspondence between criticality and Zipf behavior is blurred in finite systems with Figure 5: **Critical mean-field fluctuations are too small to support empirically observable Zipf behavior.** Panels A and B illustrate the effects of the width and the structure of the tails of the _a priori_ distribution \(P(\phi)\) (inset) on finite-sample rank-frequency plots, respectively. The red dotted curves correspond to the rank-one Ising model at criticality [Eqs. (26-27)]. In A, we consider linear scaling of critical mean-field fluctuations such that \(\operatorname{var}(\phi)\!=\!\delta^{2}\lambda_{c}\), where \(\lambda_{c}\) is the fluctuation variance of the rank-one Ising model at \(T_{c}\), for various scaling coefficient \(\delta\) (see legend). We see that critical mean-field fluctuations (\(\delta\!=\!1\)) do not result in Zipf behavior. Increasing the fluctuation variance while maintaining the overall shape of the fluctuation prior produces rank-frequency plots that progressively appear closer to Zipf scaling (dashed). In B, we consider the fluctuating field of the form \(P(\phi)\!-\!e^{-c|\phi|^{q}}\). We vary the probability in the tails of \(P(\phi)\) with the shape parameter \(q\) (see legend), and choose the scale parameter \(c\) such that the fluctuation variance is fixed and equal to that of critical mean-field fluctuations (red dotted lines). Decreasing \(q\) increases the probability of large \(\phi\), i.e., puts more mass in the tails of \(P(\phi)\), and improves the agreement between rank-frequency plots and Zipf’s law. Overall, while instructive in understanding critical behaviors, mean-field models are an unlikely candidate for explaining experimentally observed Zipf behavior. Here the results are for a system of 60 spins, \(10^{8}\) realizations per model and \(w_{i}\!\sim\!N(\mu\!=\!1,s\!=\!0.3)\) [see Eq. (24)]. the tendency for Zipf's law to be more accurate at subcritical temperatures (see Fig. 2A). In sum, we demonstrate that for mean-field models, empirically observable Zipf behavior can be completely uncoupled from the usual notion of criticality. 
Figure 6C illustrates the interaction-induced _a priori_ distribution \(P(\phi)\) at various temperatures. We see that \(P(\phi)\) is flat around its maximum at \(T_{c}\). In the thermodynamic limit, this condition leads to non-Gaussian fluctuations which break the central limit theorem and generate critical correlations. However, in finite systems, the field \(\phi\) is not well described by fluctuations in the immediate vicinity of its most likely values. We see that at the specific heat maximum \(T^{*}\), the _a priori_ distribution has a non-negligible density at \(\phi\!=\!0\) even though it is bimodal with maxima at \(\phi\!\neq\!0\). This distribution results in a larger fluctuation than at \(T_{c}\), resulting in higher energy variance as well as a rank-ordered plot closer to Zipf's law. As the temperature drops below \(T^{*}\), the most likely field values move further away from zero. Larger values of \(\phi\) suppress the variability of the system: each spin \(\sigma_{i}\) aligns with \(\mathrm{sign}(w_{i}\phi)\) with increasing probability, thereby the decrease in the energy variance. At high temperatures \(T\!>\!T_{c}\), the _a priori_ distribution becomes sharply peaked at \(\phi\!=\!0\), resulting in more random systems and thus a decrease in correlations. Rank-ordered plots also reflect this competition; reduced variability leads to a rank-ordered plot that decays faster than Zipf's law at low \(T\), whereas increased randomness yields a plot that appears flatter than Zipf's law at high \(T\). **Intrinsic fluctuations can lead to Zipf's law without fine tuning.** So far we see that, in the absence of external fluctuations, the empirical signatures of large correlated fluctuations, such as Zipf-like distributions, are hard to observe. While this statement is true for equilibrium systems, critically large intrinsic fluctuations can emerge generically when the system is endowed with certain dynamics. To illustrate this point, we consider a discrete-time, dynamical generalization of the conditionally independent spin model [Eq. (24)], \[P(\mathbf{\sigma}_{t+1}\mid\phi_{t})=\prod_{i}\frac{e^{\sigma_{i,x}w_{i}\phi_{t}}} {2\cosh w_{i}\phi_{t}}, \tag{28}\] with \(\phi_{t}\!=\!\beta(m_{t}+\alpha m_{t-1})\) and \(m_{t}\!=\!\frac{1}{N}\sum_{i}w_{i}\sigma_{i,t}\). Here the index \(i\) labels each spin and \(t\) the time step. The parameter \(\beta\) controls the stochasticity of the spins much like the inverse temperature and \(\alpha\) couples states separated by two time steps. We illustrate the dynamics of the fluctuating field \(\phi_{t}\) for various \(\beta\) and the corresponding _a priori_ distributions in Fig. 7B&C, respectively. In Fig. 7A, we see that this model can result in Zipf behavior over a range of model parameters, demonstrating that fine tuning is not a requirement for empirically observable Zipf behavior in the absence of external fields. Although the fluctuations are generated entirely internally, we can interpret this emergence of Zipf's law as extrinsic criticality. To see this, we note that the model parameters \(\alpha\) and \(\beta\) control the amplitude of the oscillations and thus the fluctuation variance (Fig. 7B&C). The system size \(N\) only determines the stochasticity of the dynamics. As a result, the _a priori_ distribution depends only weakly on \(N\) and becomes completely independent of \(N\) in the asymptotic limit. This diminishing system-size dependence makes the resulting fluctuations indistinguishable from extrinsic ones. 
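A minimal simulation sketch of Eq. (28) is given below; the stated Fig. 7 settings (\(N=60\), \(\alpha=-0.8\), \(w_{i}\sim\mathcal{N}(1,0.6)\)) are reused, but the value of \(\beta\) and the sample size are assumptions, and far fewer samples are drawn than the \(10^{8}\) used there:

```python
# Minimal sketch of Eq. (28) (assumed beta and sample size; far fewer samples
# than the 10^8 used for Fig. 7): simulate the dynamics and rank the visited
# states by empirical frequency.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
N, alpha, beta, steps = 60, -0.8, 2.0, 200_000
w = rng.normal(1.0, 0.6, size=N)

sigma = rng.choice([-1, 1], size=N)
m_prev = m_curr = float(w @ sigma) / N
counts = Counter()
for _ in range(steps):
    phi = beta * (m_curr + alpha * m_prev)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * w * phi))       # e^{w phi} / (2 cosh w phi)
    sigma = np.where(rng.random(N) < p_up, 1, -1)
    m_prev, m_curr = m_curr, float(w @ sigma) / N
    counts[sigma.tobytes()] += 1

freqs = np.array(sorted(counts.values(), reverse=True), dtype=float) / steps
well_sampled = freqs[freqs * steps >= 10]             # ignore barely-visited states
if len(well_sampled) > 2:
    r = np.arange(1, len(well_sampled) + 1)
    slope = np.polyfit(np.log(r), np.log(well_sampled), 1)[0]
    print(f"approximate log-log slope over well-sampled states: {slope:.2f}")
```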
Figure 6: **Empirical Zipf behavior needs not coincide with the critical point of the system.** A: We depict empirical rank-ordered distributions of the rank-one Ising model [Eqs. (26-27)] at various temperatures (see color legend in B). We see that the distribution is closest to Zipf’s law at an intermediate temperature, significantly lower than the critical temperature \(T_{c}\). As an empirical, alternative definition of a critical temperature for finite systems, we consider the maximum of the specific heat or, equivalently the energy variance. This definition results in \(T^{*}\!<\!T_{c}\) (see B), which is still significantly higher than the temperature that exhibits approximate Zipf behavior. B: The energy variance \(\mathrm{var}(E)\) (solid, left axis) and mutual information \(I(\mathbf{\sigma};\phi)\) (dashed, right axis) provide measures of correlations in the system. Both maximize below \(T_{c}\) roughly at the same temperature (\(T^{*}\) for \(\mathrm{var}(E)\)). C: The prior \(P(\phi)\) changes from unimodal to bimodal at \(T_{c}\), which indicates maximum correlations in the thermodynamic limit. However, in finite systems, \(P(\phi)\) cannot be accurately described by its behavior near maxima. We see that the specific heat peaks at \(T^{*}\!<\!T_{c}\) (see B), where \(P(\phi)\) is bimodal but with significant density at \(\phi\!=\!0\). Here the results are for a system of 60 spins, \(10^{8}\) realizations per model and \(w_{i}\!\sim\!N(\mu\!=\!1,s\!=\!0.3)\) [see Eq. (24)]. Figure 7: **Zipf’s law can emerge from intrinsically induced fluctuations without fine tuning.** A: Rank-ordered distributions for a dynamical spin model, Eq. (28), display Zipf’s law for a range of effective inverse temperatures \(\beta\) (see legend). B&C: We illustrate typical empirical distributions of the fluctuating field \(\phi\) and its typical dynamics (same legend as A). The results shown are for a system of 60 spins, \(10^{8}\) realizations per model and \(w_{i}\!\sim\!N(\mu\!=\!1,s\!=\!0.6)\), and we set \(\alpha\!=\!-0.8\) [see Eq. (28)]. Discussion Here we introduced general definitions of criticality encompassing extrinsically and intrinsically critical systems, and we showed that information-theoretic and learning-theoretic considerations allow us to view all non-spatial, critical systems on a similar footing. Namely, criticality leads to the logarithmic divergence in the information between two subsystems or between the system's observable degrees of freedom and its fluctuating latent field. The coefficient in front of the divergence is semi-integer for systems with extrinsic criticality, and other fractions for intrinsic criticality. Both situations can be viewed as learning the parameter from measurements of the state of system components, and the _a priori_ variance of this parameter is independent of the system size (extrinsic criticality) or decreases as the system size grows (intrinsic criticality). We focused on the scenario where the most critical a system can be is when it has \(\frac{1}{2}\log_{2}N\) bits of mutual information per dimension of the order parameter, which is equivalent to the _iid_ learning problems. Additional intrinsic couplings would then reduce the _a priori_ variance of the order parameter and hence reduce the mutual information. However, some biological systems may have the _a priori_ parameter variance that increases with \(N\)[48], so that more than \(\frac{1}{2}\log_{2}N\) bits are contributed to the mutual information per latent field. 
One can imagine this happening in an optimally designed sensory system, where spins are coupled to the field in a way to reduce the redundancy of the information they obtain about it. Investigating the properties of such systems from information-theoretic, learning, and statistical physics angles is clearly needed. Similarly, it is worth investigating systems in which the mutual information between macroscopic parts scales as a sublinear power of \(N\), (rather than a logarithm), which correspond to an infinite number of latent fields with hierarchically smaller _a priori_ variances [35].4 Finally, since all of our learning and information-theoretic arguments are asymptotic, and \(O(1)\) corrections may not be negligibly small compared to \(\log_{2}N\), subleading corrections are also worth investigating. Footnote 4: We note that some models, such as the random energy model, can generate extensive information [49], but these models are generally not learnable from finite data and hence of less experimental relevance [50]. In addition, there is a striking similarity between the critical behavior of mutual information in classical systems and entanglement entropy, a quantum information-theoretic measure of correlations. At quantum critical points, long-ranged correlations lead to diverging entanglement entropy, violating the _area law_[51]. For infinite quantum critical spin chains, this divergence is logarithmic in the subsystem size with a universal prefactor that is related to the central charge of the corresponding conformal field theory [52; 53; 54; 55; 56]. It would be interesting to develop a learning-theoretic picture of quantum criticality and explore whether and how the central charge relates to the effective number of latent parameters. We also investigated the observability of an empirical signature of criticality, namely Zipf's law. While the correspondence between Zipf behavior and criticality is precise in the thermodynamic limit, whether it holds for a finite system depends on how critical the system is, i.e., how large the _a priori_ variance \(\text{var}(\phi)\) is, compared to the width of the conditional distribution \(P(\mathbf{\sigma}\,|\,\phi)\). For extrinsic criticality, Zipf's law emerges robustly under an adequately broad _a priori_ fluctuation distribution. Intrinsic criticality, on the other hand, does not always induce large enough fluctuations to support Zipf behavior. In particular, mean-field critical fluctuations are too small to generate Zipf's law even in the infinite-sample limit. Indeed, under finite samples, the closest agreement to Zipf's law can occur at a temperature significantly lower than the thermodynamic critical temperature as well as the specific heat maximum, an alternative, empirical definition of a critical point. This approximate Zipf behavior at intermediate temperature results from the competition between order-promoting interactions and thermal noise, and is unlikely to be a signature of equilibrium intrinsic criticality in the usual sense. Perhaps the disconnect between intrinsic criticality and Zipf behavior in finite systems is unsurprising, not least because of the blurred notion of criticality away from the thermodynamic limit. While using the specific heat maximum to indicate criticality is tempting, it requires assumptions on the probabilistic model that describes the data. Zipf behavior offers an alternative, model-free definition, but as we showed, it can be nontrivial to observe in intrinsically critical systems. 
We emphasize that no finite-system definition of criticality captures all of its thermodynamic signatures; for instance, finite-sample Zipf behavior does not correspond to maximum energy fluctuations or maximum correlations (see Fig. 6). While our asymptotic analysis suggests that the correspondence between intrinsic criticality and Zipf behavior is more precise for larger systems, we focus on a relatively small system of 60 spins (Figs. 5-7) since it is more relevant to real measurements. In particular, a well-sampled rank-ordered plot becomes exceedingly difficult to achieve as the system size grows. For example, a rank-ordered distribution for the critical rank-one Ising model with 80 spins shows almost no structure even at \(10^{8}\) samples (see Fig. 8). This loss of structure due to finite samples is less severe for extrinsic fluctuations but the agreement with Zipf's law degrades with increasing \(N\) (Fig. 9), in contrast to the infinite-sample case, in which Zipf's law becomes more accurate as the system grows (Fig. 4F). We show further that some oscillatory systems can generate observable Zipf's law without fine tuning or external fluctuations. We argue that the mechanism behind this behavior is mathematically equivalent to the extrinsic mechanism since the scale of the collective dynamics--hence the variance of the fluctuations once the time variable is integrated out--is often independent of the system size. Indeed, collective oscillations are common both in mathematical models (e.g., Refs. [57; 58]) and in biological systems (e.g., Refs. [59; 60; 61]). Our results suggest that models with a global dynamical variable that stays within a certain range could offer another plausible explanation for empirically observed Zipf's law, without the need for extrinsic fluctuations. Finally, we discuss how subsampling may affect the observability of Zipf behavior. Many experiments do not measure the system in its entirety and what we can vary is the number of the observed components rather than the system size. Perhaps the unobserved degrees of freedom could play the role of an extrinsic source of fluctuations for the observed ones, resulting in extrinsically induced criticality and thus making observation of Zipf's law more probable. However, our simple model of intrinsic criticality does not support this thinking. Suppose we observe \(K\) out of the total of \(N\) spins. Intrinsic fluctuations quite generally become smaller with \(N\), i.e., \(\operatorname{var}(\phi)\sim N^{-\gamma}\) with \(0<\gamma\), whereas the width of the conditional probability \(P(\mathbf{\sigma}\mid\phi)\) decreases with \(K\), i.e., \(\Delta\sim K^{-1/2}\) [cf. Eq. (25)]. As a result, the relative fluctuation variance is \(\operatorname{var}(\phi)/\Delta^{2}\sim(K/N)\times N^{1-\gamma}\) with \(\gamma<1\) for critical systems. In other words, decreasing the observable fraction makes the fluctuation appear smaller and Zipf's law less likely (even though the smaller number of degrees of freedom makes it easier to obtain better-sampled rank-frequency plots, see Fig. 8). This effect is even more acute when the system size far outnumbers the observed components, e.g., a recording of neural spikes in the brain. Real systems can of course be more complicated than our simple model. For example, in spatially extended systems, the order parameter could be a field in space and the number of inferable parameters, e.g., the Fourier components of the order parameter, can depend on how many spins we observe. 
Thus, the bits available to be learned can depend on the number of observed spins, even though the _a priori_ variances of the parameters do not (as they are set by the size of the entire system). Investigations of criticality and its empirical signatures in this setting are in order. We end by pointing out that many biological critical systems become more Zipf-like as they grow [10], which begs the question of why this happens. As pointed out in Ref. [26], consider a sensory system that is learning the state of the outside world (that is, responds to its different values differently); one would expect this system to be constructed in a way not to decrease the variability of the world when the system size grows. Such systems would always be critical, and specifically extrinsic critical, maybe explaining their ubiquity. ###### Acknowledgements. We thank William Bialek, Stephanie Palmer, and Pankaj Mehta for valuable discussions. VN and DJS acknowledge support from the National Science Foundation, through the Center for the Physics of Biological Function (PHY-1734030). DJS is supported in part by the Simons Foundation and by the Sloan Foundation. IN was supported in part by the Simons Foundation Investigator grant and the NIH grants 1R01NS099375 and 2R01NS084844. ## Appendix A Mutual information in conditionally independent models In this appendix, we consider conditionally independent spin models and derive the leading contribution to the mutual information between the spins and the fluctuating field and between two halves of the system in the many-spin limit. First, we write down the probability distribution of the spins, \[P(\mathbf{\sigma})=\int\!\!d\phi\,P(\phi)\,\prod_{i}P(\sigma_{i}\mid\phi), \tag{10}\] where \(P(\phi)\) denotes the distribution of the fluctuating field \(\phi\). The conditional probability of each spin reads \[P(\sigma_{i}\mid\phi)=\frac{e^{\sigma_{i}w_{i}\phi}}{2\cosh w_{i}\phi}, \tag{11}\] Figure 9: **Zipf’s law emerges robustly from large extrinsic fluctuations.** We display rank-ordered distributions for the conditionally independent model with \(N\) spins for \(N\!=\!40,60,80\) (left to right) under a Gaussian fluctuating field of different widths (see legend). When the fluctuation \(\phi\) is large compared to the width of the conditional distribution \(P(\mathbf{\sigma}\mid\phi)\)—i.e., when \(\operatorname{std}(\phi)\gg\Delta\), see Eq. (25)—the rank-ordered plots exhibit Zipf behavior (dashed). The empirical rank-ordered distribution displays meaningful structures only when constructed from adequate samples. Larger systems require more samples; for \(N\!=\!80\), the distribution is completely flat for \(\operatorname{std}(\phi)/\Delta=2\) even with \(10^{8}\) samples (c). The results shown are for \(10^{8}\) realizations per model and \(w_{i}\sim\mathcal{N}(\mu\!=\!1,s\!=\!0.3)\). Figure 8: **Subsampling does not make intrinsically critical systems appear more Zipf-like.** We depict the rank-ordered frequency for the rank-one Ising model of \(N\) spins, Eq. (26), at the critical temperature, for \(N\!=\!40,80\) (a & b), and a range of observed subsystem size \(K\) (see legend). We see that limiting observation to a fraction of the full system, i.e., \(K<N\), does not result in more Zipf-like behavior (dashed). It leads however to more structured rank-ordered distributions, especially when the system is large (b), since a system of fewer spins requires a smaller sample size to be well-sampled. 
The results shown are for \(10^{8}\) realizations per model and \(w_{i}\sim\mathcal{N}(\mu\!=\!1,s\!=\!0.3)\). where \(w_{i}\) parametrizes the influence of the fluctuating field \(\phi\) on spin \(i\). For convenience, we introduce \[g(\phi;\mathbf{\sigma})=-\tfrac{1}{N}\sum\nolimits_{i}(\sigma_{i}w_{i}\phi-\ln \cosh w_{i}\phi), \tag{10}\] such that \[P(\mathbf{\sigma}\mid\phi)=\prod\nolimits_{i}P(\sigma_{i}\mid\phi)=e^{-Ng(\phi; \mathbf{\sigma})}/2^{N}. \tag{11}\] Therefore the full joint distribution can be written as \[P(\mathbf{\sigma},\phi)=P(\mathbf{\sigma}\mid\phi)P(\phi)=P(\phi)\times e^{-Ng(\phi; \mathbf{\sigma})}/2^{N}. \tag{12}\] We now consider the limit \(N\to\infty\). We assume that the weights \(\{w_{i}\}\) are independent of the system size \(N\) (e.g., they are drawn from a fixed distribution) such that \(g(\phi;\mathbf{\sigma})\) is intensive. For a _smooth_ prior--i.e., \(\lim_{N\to\infty}\tfrac{1}{N}\ln P(\phi)=0\)--the joint distribution [Eq. (12)] is dominated by fluctuations around the minimum of \(g(\phi;\mathbf{\sigma})\), \[P(\mathbf{\sigma},\phi)\approx P(\phi^{*}_{\mathbf{\sigma}})P(\mathbf{\sigma}\mid\phi^{*} _{\mathbf{\sigma}})\times e^{-\tfrac{N}{2}g^{*}(\phi^{*}_{\mathbf{\sigma}})(\phi- \phi^{*}_{\mathbf{\sigma}})^{2}_{\mathbf{\sigma}}}, \tag{13}\] where \(\phi^{*}_{\mathbf{\sigma}}\) is the root of \(g^{\prime}(\phi;\mathbf{\sigma})=0\) and we drop the superfluous dependence on \(\mathbf{\sigma}\) from \(g^{*\prime}(\phi;\mathbf{\sigma})=\tfrac{1}{N}\sum\nolimits_{i}w_{i}^{2}\, \operatorname{sech}^{2}w_{i}\phi\). We see that when conditioned on the spins, the fluctuation is Gaussian, \[P(\phi\mid\mathbf{\sigma})\approx\mathcal{N}\left(\phi\mid\mu\!=\!\phi^{*}_{\mathbf{ \sigma}},\;s^{2}\!=\!\frac{1}{Ng^{*\prime}(\phi^{*}_{\mathbf{\sigma}})}\right). \tag{14}\] As a result, we obtain the conditional differential entropy, \[S(\phi\mid\mathbf{\sigma})\approx-\frac{1}{2}\ln N+\frac{1}{2}\ln 2\pi e-\frac{1} {2}\sum\limits_{\mathbf{\sigma}}P(\mathbf{\sigma})\ln g^{*\prime}(\phi^{*}_{\mathbf{\sigma }}). \tag{15}\] We see that the logarithmic divergence is the leading contribution since the last two terms do not grow with \(N\). For extrinsic fluctuations, \(P(\phi)\), and thus \(S(\phi)\), is independent of \(N\); therefore, we obtain \[I(\phi;\mathbf{\sigma})=S(\phi)-S(\phi\mid\mathbf{\sigma})\approx\frac{1}{2}\ln N+O(N ^{0}). \tag{16}\] On the other hand, if \(P(\phi)\sim e^{-N^{\gamma}\times c(\phi-\phi_{0})^{2}}\) for some \(\gamma\in[0,1)\) and a constant \(c>0\), its entropy diverges logarithmically \(S(\phi)\!\approx\!-\tfrac{\gamma}{2}\ln N+O(N^{0})\) and the mutual information reads \[I(\phi;\mathbf{\sigma})\approx\frac{1-\gamma}{2}\ln N+O(N^{0}). \tag{17}\] We see that the decrease in information results from the fluctuation entropy that decreases logarithmically with \(N\). The same logarithmic divergence also emerges in the mutual information between two macroscopic halves of the system. To see this, we note that the entropy of the spins is given by \[S(\mathbf{\sigma})=S(\phi)-S(\phi\mid\mathbf{\sigma})+S(\mathbf{\sigma}\mid\phi) \tag{18}\] Similarly, the entropy of each half of the system reads \[S(\mathbf{\sigma}^{\nu})=S(\phi)-S(\phi\mid\mathbf{\sigma}^{\nu})+S(\mathbf{\sigma}^{\nu} \mid\phi) \tag{19}\] where \(\nu\!\in\!\{A,B\}\) and \(\mathbf{\sigma}\!=\!(\mathbf{\sigma}^{A},\mathbf{\sigma}^{B})\). When the subsystems are large and the spins in each half are randomly chosen, we have [see Eq. 
(15)] \[S(\phi\mid\mathbf{\sigma}^{\nu})\approx S(\phi\mid\mathbf{\sigma})-\frac{1}{2}\ln\frac{ N_{\nu}}{N}. \tag{20}\] We now write the mutual information between the two halves in terms of the above entropy, \[I(\mathbf{\sigma}^{A};\mathbf{\sigma}^{B}) =S(\mathbf{\sigma}^{A})+S(\mathbf{\sigma}^{B})-S(\mathbf{\sigma}) \tag{21}\] \[\approx I(\phi;\mathbf{\sigma})+\frac{1}{2}\ln\frac{N_{A}N_{B}}{N^{2}}. \tag{22}\] where we use the fact that \(I(\phi;\mathbf{\sigma})=S(\phi)-S(\phi\!\mid\mathbf{\sigma})\) and the property of conditional independence, \(S(\mathbf{\sigma}\mid\phi)=S(\mathbf{\sigma}^{A}\!\mid\phi)+S(\mathbf{\sigma}^{B}\!\mid\phi)\). For \(N_{A}=N_{B}=N/2\), we see that \(I(\mathbf{\sigma}^{A};\mathbf{\sigma}^{B})\) is smaller than \(I(\mathbf{\phi};\mathbf{\sigma})\) by one bit. ## Appendix B Rank-one Ising models Here we provide an analysis of rank-one Ising models--those with pairwise interaction matrices of rank one--defined by the energy function, \[E(\mathbf{\sigma})=-\frac{1}{4N}\sum\nolimits_{ij}w_{i}w_{j}\sigma_{i}\sigma_{j}=- \frac{1}{4N}\left(\sum\nolimits_{i}w_{i}\sigma_{i}\right)^{2}, \tag{23}\] where \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\ldots,\sigma_{N})\) denotes the state of the system, \(\sigma_{i}\!\in\!\{\pm 1\}\) the spin at site \(i\!\in\!\{1,2,\ldots,N\}\), and the product \(w_{i}w_{j}\) describes the interaction between spins \(i\) and \(j\). We note that the terms with \(i\!=\!j\) only add an irrelevant constant. This model generalizes the fully-connected Ising model which corresponds to setting \(w_{i}\!=\!w_{j}\) for all \(i\) and \(j\). As usual, the probability distribution of the system configuration is given by \[P(\mathbf{\sigma})=e^{-BE(\mathbf{\sigma})}/Z, \tag{24}\] where we introduce the inverse temperature \(\beta\!=\!1/T\) and the partition function \(Z\!=\!\sum_{\mathbf{\sigma}}e^{-BE(\mathbf{\sigma})}\). Computing the partition function by directly summing over all possible spin states is generally analytically intractable. Instead, we trade this summation for an integral using the Hubbard-Stratonovich transformation, \[Z=\sum_{\mathbf{\sigma}}e^{\frac{B}{T}(\sum\nolimits_{i}w_{i}\sigma_{i})^{2}}= \sqrt{\frac{N}{\pi\beta}}\int\!\!d\phi\sum\nolimits_{\mathbf{\sigma}}e^{-\frac{N}{T }\phi^{2}+\phi\sum\nolimits_{i}w_{i}\sigma_{i}}. \tag{25}\] We see that the spins become noninteracting at the cost of introducing a new fluctuating field \(\phi\) which correlates with the spins via the joint distribution, \[P(\mathbf{\sigma},\phi)=\frac{1}{Z}\sqrt{\frac{N}{\pi\beta}}e^{-\frac{N}{T}\phi^{2}+ \phi\sum\nolimits_{i}w_{i}\sigma_{i}}. \tag{26}\] Summing out each spin variable from Eq. (25) yields \[Z=2^{N}\sqrt{\frac{N}{\pi\beta}}\int\!\!d\phi\,e^{-\frac{N}{T}\phi^{2}+\sum \nolimits_{i}\ln\cosh w_{i}\phi}. \tag{27}\] As a result, we can express various thermodynamic variables of the spins as integrals over a continuous field which are usually more convenient than summations over discrete spin states. In particular, the internal energy, entropy and heat capacity--\(U\), \(S\) and \(C\), respectively--read \[U =-\frac{\partial}{\partial\beta}\ln Z=\frac{1}{2\beta}\left(1-\frac {2N}{\beta}\langle\phi^{2}\rangle\right) \tag{10}\] \[S =\beta U+\ln Z\] (11) \[C =\beta^{2}\frac{\partial^{2}}{\partial\beta^{2}}\ln Z=2\beta U- \frac{1}{2}+\frac{N^{2}}{\beta^{2}}\operatorname{var}(\phi^{2}), \tag{12}\] where \(\langle\phi^{2}\rangle\) and \(\operatorname{var}(\phi^{2})\) denote the mean and variance of \(\phi^{2}\). 
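These expressions are straightforward to evaluate numerically from the \(\phi\)-integral. The sketch below (assumed couplings \(N=60\), \(w_{i}\sim\mathcal{N}(1,0.3)\), matching the settings stated for Figs. 5 and 6; not the authors' code) locates the specific-heat maximum discussed in the main text:

```python
# Minimal sketch (assumed couplings, N = 60, w_i ~ N(1, 0.3)): evaluate
# P_N(phi) on a grid and use the expressions for U and C given above to
# locate the specific-heat maximum.
import numpy as np

rng = np.random.default_rng(2)
N = 60
w = rng.normal(1.0, 0.3, size=N)
beta_c = 2.0 * N / np.sum(w**2)                       # thermodynamic critical point

phi = np.linspace(-4, 4, 4001)
dphi = phi[1] - phi[0]
ln_cosh = np.log(np.cosh(np.outer(w, phi))).sum(axis=0)   # sum_i ln cosh(w_i phi)

def heat_capacity(beta):
    log_weight = -(N / beta) * phi**2 + ln_cosh       # exponent of P_N(phi)
    p = np.exp(log_weight - log_weight.max())
    p /= p.sum() * dphi
    m2 = (phi**2 * p).sum() * dphi                    # <phi^2>
    var_phi2 = (phi**4 * p).sum() * dphi - m2**2      # var(phi^2)
    U = (1.0 - 2.0 * N * m2 / beta) / (2.0 * beta)
    return 2.0 * beta * U - 0.5 + (N / beta) ** 2 * var_phi2

betas = np.linspace(0.5 * beta_c, 2.0 * beta_c, 61)
C = np.array([heat_capacity(b) for b in betas])
print(f"beta_c = {beta_c:.3f}; heat capacity peaks near beta = {betas[np.argmax(C)]:.3f}")
```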
In addition, we see that a rank-one Ising model is equivalent to a conditionally independent model with a fluctuation field, induced by the intrinsic interactions between spins. The marginal distribution of this field is given by \[P_{N}(\phi)=\sum_{\sigma}P(\sigma,\phi)=\frac{2^{N}}{Z}\sqrt{\frac{N}{\pi \beta}}e^{-\frac{N}{\beta}\phi^{2}+\sum_{i}\ln\cosh w_{i}\phi}, \tag{13}\] and thus we have \[P(\sigma\mid\phi)=\frac{P(\sigma,\phi)}{P_{N}(\phi)}=\prod_{i}\frac{e^{\sigma _{i}w_{i}\phi}}{2\cosh w_{i}\phi}. \tag{14}\] We see again that conditioning on the fluctuating field removes the interactions between spins. Recalling that \(\ln\cosh x\approx\frac{x^{2}}{2}-\frac{x^{4}}{12}\) for small \(x\), we see that the fluctuation distribution \(P_{N}(\phi)\) exhibits a structural transition at \[\beta_{c}=\frac{2}{\frac{1}{N}\sum_{i}w_{i}^{2}} \tag{15}\] where it changes from unimodal at \(\beta<\beta_{c}\) to bimodal at \(\beta>\beta_{c}\). In the limit \(N\to\infty\), this point corresponds to the critical temperature of an order-disorder phase transition. In the disordered phase at high temperatures \(\beta<\beta_{c}\), the spins are mostly random. In the ordered phase at low temperatures \(\beta>\beta_{c}\), on the other hand, they mimic the pattern set by the signs of \(\{w_{i}\}\).
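The location of this structural transition can be verified directly: the exponent of \(P_{N}(\phi)\) is \(f(\phi)=-(N/\beta)\phi^{2}+\sum_{i}\ln\cosh w_{i}\phi\), whose curvature at the origin, \(f''(0)=-2N/\beta+\sum_{i}w_{i}^{2}\), changes sign exactly at \(\beta_{c}\). A minimal check (with assumed couplings) is:

```python
# Minimal check (assumed couplings): the exponent of P_N(phi) has curvature
# f''(0) = -2N/beta + sum_i w_i^2 at phi = 0, which vanishes at beta_c and
# flips sign there, i.e., P_N(phi) turns from unimodal to bimodal.
import numpy as np

rng = np.random.default_rng(3)
N = 60
w = rng.normal(1.0, 0.3, size=N)
beta_c = 2.0 * N / np.sum(w**2)

def curvature_at_zero(beta):
    return -2.0 * N / beta + np.sum(w**2)             # f''(0)

print(f"beta_c = {beta_c:.3f}  (f''(0) = 0 exactly at beta_c)")
for x in (0.8, 1.2):
    c = curvature_at_zero(x * beta_c)
    print(f"beta = {x} * beta_c : f''(0) = {c:+.3f} ->",
          "bimodal" if c > 0 else "unimodal")
```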
2309.09618
A Discussion on Generalization in Next-Activity Prediction
Next activity prediction aims to forecast the future behavior of running process instances. Recent publications in this field predominantly employ deep learning techniques and evaluate their prediction performance using publicly available event logs. This paper presents empirical evidence that calls into question the effectiveness of these current evaluation approaches. We show that there is an enormous amount of example leakage in all of the commonly used event logs, so that rather trivial prediction approaches perform almost as well as ones that leverage deep learning. We further argue that designing robust evaluations requires a more profound conceptual engagement with the topic of next-activity prediction, and specifically with the notion of generalization to new data. To this end, we present various prediction scenarios that necessitate different types of generalization to guide future research.
Luka Abb, Peter Pfeiffer, Peter Fettke, Jana-Rebecca Rehse
2023-09-18T09:42:36Z
http://arxiv.org/abs/2309.09618v1
# A Discussion on Generalization in Next-Activity Prediction ###### Abstract The goal of next-activity prediction is to forecast the future behavior of running process instances. Recent publications in this field predominantly employ deep learning techniques and evaluate their prediction performance using publicly available event logs. This paper presents empirical evidence that calls into question the effectiveness of these current evaluation approaches. We show that there is an enormous amount of example leakage in all of the commonly used event logs and demonstrate that the next-activity prediction task in these logs is a rather trivial one that can be solved by a naive baseline. We further argue that designing robust evaluations requires a more profound conceptual engagement with the topic of next-activity prediction, and specifically with the notion of generalization to new data. To this end, we present various prediction scenarios that necessitate different types of generalization to guide future research in this field. Keywords:Predictive Process Monitoring Process Prediction Generalization Leakage ## 1 Introduction Predictive process monitoring (PPM), or process prediction, is a branch of process mining that is concerned with the forecasting of how a running process instance will unfold in the future [3]. For example, PPM approaches may predict what the outcome of the process instance will be, how long it will take to complete, or which activities will be executed next. In contrast to techniques like process discovery or conformance checking, process prediction is forward-facing, and aims to identify process execution problems like delays or compliance violations _before_ they occur, thus enabling an organization to preemptively take preventive counteractions [3]. Whereas older approaches to process prediction relied on explicit models of process behavior, such as transition systems or probabilistic automata [1], recent research has almost exclusively tackled the problem with neural networks [4]. The majority of this research has also focused on control-flow predictions, specifically the prediction of the _next activity_ in a trace [6]. At a high level, all existing contributions approach next activity prediction as a self-supervised machine learning problem [7, 9, 11]: An existing event log is randomly split into a training and a test set. A machine learning model, typically a deep neural network, is shown incomplete traces from the training set, such that it learns to predict the next activity in that trace. The performance of the trained model is then evaluated by predicting the next activity for incomplete traces of the unseen test set and computing performance measures. Almost all existing publications train and evaluate their models on a relatively small collection of event logs for their evaluation. This includes the Helpdesk event log [14] and the logs from the Business Process Intelligence Challenges (BPIC) 2012, 2013, and/or 2017. In this paper, we argue that this current way of training and evaluating next activity prediction models is biased in the sense that it does not evaluate how well these models would generalize to unseen data. We argue that, in order to design reliable evaluation procedures, it is necessary to first engage with the topic of next-activity prediction on a more conceptual level. 
Our line of argument is based on several observations about the aforementioned event logs: First, the next-activity label is almost entirely determined by the control-flow of the prefix. Second, when only considering the control-flow perspective, there is an enormous amount of example leakage in all logs, so that most predictions are made on prefixes that were already seen during training. Third, as other research has already shown [10], incomplete traces can often continue in different ways, so that the maximal achievable accuracy in this evaluation setting is unknown and probably much lower than 100%. After introducing basic concepts in section 2, we provide empirical evidence for each of these observations and demonstrate that the next-activity prediction task in these event logs is a rather trivial one that can be solved by a naive baseline (section 3). section 4 presents various scenarios for generalization in process prediction which are grouped into three types of generalization. Finally, we discuss related work in section 5 and conclude the paper in section 6 ## 2 Background **Event Log Data.** PPM works on _event log data_, gathered from the execution of business processes in information systems. An event log is a collection of cases. A case is represented by a _trace_\(t\), i.e., a sequence of events \(\langle e_{1},\ldots,e_{n}\rangle\) of length \(n\). Each event \(e\) has two mandatory attributes: the _activity_ and the _case ID_. In addition, events can have additional attributes, such as a timestamp or an executing resource, which describe the context in which the event has occurred. Similar to events, traces can also have additional attributes, such as an allocated project. A case represents a completed process execution. For PPM, we are interested in predicting the future behavior of running cases, which are represented by trace prefixes. A _trace prefix_ of a trace \(t\) of length \(p\) is defined as a subsequence \(\langle e_{1},\ldots,e_{p}\rangle\), with \(1\leq p<n\). **Next Activity Prediction.** The goal of next activity prediction is to predict which activity is performed next in a running case. Formally, this problem is framed as multi-class classification, where each class represents one activity. For each trace \(t\) in a given event log, pairs \((x,y)\) of features \(x\) and labels \(y\) are created. \(x\) is a prefix of \(t\) with length \(p\), which represents the running case. \(y\), which is often called the label of \(x\), represents the activity at position \(p+1\) of \(t\), i.e., the next activity, which should be predicted. These pairs \((x,y)\) are provided to a machine learning model, typically a deep neural network, such that it learns a predictor function \(f\) that maps the prefix to the correct next activity, i.e., the class to which the prefix belongs. To learn and evaluate \(f\), the event log is split into two parts, the training set and the test set. The model is trained on the prefix-label pairs from the training set and evaluated on those from the test set. Therefore, for each prefix \(x\), its prediction \(\hat{y}:=f(x)\) is compared with the ground truth label \(y\) and performance measures like accuracy and F1 score can be computed. ## 3 Validity Issues in Existing Research In this section, we examine various phenomena that pose threats to the validity of next-activity prediction research. 
To substantiate our discussion, we present empirical evidence that was generated in a setting that is representative of the typical evaluation setup used in the field. We employ five commonly used event logs (Helpdesk, BPIC12, BPIC13 Incidents, BPIC17 Offer, and MobIS [12]) and generate six splits for each log: five in which we randomly allocate traces so that 80% of them are part of the training set and 20% are part of the test set, and one in which the split is time-based so that the 20% of traces with the most recent start timestamps end up in the test set. We then generate \(n-1\) prefix-label pairs \((x,y)\) from each trace with lengths \(p\in[1,n-1]\) and calculate prediction accuracy as the percentage of prefixes in the test set for which the correct next-activity label was predicted, i.e., \(\hat{y}=y\). We do not apply log preprocessing or make any other changes to the data. The code and data needed to reproduce our results are available at [https://gitlab.uni-mannheim.de/jpmac/ppm-generalization](https://gitlab.uni-mannheim.de/jpmac/ppm-generalization). ### Example Leakage Leakage in machine learning refers to information being made available to a model during training that it would not have access to when classifying unseen data [5]. This can lead to an unrealistic assessment of the model's performance with respect to the classification task at hand. One particular type of leakage is _example leakage_, which occurs when the same example (more specifically, the same feature vector) is present in both the training and the test set. In this case, the classification is a trivial one, as the model is not required to learn generally-valid relationships between features and labels. Example leakage can be a considerable problem when doing prediction on event logs, due to the repetitive nature of the process executions recorded in them [15]. In order to quantify example leakage in next-activity prediction, we first need to establish when two prefixes can be considered identical. We can limit the set of features that need to be considered when establishing equality to those that are actually relevant for predicting the next activity. Previous research has already examined the extent to which context attributes, such as resource or time, enhance prediction performance compared to solely considering the previous control-flow recorded in a prefix [2]. They have found that, in most cases, including context attributes does improve predictions compared to only considering control-flow features, but that these improvements are rather insignificant (low single-digit percentage increases in accuracy). Based on these findings, we can conclude that in most cases, the next-activity label can be correctly predicted when only the control-flow of the prefix is known. In the following, we therefore consider two prefixes to be identical if they exhibit the same control-flow, i.e., if they have the same activities in the exact same order. With this equality criterion, we can now quantify example leakage by calculating the percentage of prefixes in the test set that is also included in the training set. The amount of example leakage in the event logs commonly used for the evaluation of next-activity prediction techniques is shown in (Figure 1). We observe that, across all datasets and splits, example leakage is above 80%, and even close to 100% in the Helpdesk and MobIS event logs. 
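For illustration, the leakage measurement can be sketched in a few lines (on a toy log and split; the actual implementation is available in the repository linked above):

```python
# Minimal sketch (toy log, not the authors' implementation): trace-level
# 80/20 split, prefix generation, and control-flow example leakage, i.e., the
# share of test prefixes whose activity sequence also occurs in training.
import random

def prefixes(trace):
    # all (prefix, next-activity) pairs with prefix lengths p in [1, n-1]
    return [(tuple(trace[:p]), trace[p]) for p in range(1, len(trace))]

def example_leakage(train_traces, test_traces):
    train_prefixes = {x for t in train_traces for x, _ in prefixes(t)}
    test_xs = [x for t in test_traces for x, _ in prefixes(t)]
    return sum(x in train_prefixes for x in test_xs) / len(test_xs)

# toy event log: each trace is the activity sequence of one case
log = [["A", "B", "C", "D"], ["A", "C", "B", "D"],
       ["A", "B", "C", "D", "E"], ["A", "B", "D"]] * 25
random.seed(0)
random.shuffle(log)
cut = int(0.8 * len(log))
print(f"example leakage: {example_leakage(log[:cut], log[cut:]):.1%}")
```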
This means that most of the predictions made on the test set are trivial ones, and consequently, that one cannot draw valid conclusions about how well a prediction model would perform on unseen data from this evaluation setting. Figure 1: Example leakage percentage for each event log, averaged over the splits. ### Baseline and Accuracy Limit We can further illustrate this issue by demonstrating that the prediction accuracy of state-of-the-art models lies in a relatively narrow corridor that is bounded by a naive baseline with little to no generalization capacity on the lower end, and by the maximal accuracy that can be achieved with only control-flow features on the upper end. We construct the baseline as follows: for each unique prefix in the training set \(x:=\langle e_{1},\ldots,e_{p}\rangle\), where \(e\) represents the activity only, it simply predicts the most common next activity. If an unknown prefix is encountered (i.e., an example that has not leaked), it instead predicts the most common next activity associated with only the last activity \(e_{p}\) in the prefix, similar to a bigram model, i.e., \(x:=e_{p}\). The upper bound is based on the observation that a common implicit assumption in supervised learning, that each unique combination of feature values maps to exactly one label, does not hold in the process mining domain. Event logs nearly always contain traces that have identical control-flow up to a point but diverge afterwards, for example due to exclusive continuation paths or concurrent activity execution. In the context of next-activity prediction, this means that a prefix exhibits _label ambiguity_[10]. If a prediction model that predicts a single next-activity label is tasked with classifying a label-ambiguous prefix, the best prediction in terms of the resulting overall accuracy it can make is the activity that is most frequently associated with that prefix. All other activities will never be predicted. From this, we can derive that there is an _accuracy limit_ that a prediction model can achieve on a given (test) dataset when it only makes predictions based on the control-flow of the prefix. This accuracy limit is simply calculated as the percentage of examples in the test set in which the label is the most common label for the corresponding prefix. Figure 2 shows the prediction accuracy achieved by the baseline prediction model described above and the MPPN [9], a state-of-the-art neural network predictor that includes contextual attributes for its prediction. The accuracy limit for each test split is also included. Of course, this comparison is limited since it only includes a single state-of-the-art model. However, given that benchmark experiments in previous research have consistently shown that many next-activity prediction models achieve almost the same accuracy when evaluated on the same data (e.g., [7, 9, 11]), our observations are likely to apply to other models as well. In the Helpdesk and MobIS logs, the training and test set almost completely overlap. Predicting the next activity in these event logs is therefore trivial, and consequently, both models achieve the same prediction accuracy. In fact, the only reason that they do not reach 100% accuracy is label ambiguity, which is why the observed accuracy for these models is almost identical to the accuracy limit. 
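A self-contained sketch of this baseline and of the accuracy limit (under one plausible reading of the definition above, on a toy log; the actual implementation is in the linked repository) looks as follows:

```python
# Minimal sketch: lookup-table baseline with a bigram fallback, and the
# accuracy limit computed as the share of test examples whose label is the
# most common label for their prefix (toy data; one reading of the text).
from collections import Counter, defaultdict

def prefixes(trace):
    return [(tuple(trace[:p]), trace[p]) for p in range(1, len(trace))]

def train_baseline(train_traces):
    by_prefix, by_last = defaultdict(Counter), defaultdict(Counter)
    for t in train_traces:
        for x, y in prefixes(t):
            by_prefix[x][y] += 1
            by_last[x[-1]][y] += 1
    return by_prefix, by_last

def predict(by_prefix, by_last, x):
    if x in by_prefix:                       # leaked example: plain lookup
        return by_prefix[x].most_common(1)[0][0]
    if x[-1] in by_last:                     # fallback on the last activity
        return by_last[x[-1]].most_common(1)[0][0]
    return None                              # last activity never seen

def accuracy_limit(test_pairs):
    modes = defaultdict(Counter)
    for x, y in test_pairs:
        modes[x][y] += 1
    return sum(y == modes[x].most_common(1)[0][0] for x, y in test_pairs) / len(test_pairs)

log = [["A", "B", "C", "D"], ["A", "C", "B", "D"], ["A", "B", "D"]] * 30
train, test = log[: int(0.8 * len(log))], log[int(0.8 * len(log)):]
by_prefix, by_last = train_baseline(train)
test_pairs = [p for t in test for p in prefixes(t)]
acc = sum(predict(by_prefix, by_last, x) == y for x, y in test_pairs) / len(test_pairs)
print(f"baseline accuracy: {acc:.1%}   accuracy limit: {accuracy_limit(test_pairs):.1%}")
```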
In other event logs, which exhibit slightly less example leakage, the accuracy of the naive baseline is still very close to the one of the state-of-the-art model, although there is a notable gap of a few percentage points. It is, however, unclear to which extent this performance gap can be attributed to the MPPN's ability to generalize to unseen examples. An alternative explanation would be that its consideration of context features allows it to resolve the label ambiguity in some traces, and thereby improve its predictions, whereas the baseline only considers control-flow features; this would be consistent with the findings of [2], i.e., that incorporating context slightly improves prediction accuracy. Given that the evaluation setting that we used in this section has been so widely employed in existing publications on next-activity prediction, our findings suggest that a significant portion of the perceived advancements in the field may be - in a sense - illusory. As a research community, we now have a large number of proposed next-activity prediction techniques that employ several different neural network architectures, inductive biases, and strategies to incorporate different types of features. However, we have very little idea to what extent these techniques would be able to generalize well enough to make good predictions on unseen data - and consequently, if they would be able to provide value in a real-world application. Although it would also be possible to address the issues that we have pointed out in this section on a technical level, we argue that they are symptomatic of a broader problem in process prediction research, namely a lack of engagement with the topic on a conceptual level. In particular, we believe that there is an insufficient understanding of what _generalization_ means in a process prediction context. ## 4 Generalization in Process Prediction In machine learning, generalization refers to the ability of a trained model to make correct predictions on samples that it has not seen during training. This in an important capability because a model should not only be able to handle the samples that it is already familiar with, but also other samples that it will be faced with when applied in its respective application context. Figure 2: Prediction accuracy of the naive baseline and the MPPN neural network, along with the accuracy limit in the test set. Each split plotted separately. As pointed out in the previous section, splitting an event log into training and test sets with the goal of having samples in the test set that were not present in the training set does not work as expected. This means that, although generalization is a characteristic of interest for machine learning in general and process prediction in particular, the generalization capabilities of PPM algorithms have so far not been explicitly evaluated, in the sense of applying an algorithm on a test log that has little to no overlap with the training data3. Such an evaluation is undoubtedly necessary, but it requires a discussion on what generalization means in a process context and how it should be measured. Footnote 3: A notable exception to this is [8], which focuses on process model structures To contribute to this discussion, this section presents several exemplary prediction scenarios, classified into different generalization types, and discusses which predictions a PPM algorithm should reasonable make in each. These scenarios are not meant to be complete. 
Rather, they are intended to serve as a starting point for understanding generalization in process prediction. ### Prediction Scenarios In all scenarios, we suppose to train a prediction model on the mentioned log, i.e., we create all prefixes for all traces \(t\) in the log \(L\) and train the model on the resulting samples \((x,y)\). For each scenarios we show prefixes that are not seen so far, i.e., that are not included in the log. Given the unseen prefix as input to the model, we explain which predictions are plausible to be made. Thus, we only assume what could be the correct ground truth label. If the model is able to make this prediction on the unseen prefix, we say that it can generalize in this scenario. In all scenarios, we focus on the problem of predicting the next activity only. Predicting attributes like resource, time or properties like the process outcome are related problems, but the correct predictions differ, so they require a separate discussion. Furthermore, we assume that we do not have access to additional information like a process model; only the observations in the event log are given. #### 4.1.1 Unseen Control-flow Log \(L1\) in Table 1 shows the scenarios where activities \(C1\), \(C2\) and \(C3\) can occur in any order. \(L2\) in Table 2 shows a similar, yet more complex scenario with \(C\), \(D\), \(E\), \(F\), \(G\), \(H\) in any order. This can be caused, e.g., by concurrent activities and is a common phenomenon in real-world event logs. Another common scenario is the appearance of activities that can be executed multiple times after another as shown in \(L3\). For event logs with such patterns, four interesting scenarios can occur: 1. \(L1\) and prefix \(\langle\)A, B, C1, C3, C2, D\(\rangle\). Expected prediction: \(E\). Although the model has not seen this prefix due to a new order of \(C1\), \(C2\) and \(C3\), it should have learned that the case always continues with \(E\) after \(D\), regardless of the order of the previous activities. 2. \(L1\) and prefix: \(\langle\)A, B, C1, C3, C2\(\rangle\). Expected prediction: \(D\). Again, the prediction model should have learned that regardless of the order of \(C1\), \(C2\) and \(C3\), \(D\) always follows. 3. \(L2\) and prefix: \(\langle\)A, B, C, D, F, G\(\rangle\). As seen in \(L2\), both \(E\) and \(H\) have happened after \(G\). However, in each trace, either \(E\) or \(D\) directly follows \(G\). This is the situation of label ambiguity described in [10]. Both options, \(E\) and \(D\) are valid continuations and thus valid predictions. 4. \(L3\) and prefix: \(\langle\)A, B, B, B, C\(\rangle\). Expected prediction: \(D\). The model should have learned that the case always continues with \(D\) after \(C\), no matter how often \(B\) has happened. #### 3.2.2 Unseen Attribute Value Combinations In certain scenarios, the context attributes like involved resources, timestamp or cost carry important information to determine the continuation of the process instance [2, 12]. Considering the contextual information is an important capability when dealing with event logs which distinguishes next step prediction from other sequential prediction tasks. As an example, we show three scenarios where we expect the prediction model to generalize in presence of context attributes. Note that in these scenarios, the models have seen the context attribute values before, i.e., they are not completely new. Just the combination of activity and context has not been seen so far. 
The first example, \(L4\) in Table 4, shows a situation in which different resources are involved in the activities. Log \(L5\) in Table 5 gives an example where the next activity to execute depends on the amount of Euro. Lastly, log \(L6\) in Table 6 shows an example where timestamps are involved. 1. \(L4\) and prefix \(\langle\)(**A**, R1), (**B**, R1)\(\rangle\). Expected prediction: \(C\). In \(L4\), different resources are involved in activity \(B\). However, \(C\) follows \(B\) every time. Thus, the prediction model should know that regardless of the resource \(R\) in activity \(B\), \(C\) always follows. 2. \(L5\) and prefix \(\langle\)(**A**, 2E), (**B**, 499E)\(\rangle\). Expected prediction: \(C\). The value of Euro has changed to 499E. However, the model should have learned that with 499E, \(C\) still follows. 3. \(L6\) and prefix: \(\langle\)(**A**, July 2022), (**B**, May 2023)\(\rangle\). Expected prediction: \(D\). In 2023, a drift happened causing activity \(D\) to follow \(B\) instead of \(C\), which the prediction model should be able to capture. #### 4.1.3 Unseen Attribute Values Sometimes, the training log might not be complete with respect to the activities or other attributes contained. For instance, a new activity (e.g. due to new requirements in the process) or a new resource (e.g. a new person joining the process/company) might occur. To demonstrate these scenarios, we use the logs \(L4\), \(L5\) and \(L6\) from the previous section but discuss other prefixes. 1. \(L4\) and prefix \(\langle\)(A, R1), (F, R100)\(\rangle\). As \(F\) is an activity the prediction model has never seen before, there is no evidence from the event log on how to continue. One option is to indicate that the model does not know, e.g., by predicting a special _UNKNOWN_ token. Another option would be to predict any label from the event log that could potentially follow, e.g., \(C\), as this has happened in the third position in all traces in the log. 2. \(L4\) and prefix \(\langle\)(A, R1), (B, R37)\(\rangle\). This scenario is similar to the previous one but with resource \(R37\) never seen before. Again, the model could indicate that it does not know or predict any label on a positional basis, e.g., \(C\). 3. \(L5\) and prefix \(\langle\)(**A**, 200E), (**B**, 200E)\(\rangle\). The value 200E is between the seen values 2E and 499E. Thus, we argue that the prediction model should predict \(C\). 4. \(L6\) and prefix \(\langle\)(**A**, June 2024), (**B**, June 2024)\(\rangle\). The model should know that the process has changed in 2023. If tasked with 2024, the most probable next activity is \(D\). ### Implications Generalization over unseen control-flow constructs involves dealing with unseen control-flow variants in the prefix, as shown in the scenarios in event logs \(L1\), \(L2\) and \(L3\) in Table 1, Table 2 and Table 3. We assume that all activities in prefix and label are known but the specific prefix has not been seen so far. The event log \(L2\) in Table 2 is a special scenario as it is linked to label ambiguity [10]. Both options \(E\) and \(H\) are valid predictions. However, a deterministic model will always make the same prediction when tasked with the same prefix. As \(H\) has the higher frequency, the prediction model will most likely always predict \(H\), although it should have - and probably has - learned that \(E\) can also follow. 
When evaluating process prediction methods with point-measures like top-1 accuracy, which consider only the single most probable prediction, one cannot assess generalization properly, as such measures do not take into account whether the model has learned that more than one option can follow. To evaluate whether the model has learned that more than one option can follow, probabilistic measures can be used that assess how much probability is assigned to each option. For generalization over unseen context combinations, the prediction model must be able to interpret the context attributes and to distinguish between those scenarios where the context attributes have influence on the next activity to be predicted and those scenarios where they do not. This involves scenarios as shown in logs \(L4\), \(L5\) and \(L6\) in Table 4, Table 5 and Table 6. There can be much more complex scenarios with other context attributes where the next activity depends not on a single attribute value but on the combination of multiple attribute values. Generalization over new and unknown attribute values concerns scenarios where a new attribute value, like a completely unknown activity or resource, occurs. In such scenarios, defining plausible predictions is often not trivial and might depend on the use case. Furthermore, dealing with never-seen attribute values in the input is challenging, as the model has to have learned whether there is an influence on the process or not - and in case there is, which influence it has. For numerical and temporal attributes, unseen attribute values are more diverse. For instance, the number of unique values for cost in Table 5 can be very large and the chance that all values have been seen is rather low. Similarly, temporal attributes can be continuous and the prediction model might in practice be tasked with prefixes from the year 2024 or 2025. The most reasonable approach is to make a decently confident prediction for the most likely next activity and to indicate whether the model knows the correct answer or whether it does not know. For instance, the model might predict a certain activity which usually occurred in this position in the trace but at the same time indicate that it predicted this activity only on a positional basis, as it has never seen this attribute value in the trace. In practice, these scenarios might not occur in isolation. For instance, an unseen sequence of activities in the prefix can also come with an unseen combination of context attributes or new attribute values, which makes generalization in process prediction a challenging task. ## 5 Related Work So far, the conceptual flaws of process prediction besides label ambiguity [10] have received little discussion. The majority of papers have introduced new approaches for process prediction, starting from the first deep learning based model [4], to more complex architectures [7, 9]. Limited work has been conducted to ensure realistic evaluation settings or test generalization capabilities. Weytjens et al. [15] introduce a pre-processing algorithm to prevent leakage in process prediction, focusing on the remaining time prediction problem. Their approach splits the traces on a temporal basis such that there is no temporal overlap between the prefixes used for training and test. However, this does not prevent example leakage on prefix-level. In [13], the authors compare discovery-based algorithms to sequence-learning algorithms in terms of their accuracy and generalization capabilities. The event logs are split into training and test sets. 
However, as the paper does not mention any technique to prevent example leakage, it is very likely that the splits used in the experiments face a similarly high portion of leaked prefixes, which limits the validity of the measured generalization capabilities. Peeperkorn et al. [8] propose an evaluation strategy to leave certain variants out of the training set and only have them in the test set. They used this splitting strategy to evaluate whether prediction models can learn the process model structure of the unknown system behind the log, focusing mainly on concurrent activities in process models. Thus, they did not systematically cover all generalization scenarios introduced in this paper. They found that the generalization capabilities of LSTM prediction models are inversely correlated with the number of variants left out. However, as they measured performance with accuracy, it is unclear how label ambiguity affected the experiments. In comparison to their work, we propose several generalization scenarios. ## 6 Conclusion In this paper, we have critically analyzed the current procedure of evaluating PPM algorithms in research and found that little to no generalization capabilities can be tested that way. The proposed generalization scenarios can be used to measure how much difference between train and test set there is and which generalization capabilities are required for which log, i.e., which scenarios are present and which are not. Furthermore, synthetic event logs containing these patterns can be simulated and existing ones split accordingly to test for generalization. Guided by the plausible predictions, new prediction algorithms can be developed that specifically account for these scenarios. While the generalization scenarios are inspired by real-world situations, real event logs are required for setting the ground truth label of unseen prefixes. In the scenarios presented, we assumed a ground truth label and argued whether such a prediction would show generalization. In some scenarios, the expected label is clearer than in others. However, these are only plausible predictions. Real generalization can only be tested if the ground truth label is not assumed but determined by the data. Although we have focused on next-activity prediction and other prediction situations were out of scope for this work, there might be more scenarios in next-activity prediction that are not yet covered. Furthermore, the high percentage of example leakage between train and test set raises the question of whether generalization capabilities are actually required if the behaviour in both sets is that similar when considering the control-flow only. Following that, prediction models that take context information into account might actually be able to generalize with respect to the scenarios of unseen attribute value combinations, as they reach comparable or higher accuracy than control-flow-only models. Nevertheless, this has not yet been shown explicitly. In the future, we plan to create a benchmark set of event logs that cover the presented generalization scenarios. Furthermore, the scenarios can be adapted to other prediction tasks like outcome prediction.
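To make the notion of example leakage used throughout this paper concrete, the following is a minimal, control-flow-only sketch of how the overlap between training and test prefixes could be quantified, together with the naive most-frequent-continuation baseline. It is an illustration rather than the exact pre-processing used in our experiments, and all function and variable names are ours.

```python
from collections import Counter

def prefixes(trace):
    """All non-empty proper prefixes of a trace, paired with the next activity."""
    return [(tuple(trace[:i]), trace[i]) for i in range(1, len(trace))]

def example_leakage(train_log, test_log):
    """Fraction of test samples (prefix, next activity) already present in training."""
    train_samples = {s for trace in train_log for s in prefixes(trace)}
    test_samples = [s for trace in test_log for s in prefixes(trace)]
    leaked = sum(1 for s in test_samples if s in train_samples)
    return leaked / len(test_samples) if test_samples else 0.0

def naive_baseline(train_log):
    """Predict the most frequent next activity observed after each training prefix."""
    counts = {}
    for trace in train_log:
        for prefix, nxt in prefixes(trace):
            counts.setdefault(prefix, Counter())[nxt] += 1
    return {prefix: c.most_common(1)[0][0] for prefix, c in counts.items()}

# Toy usage with control-flow-only traces in the spirit of log L1:
train = [["A", "B", "C1", "C2", "C3", "D", "E"],
         ["A", "B", "C2", "C1", "C3", "D", "E"]]
test = [["A", "B", "C1", "C3", "C2", "D", "E"]]
print(example_leakage(train, test))  # share of test samples seen during training
```

A variant of this measurement could count unique prefixes instead of (prefix, next-activity) samples, or additionally take context attributes into account.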
2308.16411
Keith Brueckner (1924-2014). A biographical memoir
Keith Brueckner was a theoretical physicist of considerable technical power who came of age as the mysteries of the atomic nucleus were coming into focus. His fundamental contributions to the "many-body problem" had a lasting impact on our understanding of how the macroscopic behavior of matter emerges from the underlying microscopic rules. A passionate and accomplished mountain climber, he listed the American Alpine Club below the National Academy of Sciences on his vitae. During decades of complex interactions between the physics community and the United States government, he helped build structures that allowed him and many others to provide advice on classified matters, but also actively raised funds to support opposition to the war in Vietnam. At the peak of his career, he left the Ivy League to help found and build a new university in a small village filled with Marines and retirees - La Jolla, California.
William Bialek
2023-08-31T02:49:42Z
http://arxiv.org/abs/2308.16411v1
# Keith Brueckner (1924-2014). A biographical memoir ###### Abstract Keith Brueckner was a theoretical physicist of considerable technical power who came of age as the mysteries of the atomic nucleus were coming into focus. His fundamental contributions to the "many-body problem" had a lasting impact on our understanding of how the macroscopic behavior of matter emerges from the underlying microscopic rules. A passionate and accomplished mountain climber, he listed the American Alpine Club below the National Academy of Sciences on his vitae. During decades of complex interactions between the physics community and the United States government, he helped build structures that allowed him and many others to provide advice on classified matters, but also actively raised funds to support opposition to the war in Vietnam. At the peak of his career, he left the Ivy League to help found and build a new university in a small village filled with Marines and retirees--La Jolla, California. ## I Introduction Keith Allen Brueckner was born 19 March 1924 in Minneapolis, Minnesota. His father Leo was Professor of Education at the University of Minnesota, an author of mathematics textbooks, and an adviser on educational policy. His mother Agnes (nee Holland) would take a very active role in Keith's university education during World War II. Some combination of nature and nurture produced an intensity and drive in all four of their children. Keith's twin brother John was a gifted linguist, wrote a French contextary for students, and taught high school; his older brother Richard became an insurance executive but also worked as an attorney on free speech cases; and his younger sister Patricia became a poet. Keith attended public schools in Minneapolis and went on to the University Minnesota in 1941. His first degree was based on a combination of course work at the university and extension courses during his military service. His assignment was as a weatherman in the Caribbean, where his mother sent a steady stream of the "great books." While perhaps not the most dramatic thing to be doing during World War II, Keith took pride in his service. Those who only knew the gruff and intimidating senior scientist might have been surprised to hear him break into song: We are the men, the weather men We may be wrong, oh now and then But when you see, those planes on high Just remember, we're the ones who let them fly After the war, Keith returned to the University of Minnesota for a year, collecting an MA, and then moved to the University of California at Berkeley for his PhD. The 184 inch cyclotron had started running at full energy shortly before his arrival, and Berkeley was the center of an exciting interplay between theory and experiment as prewar nuclear physics evolved into postwartic physics. Keith tried his hand at experiments, and then found his calling as a theorist, using very general arguments to understand the recent discovery that bombarding a nucleus with X-rays could produce the elementary particles called mesons. His PhD adviser was Robert Serber and his first theoretical paper was written with Marvin (Murph) Goldberger; Keith and Murph would remain friends for life. In 1950, PhD in hand, Keith went east to the Institute Figure 1: Keith Brueckner, circa 1980. From the American Institute of Physics Emilio Segre Visual Archives.
2309.16738
ELIP: Efficient Language-Image Pre-training with Fewer Vision Tokens
Learning a versatile language-image model is computationally prohibitive under a limited computing budget. This paper delves into the \emph{efficient language-image pre-training}, an area that has received relatively little attention despite its importance in reducing computational cost and footprint. To that end, we propose a vision token pruning and merging method ELIP, to remove less influential tokens based on the supervision of language outputs. Our method is designed with several strengths, such as being computation-efficient, memory-efficient, and trainable-parameter-free, and is distinguished from previous vision-only token pruning approaches by its alignment with task objectives. We implement this method in a progressively pruning manner using several sequential blocks. To evaluate its generalization performance, we apply ELIP to three commonly used language-image pre-training models and utilize public image-caption pairs with 4M images for pre-training. Our experiments demonstrate that with the removal of ~30$\%$ vision tokens across 12 ViT layers, ELIP maintains significantly comparable performance with baselines ($\sim$0.32 accuracy drop on average) over various downstream tasks including cross-modal retrieval, VQA, image captioning, \emph{etc}. In addition, the spared GPU resources by our ELIP allow us to scale up with larger batch sizes, thereby accelerating model pre-training and even sometimes enhancing downstream model performance.
Yangyang Guo, Haoyu Zhang, Yongkang Wong, Liqiang Nie, Mohan Kankanhalli
2023-09-28T05:31:07Z
http://arxiv.org/abs/2309.16738v2
# ELIP: Efficient Language-Image Pre-training with Fewer Vision Tokens ###### Abstract Learning a versatile language-image model is computationally prohibitive under a limited computing budget. This paper delves into the _efficient language-image pre-training, an area that has received relatively little attention despite its importance in reducing computational cost and footprint. To that end, we propose a vision token pruning and merging method ELIP, to remove less influential tokens based on the supervision of language outputs. Our method is designed with several strengths, such as being computation-efficient, memory-efficient, and trainable-parameter-free, and is distinguished from previous vision-only token pruning approaches by its alignment with task objectives. We implement this method in a progressively pruning manner using several sequential blocks. To evaluate its generalization performance, we apply ELIP to three commonly used language-image pre-training models and utilize public image-caption pairs with 4M images for pre-training. Our experiments demonstrate that with the removal of 30\(\%\) vision tokens across 12 ViT layers, ELIP maintains significantly comparable performance with baselines (\(\sim\)0.32 accuracy drop on average) over various downstream tasks including cross-modal retrieval, VQA, image captioning, etc. In addition, the spared GPU resources by our ELIP allow us to scale up with larger batch sizes, thereby accelerating model pre-training and even sometimes enhancing downstream model performance. Our code will be released at link. ## 1 Introduction Recent advancement in various benchmarks benefits primarily from large model pre-training. These pre-trained models stand out for their versatility and generalization ability, and are further encouraged by the scaling law [18, 23], which tells that expanding model size and training data leads to increasingly better performance. Nevertheless, the use of pre-trained large models often incurs a noticeable footprint and faces great challenges for deployment in resource-constrained environments. As a result, many efforts have been devoted to optimizing the efficiency-effectiveness trade-off of large models [21, 45, 51]. Conventional efficient learning approaches, _e.g_. knowledge distillation [17, 50], low-rank approximation [52] and quantization [9, 34], are commonly employed to compress a cumbersome model into a lightweight one. By this means, the computational overhead and memory cost are thereby reduced, despite the complexity involved in developing these compression algorithms. Since the emergence of Vision Transformers (ViTs) [11], recent research focus has been tailored to a more explainable and effective approach, _i.e_. _vision token pruning_. ViTs embed images using non-overlapped patches, which is distinct from the traditional approach of CNNs that explicitly incorporates spatial inductive bias [20]. This operation often leads to redundant vision tokens that can be safely removed without significantly compromising models' accuracy [36, 43, 54]. However, existing pruning methods in the vision-only domain **universally** rely on an objective-free approach, whereby the pruning mask is learned from signals of current or preceding layers [36, 54]. This approach may entail the risk of removing tokens that play a crucial role in achieving the task objective, especially for vision-language models. Figure 1: Visualization of attention map discrepancy between ViT and BLIP models and pipeline of our proposed method ELIP. 
(a) When presented with the same image, ViT and BLIP often see different regions, resulting in a large KL divergence of their attention maps. (b) ELIP achieves efficient language-image pre-training by pruning less important vision tokens. We notice that there is relatively little literature on efficient language-image pre-training [35]. In general, the natural correspondence between language and image mutually describes which token(s) is dispensable for training a generalizable multi-modal model. Besides, recent methods often employ separate pre-trained encoders for the two input modalities, wherein the encoding operation is asynchronous1. This allows us to leverage the output from the text encoder as supervision for removing vision tokens (refer to Fig. 1(b)), which differs significantly from that in the vision-only domain (In addition, the vision-only model and language-image model usually concentrate on different regions, as shown in Fig. 1(a)). The language tokens, on the other hand, are less redundant in their representation due to short context (_e.g_. 20 words per sentence) and high information density [15]. We therefore only ablate language token pruning for completeness [35]. Footnote 1: The complex parallel computing, though feasible, usually prohibits researchers from encoding language and image simultaneously. Our method does not require _any incremental trainable parameters_ beyond backbone language-image models. Building on the observation that the attention map on vision tokens becomes increasingly concentrated with deeper layers (see Fig. 2), we implement vision token pruning and merging in a progressive and multi-stage way. We integrate our ELIP as a plug-and-play module into _three_ popular language-image pre-training models and pre-train them from scratch on datasets with image-caption pairs using 4M images, where the datasets consist of MSCOCO Caption [37], Visual Genome [26], SBU [42], and Conceptual Captions [47]. Through our experimental results, we demonstrate that removing \(\sim\)30% vision tokens can well maintain the model performance (\(\sim\)0.32 accuracy drop on average) on downstream tasks including image-text retrieval, visual question answering, visual entailment, image captioning, and natural language for visual reasoning. In addition, the spared GPU memory by our method enables model scaling up with larger batch sizes and even sometimes slightly boosts downstream model fine-tuning. We also validate the effectiveness of combining our pre-training method with several parameter-efficient transfer learning approaches. _It is worth noting that we do not apply our proposed method to CLIP [44] and its successors due to two reasons:_ I) the lack of accessibility of pre-trained datasets and II) the inflexibility of adapting CLIP models to non-matching language-image downstream tasks such as VQA. To summarize, our ELIP represents an initial attempt to achieve efficient language-image pre-training with fewer vision tokens. We believe that our approach provides valuable insights for future language-image pre-training to develop more advanced models whilst reducing computational cost and footprint. ## 2 Related Work ### Pruning in Neural Networks Network pruning is leveraged to remove unnecessary or less important components in models [56]. By removing some connections or parameters, the original dense network reduces to a sparse one, in which the required capacity for storage will dwindle as well as the volume of computations [25, 58]. 
Based on the granularity of reduction, existing methods can be roughly grouped into unstructured pruning and structured pruning. The former refers to pruning less salient components, such as neurons or connections between layers [38, 57, 58]. In contrast, the latter aims to remove a large bundle of parameters [16, 25], on which we mainly discuss in this section. Previous structured pruning methods mostly target removing less influential Transformer heads [7], layers [59], and convolutional channels [16]. With the startling success of ViT [11], increasing research has been devoted to pruning input tokens of each layer due to the following two reasons. First, the input tokens from different layers have different redundancies and only a small number of them contribute significantly to the accuracy of models [5, 25, 43]. Second, pruning tokens leads to more visual explainability as compared to other elements such as heads or layers. Perhaps the most relevant work to ours is TRIPS [22]. It is worth noting that our method sets it apart from TRIPS by four merits: We propose to employ an enhanced pruning approach by leveraging multi-modal information, whereas TRIPS solely relies on text; Our method achieves improved efficiency; We conduct a more comprehensive evaluation to validate the generalizability of the proposed method (we consider three models while TRIPS only uses one); We further validate the effectiveness of combining our method with other parameter-efficient transfer learning techniques (refer to the supplementary material). ### Vision-Language Transformers The past few years have witnessed the popularity of Transformers in natural language processing and computer vision [10, 11, 55]. Given its sweeping trend and overwhelming performance in these related domains, researchers have actively extended this technique to vision-language tasks. In detail, a _pre-train then fine-tune_ paradigm is adopted by mainstream methods and the models are often pre-trained on certain large-scale vision-language datasets [10]. Unlike previous single modality model pre-training, the vision-language domain requires two heterogeneous inputs. The ubiquitous image-text pairs, _i.e_. textual caption regarding an image, serve as the key data format for pre-training due to their easy availability. Common datasets include Conceptual Captions [47], Visual Genome [27], COCO Captions [37], and LAION-400M [46]. At the bedrock of vision-language Transformers lies the embedding behavior of the two modalities. Pertaining to the vision embedding, the feature extraction has grown from grid [3], region features [2] of CNN models, to the recent patch features of Transformers [28]. In contrast, the text tokenization promptly changed from traditional Word2Vec to BERT-style pre-trained embeddings after the prevalence of modern language modeling [10, 55]. On top of the embedding process, there are generally two types of modal fusion approaches: dual-stream and single-stream. The former adopts a late fusion strategy, where the vision and text are separately encoded until a fusion operation to combine these two [1, 13, 19, 30, 40, 53]. The single-stream fusion method presents to encode the text and vision with a unified Transformer model, wherein the modal fusion is performed beforehand [8, 24, 33, 48, 49]. To enable the training on these large-scale captioning datasets, some pretext objectives are carefully designed, such as masked language modeling [10, 40], masked vision modeling [8, 53], and image-text matching [30, 40]. 
## 3 Method ### Preliminary #### 3.1.1 Overview of language-image pre-trained models Transformers have grown into a fundamental building block of many modern language-image models [12, 24, 29, 30]. According to the common training paradigm, existing models can be split into three modules: vision encoder, text encoder, and multi-modal fusion. In this way, the language and vision encoders can be aligned with separate pre-trained models to facilitate knowledge transfer. **Vision Encoder**. Recent language-image pre-training models often leverage the ViT model [11] as the vision encoder. ViT first partitions an RGB image \(I\in\mathbb{R}^{3\times H\times W}\) into \(M\times M\) non-overlapping patches. Together with a class token [CLS], these image patches are thereafter fed into \(N\) layers with self-attention as the basic operation. To this end, a set of query, key, and value matrices are transformed from the patch embedding to token features \(\mathbf{X}^{v}\in\mathbb{R}^{(1+M^{2})\times d}\), where \(d\) denotes the embedding size, followed by several feedforward layers and residual connections. **Text Encoder**. After tokenizing the input sentence according to the BERT approach [10], current methods often employ a special [CLS] token at the beginning and a [SEP] token at the end of the sequence. These tokens serve to delimit the sentence and enable the BERT encoder to extract the token encoding, which is represented by \(\mathbf{X}^{t}\in\mathbb{R}^{(2+T)\times d}\), where \(T\) denotes the sentence length. **Modal Fusion**. The modal fusion module leverages frameworks that are similar to those utilized by the Transformer decoder [55]. In particular, the common practice includes the cross-attention between the vision and text encoders [29, 30], as well as merged attention techniques [12]. **Pre-training Objectives**. Pre-training of language-image models on large-scale image-caption datasets [37, 47] is made possible by several pretext objectives. One such instance is masked language modeling, which aims to reconstruct masked text tokens given the remaining ones and the image feature. Additionally, the image-text matching (ITM) objective is employed to classify the correspondence between a given image and its accompanying text. Nevertheless, masked image modeling objectives have been largely abandoned by recent approaches due to their convergence difficulty and lack of usefulness for downstream fine-tuning performance [12, 24, 30]. Figure 2: Token similarity and attention maps across different ViT layers of BLIP [30], as well as the FLOPs proportion of different modules for three typical language-image pre-trained models. (a) The attention distribution over image tokens grows from uniform to concentrated with layers going deeper. Besides, the token similarity initially decreases but then significantly increases, indicating that more vision tokens become redundant. (b) Notably, the vision encoder (VE) accounts for the majority of the computational cost of language-image models (compared to the text encoder - TE and modal fusion - MF). #### 3.1.2 Research Motivation While achieving state-of-the-art results on downstream tasks, language-image pre-training models can suffer from computational inefficiency. To approach this problem, we first leverage Fig. 2 to illustrate two critical observations that motivate this work: **Remark 1**.: _Fig. 2(b) indicates that the vision encoder usually accounts for the majority of overhead in a language-image model, especially for ALBEF [29] and BLIP [30]. 
Given this observation, reducing the computational cost of the vision encoder will yield great improvement in model training efficiency._ **Remark 2**.: _The vision tokens from ViT are redundant in their representations [4, 36], as is the case for these language-image models. Moreover, the attention distribution becomes increasingly concentrated for deeper ViT layers (as seen in Fig. 2(a)). One insight from this observation is that we can progressively remove these tokens that are less useful for the image-text matching objective to achieve computational efficiency._ ### Method Architecture In light of the above two observations, this paper aims to study the _efficient language-image pre-training_ by means of _vision token pruning and merging_. We do not remove image patches in the input space [35] as we believe some background patches still provide useful features for cross-modal semantic understanding. Instead, we propose to prune the vision tokens that are less influential for the matching goal of the given image-text pair. As noted in Remark 2, the redundancy of vision tokens increases as the depth of Transformer layers grows. In view of this, we design a progressive pruning strategy with multiple-stage blocks that follow the hierarchical ViT structures, such as Swin Transformer [39] and MetaFormer [60]. Specifically, our approach involves dividing a standard ViT encoder into four distinct and non-overlapping blocks, as illustrated in Fig. 3: * **Block I** remains unaltered for the first two ViT layers. Unlike the vision-only domain, both causal and background features contribute a lot to the semantic understanding in a language-image model. It is thus preferable to leave these layers close to the input unchanged. * **Block II** consists of two layers and prunes a few of all the vision tokens (_e.g_. 10%) that are redundant. * **Block III** removes many more tokens (such as 25%) preceding the next six layers. Fig. 2 shows that the attention maps tend to become increasingly concentrated for deeper layers, indicating that the model focuses primarily on a small number of representative image regions. * **Block IV** further performs token pruning and keeps \(\alpha\) (_e.g_. 40\(\%\)) of the entire vision tokens with the last two layers, which are crucial for the subsequent multi-modal fusion. Inspired by MaskAE [15] approaches, we demonstrate that we can maintain a comparable fine-tuning model performance by retaining only a small group of vision tokens. ### Vision Token Pruning and Merging Fig. 1 illustrates the encoding process typically used in language-image models [12, 29], in which text and image inputs are processed separately. This non-parallel operation allows us to leverage the output of the text encoder to help remove irrelevant tokens in the vision encoder, which can provide significant benefits over vision-only pruning models [4, 36]. Moreover, the alignment between the image and text is determined by the features extracted from the [CLS] token. As a result, we employ the fusion of these two sets of features to jointly decide which tokens are influential for each given block. We outline the process of the algorithm in the supp. Specifically, the number of tokens reduces from 1 + \(M_{i}\) to 2 + \(\alpha M\) for each block with the pruning and merging approach. Here, \(M_{i}\) and \(M\) represent the token numbers of the current block and the input, respectively. Figure 3: Overview of our proposed ELIP method. ELIP is composed of four sequential blocks and the corresponding numbers of layers are respectively 2, 2, 6, and 2. To reduce the computational overhead, we prune and merge less influential vision tokens based on the features of the vision [CLS] and text [CLS] tokens for the last three blocks. The retaining ratio \(\alpha\) is defined in Sec. 3.2 and is always less than 1.0, _e.g_. 0.4 for the last block. To this end, we first replace the image [CLS] token features with the fusion of itself and the text [CLS] features, \[\mathbf{X}_{[CLS]}^{v}=\lambda\mathbf{X}_{[CLS]}^{v}+(1-\lambda)\mathbf{X}_{[CLS]}^{t}, \tag{1}\] where \(\lambda\) is a coefficient hyper-parameter balancing the contribution of vision and text tokens. Next, we perform token pruning and merging without considering gradients, _i.e_. in a _stop-gradient_ fashion. All the vision token features \(\mathbf{X}^{v}\) are thereafter fed to each layer of the current block and only the attention values of [CLS] from the final layer, namely \(\xi\), are preserved, \[\mathbf{X}^{v},\xi=\text{BLOCK}_{i}(\mathbf{X}^{v};\mathbf{\Theta}), \tag{2}\] where \(\Theta\) represents the involved parameters, with no gradient during this computation. We then retain those token features based on a pre-defined retaining ratio \(\alpha\), \[\overline{\mathbf{X}}^{v}=concat(\mathbf{X}_{[CLS]}^{v};\{\mathbf{X}_{j}^{v}\}_{j\in\{\text{top-}\alpha M(\xi)\}}), \tag{3}\] where top-\(n()\) denotes the index set with the largest \(n\) values. For the remaining tokens, we merge them into a single token according to their attention after re-normalization, \[\begin{cases}\hat{\mathcal{M}}_{i}=\mathcal{M}_{i}\setminus\{\text{top-}\alpha M(\xi)\},\\ \hat{\xi}=norm(\{\xi_{j}\}_{j\in\hat{\mathcal{M}}_{i}}),\\ \mathbf{X}_{merge}^{v}=\sum_{k\in\hat{\mathcal{M}}_{i}}\mathbf{X}_{k}^{v}\hat{\xi}_{k},\end{cases} \tag{4}\] where \(\mathcal{M}_{i}\) represents the overall token index set from \(1\to M_{i}\) of the current block \(i\). We finally concatenate the merged token with the remaining tokens after pruning. This approach ensures that the subsequent ViT layers will consider a smaller number of tokens, leading to more efficient processing. ### Method Analysis #### 3.4.1 An In-depth Understanding Our method leverages the multi-modal information (_i.e_. weighted sum) for pruning, as demonstrated in Eqn. 1. We illustrate two extreme cases where \(\lambda\) takes the values of 0 or 1. On the one hand, when \(\lambda=0\), our method degrades to vision-only pruning, wherein there is no supervision from the text. On the other hand, when \(\lambda=1\), similar to TRIPS [22], the pruning is solely determined by the text, resulting in a significant drop in model performance. We relate this result to that of momentum update in MoCo [14]. Specifically, replacing the vision [CLS] with the text [CLS] introduces a substantial modality gap, which confuses the model's learning process in terms of which tokens it should focus on. In contrast, a slowly evolving vision [CLS] (\(0<\lambda<1\)) serves as the core to harmoniously maintain modality consistency. 
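To make the pruning-and-merging step defined by Eqns. (1)-(4) concrete, the following is a minimal NumPy sketch of one such step. The self-attention computation inside the block (Eqn. 2) is abstracted into the `attn_cls` input, and all names are illustrative rather than taken from our released code.

```python
import numpy as np

def prune_and_merge(x_vis, x_txt_cls, attn_cls, keep_ratio, lam=0.8):
    """One ELIP-style pruning/merging step (sketch of Eqns. 1, 3 and 4).

    x_vis     : (1 + M, d) vision tokens, row 0 is the vision [CLS] token.
    x_txt_cls : (d,) text [CLS] features from the text encoder.
    attn_cls  : (M,) attention of the fused [CLS] token over the M patch tokens,
                as produced by the last layer of the current block (Eqn. 2).
    keep_ratio: fraction alpha of the patch tokens to retain.
    lam       : fusion coefficient lambda of Eqn. 1.
    """
    x_vis = x_vis.copy()
    # Eqn. 1: fuse the vision [CLS] with the text [CLS].
    x_vis[0] = lam * x_vis[0] + (1.0 - lam) * x_txt_cls

    m = x_vis.shape[0] - 1
    k = max(1, int(round(keep_ratio * m)))

    # Eqn. 3: keep the k patch tokens with the largest [CLS] attention.
    order = np.argsort(attn_cls)[::-1]
    keep, drop = order[:k], order[k:]
    kept = np.concatenate([x_vis[:1], x_vis[1 + keep]], axis=0)

    # Eqn. 4: merge the dropped tokens into one, weighted by re-normalised attention.
    if drop.size:
        w = attn_cls[drop] / attn_cls[drop].sum()
        merged = (w[:, None] * x_vis[1 + drop]).sum(axis=0, keepdims=True)
        kept = np.concatenate([kept, merged], axis=0)
    return kept  # (2 + k, d), matching the "2 + alpha*M" token count in the text
```

In an actual pre-training run, this step would operate on framework tensors inside each of the last three blocks, with the fused [CLS] and the attention values computed in a stop-gradient fashion as described above.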
\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{TFLOPs} & \multirow{2}{*}{Latency} & \multirow{2}{*}{Mem} & \multicolumn{6}{c|}{Flickr30K} & \multicolumn{6}{c}{MSCOCO} \\ \cline{4-14} & & & & \multicolumn{3}{c|}{TR} & \multicolumn{3}{c|}{IR} & \multicolumn{3}{c|}{TR} & \multicolumn{3}{c|}{IR} \\ \cline{4-14} & & & & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline ViLT [24] & 9.74 & 573 & - & 83.5 & 96.7 & 98.6 & 64.4 & 88.7 & 93.8 & 61.5 & 86.3 & 92.7 & 42.7 & 72.9 & 83.1 \\ UNITER [8] & 0.20 & 31 & - & 87.3 & 98.0 & 99.2 & 75.6 & 94.1 & 96.8 & 65.7 & 88.6 & 93.8 & 52.9 & 79.9 & 88.0 \\ VILLA [13] & \(\sim\)0.60 & \(\sim\)93 & - & 87.9 & 97.5 & 98.8 & 76.3 & 94.2 & 96.8 & - & - & - & - & - & - \\ UNIMO [32] & - & - & - & 89.4 & 98.9 & 99.8 & 78.0 & 94.2 & 97.1 & - & - & - & - & - & - \\ \hline METER [12] & \(8.66\) & 494 & \(90.0\) & \(89.6\) & \(98.3\) & \(99.4\) & 77.0 & 94.5 & 97.5 & - & - & - & - & - & - \\ - EViT & 4.68 & 325 & 64.8 & 60.5 & 86.6 & 92.6 & 44.9 & 77.4 & 86.6 & - & - & - & - & - & - \\ - ELIP & 6.43 & 420 & 70.4 & 89.3 & 98.8 & 99.6 & 76.0 & 94.7 & 97.4 & - & - & - & - & - & - \\ \hline ALBEF [29] & 9.65 & 594 & 88.1 & 93.6 & 99.1 & 99.9 & 81.0 & 96.0 & 97.8 & 72.2 & 91.8 & 96.1 & 55.9 & 81.4 & 88.8 \\ - EViT & 3.21 & 262 & 50.8 & 87.7 & 97.8 & 98.6 & 75.4 & 93.1 & 96.7 & 65.7 & 88.4 & 94.0 & 49.7 & 77.1 & 85.8 \\ - ToMe & 6.66 & 450 & 69.6 & 92.1 & 98.7 & 99.6 & 78.1 & 94.6 & 97.6 & 68.8 & 90.1 & 94.9 & 51.9 & 79.1 & 87.1 \\ - ELIP & 8.50 & 518 & 69.6 & 93.4 & 99.3 & 99.8 & 80.6 & 95.4 & 97.7 & 71.8 & 91.6 & 95.7 & 55.0 & 80.8 & 88.4 \\ \hline BLIP [30] & 11.03 & 1,102 & 90.8 & 94.2 & 99.1 & 99.9 & 81.4 & 95.6 & 98.1 & 72.8 & 92.1 & 96.1 & 56.6 & 81.7 & 88.9 \\ - EViT & 4.80 & 536 & 60.8 & 87.3 & 98.5 & 99.4 & 75.1 & 93.5 & 96.4 & 66.8 & 88.9 & 93.9 & 50.8 & 77.9 & 86.3 \\ - ToMe & 6.98 & 740 & 72.0 & 91.5 & 98.8 & 99.4 & 80.5 & 95.6 & 97.9 & 71.5 & 91.6 & 95.9 & 55.3 & 81.2 & 88.7 \\ - ELIP & 9.34 & 960 & 74.7 & 92.2 & 99.1 & 99.7 & 80.3 & 96.0 & 98.0 & 72.0 & 91.9 & 95.9 & 56.3 & 81.2 & 88.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of text retrieval (TR) and image Retrieval (IR) performance on Flickr30K and MSCOCO datasets. The TFLOP calculations are based on a batch size of 36, and the memory usage estimates are only applicable to the tested backbones and our proposed methods. Latency: ms; Mem: GB. \begin{table} \begin{tabular}{l|c c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{VQA} & \multicolumn{2}{c}{NLVR\({}^{2}\)} \\ \cline{2-6} & test-dev & test-std & dev & test-P \\ \hline VisualBERT [31] & 70.80 & 71.00 & 67.40 & 67.00 \\ ViLT [24] & 71.26 & - & 75.24 & 76.21 \\ LXMERT [53] & 72.42 & 72.54 & 74.90 & 74.50 \\ UNITER [8] & 72.70 & 72.91 & 77.18 & 77.85 \\ 12-in-1 [41] & 73.15 & - & - & 78.87 \\ \hline ALBEF [29] & 74.57 & 74.79 & - & - \\ - ELIP & 74.33 & 74.48 & - & - \\ \hline METER [12] & 74.72 & 74.71 & 78.69 & 79.66 \\ - ELIP & 74.16 & 74.31 & 78.41 & 79.36 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison on VQA and NLVR\({}^{2}\) tasks. #### 3.4.2 Complexity Analysis To explain the efficiency of our method, let us consider the ViT-Base model, which contains a total of \(\pi\) vision tokens. 
Given that the ViT model comprises twelve layers, the overall memory complexity can be roughly estimated as \(\mathcal{O}(f(12\times\pi))\), where \(f\) denotes the token processing in a single layer. Let us assume the retaining ratios are 90%, 65%, and 40% for the last three blocks2, respectively. With our ELIP method, the resulting memory complexity with respect to tokens is reduced to \(\mathcal{O}(f((2+2\times 0.9+6\times 0.65+2\times 0.4)\times\pi))\approx \mathcal{O}(f(8.5\times\pi))\), which corresponds to a reduction of approximately 30% in memory usage relative to the original baseline model. This reduction enables the pre-training of language-image models using larger batch sizes or deeper layers, while also reducing the computational complexity as fewer tokens are taken in the self-attention operation. Footnote 2: The ratios are defined based on the overall tokens. ## 4 Experiments ### Experimental Settings #### 4.1.1 Pre-training We utilized four publicly available large-scale datasets for pre-training: MSCOCO Caption [37], Visual Genome [26], SBU [42], and Conceptual Captions [47], which together provide image-text pairs with \(\sim\)4M images. We employed five downstream vision-language tasks in this work. To evaluate the generalization performance, we applied our ELIP method to three recent popular language-image pre-trained models, _i.e_. ALBEF [29], BLIP [30], and METER [12]. For each individual model, we trained it from scratch using four NVIDIA A5000 GPUs and kept most of the experimental settings untouched, except for reducing the batch size due to resource constraints. We fixed the coefficient parameter \(\lambda\) to 0.8 in Eqn. 1 for all models. The detailed implementation of each model can be found in the supplementary material. Moreover, we adopted the PyTorch profiler API to quantify the FLOPs and latency consumed by each model. #### 4.1.2 Compared Baselines As the efficient language-image models are quite sparse in literature, we thus adapted two SOTA vision token pruning baselines in the vision-only domains for comparison - **EViT**[36] and **ToMe**[4]. Both methods prune the vision tokens for each ViT layer in an unsupervised manner. Specifically, EViT leverages the corresponding class token attention, while ToMe merges redundant tokens based on a bipartite soft matching algorithm. ### Overall Results In Table 1, 2, and 3, we present the performance comparison of our approach with other state-of-the-art methods on five downstream language-image tasks, involving six datasets in total. The reported TFLOP values (both forward and backward) are estimated with a batch size of 36, and the GPU memory usage is calculated based on four A5000 GPUs (only backbone with and without ELIP method). We excluded some experiments due to resource reasons, _e.g_. VQA on the BLIP approach, or incompatibilities, such as METER and ToMe. The main observations are as follows: * Previous strong language-image pre-training methods, such as UNITER [8] and VILLA [13] often employ pre-extracted object features for vision encoder. While these approaches can be less computationally expensive in terms of TFLOPs, the retrieval results are often inferior to the recent models with ViT encoders such as ALBEF [29] and BLIP [30]. * EViT and ToMe, though reduce the model complexity by a large margin, often trade drastic model performance over these downstream tasks. 
For example, when applying EViT to the METER model, there is a significant drop of 20 to 30 points in R@1 for both text and image retrieval. * Unlike the two baseline methods and other compared approaches, our ELIP model achieves a superior efficiency-effectiveness trade-off. Specifically, across all five downstream tasks, our model yields an average accuracy drop of less than 0.33 for the three backbone models, evidently demonstrating its effectiveness and generalization ability. ### Ablation Study #### 4.3.1 Text Token Removal In typical language-image pre-training datasets, such as Conceptual Captions [47], the text is often accompanied by a short context, consisting of approximately 20 words per sentence. Moreover, previous studies have shown that language tokens are typically less redundant and have a higher \begin{table} \begin{tabular}{l|c c} \hline \hline Model & \multicolumn{1}{c}{val} & \multicolumn{1}{c}{test} \\ \hline 12-in-1 [41] & - & 76.95 \\ UNITER [8] & 78.59 & 78.28 \\ \hline ALBEF [29] & 79.33 & 79.41 \\ - EViT & 78.54 & 78.75 \\ - ToMe & 78.69 & 78.76 \\ - ELIP & 79.24 & 79.38 \\ \hline METER [12] & 79.94 & 79.41 \\ - EViT & 74.65 & 73.89 \\ - ELIP & 79.59 & 79.10 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison on SNLI-VE (a) and Image Captioning (b) tasks. information density in their representation [15, 35]. As a result, in our method, we did not introduce text token pruning and only performed ablation experiments to investigate its influence on model performance. Specifically, we preserved the first half of all Transformer layers and pruned 40% of text tokens in the second half to achieve a balance between efficiency and effectiveness. We designed three pruning strategies for this experiment: _Random_ - The tokens are randomly pruned; _Post_ - We prioritized pruning the post tokens; and _Learned_ - We used the [CLS] token features from the vision encoder to guide the pruning of text tokens. We run these models for **three** pre-training epochs and report the results in Table 4. One can observe that: I) Among the three approaches, random pruning tends to perform unsatisfactorily due to the unexpected removal of crucial text tokens, leading to inferior performance. II) Pertaining to text token pruning, the BLIP model is less affected than the METER model. For instance, with the post-pruning approach, the BLIP model's performance even slightly surpasses that of the non-pruned model. #### 4.3.2 Token Merging _v.s._ Pruning One alternative way to deal with the inattentive tokens is to directly prune them. Note that this pruning-only strategy leads to a slight efficiency improvement compared to the merging one. To study its effectiveness in downstream fine-tuning, we removed the merging operation in Eqn. 4 and observed the performance change of this model. As shown in Fig. 4, we can see that compared with the token merging, the pruning-only strategy usually results in inferior downstream performance. This finding implies that these less attentive tokens still contribute to the final model performance, and removing them, especially from the input image space (as proposed in [35]) may lead to sub-optimal downstream fine-tuning results. #### 4.3.3 Effect of Coefficient Parameter \(\lambda\) We also experimented with different coefficient values in Eqn. 1. We conducted this experiment with three pre-training epochs and show the downstream performance change using different \(\lambda\) in Fig. 4. 
The figure indicates that using the vision or text [CLS] token only for the supervision of token pruning leads to inferior outcomes. On the other hand, the combination of these two, _i.e._ when \(\lambda=0.6\) consistently outperforms the other values tested. This result supports the validity of leveraging multi-modal feature interaction for token pruning and merging in our method. In addition, we also conducted experiments on the influence of downstream pruning and pre-training epochs, and reported the results in the supplementary material. ### Pre-training Scaling Our method uses fewer vision tokens during training compared to baseline models, allowing us to spare GPU memory and scale the model to larger batch sizes. To study this effect, we carefully increased the pre-training batch size while ensuring that the required GPU memory remained less than the original pre-training. Besides, we also estimated the latency of each pre-training epoch. We performed this test on the Flickr30K dataset and illustrated the results in Table 5. Our observations for this result are three-fold: * Our ELIP method is able to maintain performance similar to the base model, while also accelerating the pre-training process and reducing the required GPU memory usage. * The spared GPU memory enables us to scale the pre-training with larger batch sizes, _i.e._ ELIP\({}_{+}\) approach. For example, with METER, we increased the batch size from 36\(\times\)4 to 54\(\times\)4, resulting in a significant improvement in training efficiency, and a reduction in BLIP pre-training time by approximately 15%. * In terms of fine-tuning, our ELIP\({}_{+}\) surpasses the ELIP by a large margin, and even slightly outperforms the base model in some cases. These results are rather promising as scaling model pre-training brings significant improvement in both downstream performance and efficiency. \begin{table} \begin{tabular}{c|c|c c|c c} \hline \hline \multicolumn{2}{c|}{Model} & \multicolumn{2}{c|}{TR} & \multicolumn{2}{c}{IR} \\ \cline{3-6} & & R@1 & R@5 & R@1 & R@5 \\ \hline \multirow{4}{*}{METER} & No Pruning & 84.2 & 97.7 & 69.5 & 92.3 \\ & Random & 73.8 & 93.6 & 57.9 & 88.1 \\ & Post & 76.1 & 94.6 & 61.6 & 89.0 \\ & Learned & 76.9 & 94.8 & 60.9 & 89.2 \\ \hline \multirow{4}{*}{BLIP} & No Pruning & 91.5 & 98.8 & 77.9 & 94.6 \\ & Random & 89.9 & 98.2 & 75.8 & 94.1 \\ \cline{1-1} & Post & 91.6 & 98.9 & 78.2 & 94.5 \\ \cline{1-1} & Learned & 91.1 & 98.8 & 77.7 & 94.5 \\ \hline \hline \end{tabular} \end{table} Table 4: The effect of three text pruning approaches on retrieval results of the Flickr30K dataset. Figure 4: Component effect on the text retrieval performance over the Flickr30K dataset. Left: Performance comparison of pruning-only and pruning-then-merging approaches. Right: Performance change with respect to the feature combination coefficient parameter \(\lambda\) in Eqn. 1. ### Visualization As illustrated in Sec. 3.2, our method consists of four blocks, wherein we perform pruning and merging in the last three blocks. To quantitatively demonstrate the effectiveness of our pruning approach, we randomly selected two cases and presented them in Fig.5. In particular, we mainly show the pruned attention map for two ViT layers: 2 and 10, and the effective vision tokens are gradually reduced with deeper ViT layers. From this figure, we can observe that our method progressively removes less important vision tokens with deeper ViT layers. 
For example, in the first case, the model gradually filters out the background information and focuses more on the _sheep_. A similar observation can also be found in the second case where the _kite_ gains more attention in the 10\(th\) ViT layer. More visualizations can be found in the supplementary material. ## 5 Conclusion and Future Work In this paper, we propose a novel approach to achieve efficient language-image pre-training without introducing any additional trainable parameters. We progressively prune and merge less influential vision tokens based on the language output using multiple stages. Despite its simplicity, we show that our approach helps remove 30% vision tokens whilst maintaining comparable performance with backbones over diverse downstream fine-tuning tasks. Our method offers valuable insights for future research in language-image pre-training under limited computing resources, and may potentially benefit other multi-modal pre-training tasks such as video-language pre-training. While our method demonstrates effectiveness in efficiency and scalability, one limitation is the lack of flexibility in the pruning ratio definition. Therefore, an adaptive approach may be more helpful as different images often exhibit varying degrees of information sparsity. In addition, our method can be seamlessly integrated with other efficient techniques, such as mixed-precision computation and checkpointing, and thus build an even more efficient and lightweight language-image pre-training model. Figure 5: Visualization of pruning results with respect to two ViT depths: 2 and 10. Note that the effective vision tokens are gradually decreased by our method. We omit the merged tokens and show only the attention maps of the remaining ones for a clear illustration. \begin{table} \begin{tabular}{c|c|c|c|c c c|c c c} \hline \hline \multicolumn{2}{c|}{Model} & \multicolumn{2}{c|}{batch size\({\uparrow}\)} & \multicolumn{2}{c|}{latency\({\downarrow}\)} & \multicolumn{4}{c|}{TR} & \multicolumn{4}{c}{IR} \\ \cline{4-10} & & & & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline \multirow{3}{*}{METER} & Base & 36\(\times\)4 & 595m & 89.6 & 98.3 & 99.4 & 77.0 & 94.5 & 97.5 \\ & ELIP & 36\(\times\)4 & 570m & 89.3 & 98.8 & 99.6 & 76.0 & 94.7 & 97.4 \\ & ELIP\({}_{+}\) & 54\(\times\)4 & 500m & 88.7 & 98.4\({}_{+1}\) & 99.4 & 75.8 & 94.2 & 97.2 \\ \hline \multirow{3}{*}{ALBEF} & Base & 40\(\times\)4 & 441m & 93.6 & 99.1 & 99.9 & 81.0 & 96.0 & 97.8 \\ & ELIP & 40\(\times\)4 & 406m & 93.4 & 99.3 & 99.8 & 80.6 & 95.4 & 97.7 \\ & ELIP\({}_{+}\) & 58\(\times\)4 & 369m & 93.7\({}_{+1}\) & 99.3\({}_{+2}\) & 100.0\({}_{+1}\) & 81.1\({}_{+1}\) & 95.6 & 98.0\({}_{+2}\) \\ \hline \multirow{3}{*}{BLIP} & Base & 42\(\times\)4 & 722m & 94.2 & 99.1 & 99.9 & 81.4 & 95.6 & 98.1 \\ & ELIP & 42\(\times\)4 & 628m & 92.2 & 99.1 & 99.7 & 80.3 & 96.0 & 98.0 \\ \cline{1-1} & ELIP\({}_{+}\) & 56\(\times\)4 & 587m & 92.7 & 99.2\({}_{+1}\) & 99.7 & 80.7 & 95.4 & 98.0 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance of scaling ELIP pre-training to larger batch sizes. We also show the relative improvement of ELIP\({}_{+}\) over the original Base models on the retrieval results.
2309.10349
Resolving moving heliospheric structures using interplanetary scintillation observations with the Murchison Widefield Array
We have conducted a blind search in 49 consecutive days of interplanetary scintillation observations made by the Murchison Widefield Array from mid-2019, with overlapping daily observations approximately East and South-East of the Sun at an elongation of $\sim$30 degrees and a field of view of 30 degrees. These observations detect an unprecedented density of sources. In spite of these observations being taken at sunspot minimum, this search has revealed several interesting transitory features characterised by elevated scintillation levels. One solar wind enhancement is captured in two observations several hours apart, allowing its radial movement away from the Sun to be measured. We present here a methodology for measuring the plane-of-sky velocity for the moving heliospheric structure. The plane-of-sky velocity was inferred as $0.66\pm0.147\,^{\text{o}}\text{hr}^{-1}$, or $480\pm106\,\text{km}\,\text{s}^{-1}$ assuming a distance of 1AU. After cross-referencing our observed structure with multiple catalogues of heliospheric events, we propose that the likely source of our observed structure is a stream-interaction region originating from a low-latitude coronal hole. This work demonstrates the power of widefield interplanetary scintillation observations to capture detailed features in the heliosphere which are otherwise unresolvable and go undetected.
A. Waszewski, J. S. Morgan, R. Chhetri, R. Ekers, M. C. M. Cheung, N. D. R Bhat, M. Johnston-Hollitt
2023-09-19T06:18:39Z
http://arxiv.org/abs/2309.10349v1
Resolving moving heliospheric structures using interplanetary scintillation observations with the Murchison Widefield Array ###### Abstract We have conducted a blind search in 49 consecutive days of interplanetary scintillation observations made by the Murchison Widefield Array from mid-2019, with overlapping daily observations approximately East and South-East of the Sun at an elongation of \(\sim\)30 degrees and a field of view of 30 degrees. These observations detect an unprecedented density of sources. In spite of these observations being taken at sunspot minimum, this search has revealed several interesting transitory features characterised by elevated scintillation levels. One solar wind enhancement is captured in two observations several hours apart, allowing its radial movement away from the Sun to be measured. We present here a methodology for measuring the plane-of-sky velocity for the moving heliospheric structure. The plane-of-sky velocity was inferred as \(0.66\pm 0.147\,\mathrm{\SIUnitSymbolMicro hr}^{-1}\), or \(480\pm 106\,\mathrm{km\,s^{-1}}\) assuming a distance of 1AU. After cross-referencing our observed structure with multiple catalogues of heliospheric events, we propose that the likely source of our observed structure is a stream-interaction region originating from a low-latitude coronal hole. This work demonstrates the power of widefield interplanetary scintillation observations to capture detailed features in the heliosphere which are otherwise unresolvable and go undetected. 1International Centre for Radio Astronomy Research, Curtin University, Bentley, WA 6102, Australia 2CSIRO Space and Astronomy, P.O. Box 1130, Bentley, WA 6102, Australia 3CSIRO Space and Astronomy, P.O. Box 76, Epping, NSW 1710, Australia 4Curtin Institute for Data Science, Curtin University, Bentley, WA 6102, Australia ## 1 Introduction Interplanetary scintillation (IPS) was discovered in 1964 (Clarke, 1964; Hewish et al., 1964). It was observed as amplitude scintillation at radio wavelengths as radiation from compact objects (0.3 arcsec for 162 MHz) traverse through the irregularities of the solar wind (Coles, 1978). IPS, as a general technique, has been used to study the solar wind, solar wind transients, and for inner-heliospheric observations for over 55 years. This technique can give exclusive perspectives of the behaviour of the heliosphere. In particular, IPS observations allow for the solar wind to be inferred over all solar latitudes and a wide range of heliocentric distances (e.g. Bisi et al., 2009). IPS is often characterised by the g-level (e.g., Gapper et al., 1982), also known as the scintillation enhancement factor, which is the simplest measure of space weather. As g-level is dependent on scintillation, it can be used as a proxy for the density of the solar wind along the line-of-sight (Tappin, 1986). As is common with space weather analysis using IPS, the g-level measure is widely used in this paper (e.g., Section 2.2 and following sections). Recent studies that use IPS techniques for solar weather purposes (e.g. Iwai et al., 2019, 2021; Tokumaru et al., 2000; Tokumaru, 2013; Kojima et al., 2013; Chang et al., 2016; Jackson et al., 2007, 2008; Tokumaru et al., 2019) all use instruments such as the Solar Wind Imaging Facility (SWIFT, Tokumaru et al., 2011), the Ooty Radio Telescope (ORT, Manoharan, 2010), the Low-Frequency Array (LOFAR, van Haarlem et al., 2013), and the Mexican Array Radio Telescope (MEXART, Gonzalez-Esparza et al., 2004). 
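As a quick numerical check of the plane-of-sky velocity conversion quoted in the abstract, the short sketch below converts the measured angular rate into a linear speed under the stated assumption that the structure lies at a distance of 1 AU; the variable names are ours and the snippet is purely illustrative.

```python
import math

AU_KM = 1.495978707e8       # 1 astronomical unit in km
omega_deg_per_hr = 0.66     # plane-of-sky angular velocity from the abstract

omega_rad_per_s = math.radians(omega_deg_per_hr) / 3600.0
v_km_s = AU_KM * omega_rad_per_s
print(f"{v_km_s:.0f} km/s")  # ~479 km/s, consistent with the quoted 480 +/- 106 km/s
```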
The Murchison Widefield Array (MWA, Tingay et al., 2013; Wayth et al., 2018), brings unique capabilities to the field as it can be used to make IPS measurements of hundreds of sources across the southern sky simultaneously, owing to its large field of view (\(\sim 900\) deg\({}^{2}\) at 160 MHz) and higher instantaneous sensitivity. The MWA is a low-frequency radio telescope, operating in the frequency range of 70-300 MHz and is located in the Murchison shire of Western Australia. It consists of 4 096 antennas arranged in 256 tiles (128 of which are used at any one time) of 4\(\times\)4 dipoles, which are distributed over an area extending over several kilometres in radius. With longer baselines extending out to \(\sim\)5-6 km, the MWA provides an angular resolution \(\sim\)1 arc-minute (at 160MHz). The MWA was designed and built to be a flexible general purpose instrument, supporting many different science goals (Beardsley et al., 2019). The MWA is able to provide a time resolution of 0.5 s in imaging mode, short enough to measure IPS, allowing Morgan et al. (2018) and Chhetri et al. (2018) to use IPS in order to identify and characterise compact radio sources. At MWA frequencies, this can most easily be done at solar elongations of around 30\({}^{\circ}\). More recently, Morgan et al. (2023) demonstrated that a Coronal Mass Ejection (CME), detected at launch in white-light coronagraph images, could be detected in interplanetary space using MWA IPS observations, with the unprecedented number of lines of sight allowing the CME to be mapped in exquisite detail, using just two 5-minute observations. More recent data, taken by the upgraded Phase II MWA (Wayth et al., 2018) has been synthesised by Morgan et al. (2022) into the first data release of the Phase II IPS survey, which catalogs the IPS properties of over 40 000 sources from the GLEAM survey (Wayth et al., 2015; Hurley-Walker et al., 2017), with IPS strongly detected in over 7000 of them. As well as providing baseline scintillation levels for all of these sources, facilitating space weather studies, a further byproduct of this survey is a set of 250 10-minute MWA IPS observations, with the scintillation index of several hundred IPS sources measured in each of them. These data are the starting point for the study presented here. A subset of 93 observations taken across 49 contiguous days of observations was chosen from the survey to form the basis of this study, since they provide continuous observations of the eastern limb of the Sun. While conducting a blind search of this data, we identified several heliospheric structures. One in particular was detected in two observations spaced 96 minutes apart, allowing us to infer the plane-of-sky angular velocity of the structure. This structure is the focus of this paper, which is organised as follows: in Section 2 we describe how the MWA IPS observations were chosen, and how they were further reduced to probe the solar wind. Section 3 outlines the identification of the heliospheric structure, and discusses the novel method of inferring the plane-of-sky velocity with errors, calculated using jack-knife tests (Tukey, 1958), of a solar wind structure in IPS observations. We also discuss the possible solar origins of this structure. Section 4 discusses the implications that this work has on future heliospheric studies and modeling, and concludes with suggestions for further work. To describe our observations we use a range of heliocentric based coordinate systems. 
A description of each frame can be found in Section 5. ## 2 Methodology ### Observations For the MWA Phase II IPS survey, 10-minute observations were taken almost daily between 2019-FEB-04 and 2019-AUG-18, totalling 1 511 observations. The survey sampled from 6 principal target fields at an \(\epsilon\) of 30\({}^{\circ}\) (\(\sim\) 107 solar radii from the Sun) at \(\phi\) of 60\({}^{\circ}\), 90\({}^{\circ}\), 120\({}^{\circ}\), 240\({}^{\circ}\), 270\({}^{\circ}\), 300\({}^{\circ}\), with an occasional additional 5 target fields at 30\({}^{\circ}\), 150\({}^{\circ}\), 180\({}^{\circ}\), 210\({}^{\circ}\), 330\({}^{\circ}\) (Morgan et al., 2022); see left-most panel of Fig. 1. However, only 263 of the total observations were selected for the first data release (Morgan et al., 2022); a large number of observations remain unprocessed. For the first blind search for space weather events we chose to focus on a set of 93 observations spanning 49 contiguous days (from 2019-JUN-25 to 2019-AUG-14, with two days with no observations taken, 2019-JUL-16 and 2019-AUG-01), with two pointing directions (E and SE, specified with a solid line outline in Fig. 1) observed on all but five days (2019-JUN-25, 2019-JUN-29, 2019-JUL-02, 2019-JUL-27, and 2019-AUG-06), which only had the SE pointing direction processed. ### Space Weather Analysis of MWA Data At radio wavelengths there are other sources of variability that can influence how a source scintillates. One of the main contributors is the ionosphere. Ionospheric scintillation acts on a longer time-scale and also affects sources that are much larger in size. Figure 1: a) IPS survey target fields for 2019-AUG-04, including 4 of the principal target fields coloured in blue (W, 90\({}^{\rm o}\); SE, 240\({}^{\rm o}\); E, 270\({}^{\rm o}\); NE, 300\({}^{\rm o}\); all relative to Ecliptic North) and 3 of the additional target fields only outlined (SSW, 150\({}^{\rm o}\); S, 180\({}^{\rm o}\); SSE, 210\({}^{\rm o}\)) in helioprojective coordinates (HPC). All pointings are taken at 30\({}^{\rm o}\) from the Sun (0.5 AU or \(\sim\,107\) solar radii). The two pointing directions chosen for the blind search are outlined in a solid line, with pink for the SE (earlier observation) and green for the E (later observation) pointings. The locations of STEREO-A (black square) and STEREO-B (grey diamond) are also shown. b) The line of sight for both target pointings in the ecliptic plane, with their associated piercepoints (point of closest approach with the Sun) in heliocentric coordinates (HCC). The locations of STEREO-A (black square), STEREO-B (grey diamond), and SOHO (black triangle) are also shown. c) The line of sight for both target pointings in the meridional plane, with their associated piercepoints in HCC. The locations of STEREO-A, STEREO-B, and SOHO (black triangle) are also shown. These differences allow ionospheric scintillation to be easily filtered out in the variability of compact radio sources (Morgan et al., 2018). The full filtering, calibration (Offringa et al., 2015), and imaging procedure, as well as the methodology for extracting variability from the image plane, are outlined in detail in Sections 2 and 3 of Morgan et al. (2022). Here we describe the extra steps needed for a space weather analysis of the same data. Note that a space weather analysis of MWA IPS data containing a CME has already been carried out (Morgan et al., 2023), but the approach used here differs in that we used the published catalogue of Morgan et al. 
(2022) to provide the reference scintillation levels required. Morgan et al. (2022) model the scintillation index, \(m_{\rm pt}\), of an unresolved source due to the average background solar wind as \[m_{\rm pt}=0.06\lambda(ep)^{-b}, \tag{1}\] (Rickett, 1973; Manoharan, 1993), where \(\lambda\) is the wavelength (in metres) of the observation, \(e\) is the elliptical term defined as \[e=\sqrt{2.25\sin^{2}(\phi)+\cos^{2}(\phi)} \tag{2}\] (Morgan et al., 2019), and \(p\) is the point of closest approach of the life-of-sight to the Sun, also known as the piercepoint. At a solar elongation of \(\epsilon\), the piercepoint is at a distance of \(\sin(\epsilon)\) (in AU) from the Sun. The geometry of the IPS line-of-sight for the chosen pointings in this work is shown in the right-hand panels of Fig 1, showing where the piercepoint is located depending on the pointing around the Sun looking at both the ecliptic and meridional plane. The constant of proportionality for \(m_{\rm pt}\), the ellipticity term \(e\), and the power-law index \(b\), will all vary during the solar cycle, as well as between solar cycles. Where applicable we have used their long term average values for solar minimum which have been established as a constant of proportionality of 0.06 and a power law index, \(b\approx 1.6\)(Morgan et al., 2022, 2019, and references therein). We use as our starting point the scintillation indices, error estimations, and normalised scintillation indices (NSI, Chhetri et al., 2018) for each source in each observation from the processed observations used in Morgan et al. (2022). Sources in general vary in scintillation level due to the inherent source structure, therefore we use the NSI per source as it gives the scintillation of a source relative to a compact source. #### 2.2.1 Determining g-levels The g-level, as introduced in Section 1, also known as the scintillation enhancement factor, is a measure of how much a particular source's scintillation is departing from its norm in a particular observation. In basic terms it is the observed scintillation level relative to a baseline, expected scintillation and is defined as, \[g=\frac{m_{\rm obs}}{m_{\rm pt}\times\rm NSI}, \tag{3}\] where \(m_{\rm obs}\) is the observed scintillation of a source in a particular observation. As mentioned previously in Section 1, the g-level is used as a proxy for the density of the solar wind along the line-of-sight (Tappin, 1986) as it is dependent on scintillation. The scintillation level is approximately proportional to the square of the electron density integrated along the line of sight (Morgan et al., 2019, and references therein), but it should be noted that the exact relationship between the g-level and the electron density is not relevant to the analysis that is done in this work. We defer the determination of the quantitative physical characteristic to future work. However, for the purposes of detecting structures in the g-level, a proper baseline scintillation level must be used. It is necessary to define a baseline scintillation level which takes into account the distance from the Sun, as well as the fact that the polar solar wind is more diffuse. The ellipticity term defined in Eq. 2 will account for the latter, therefore we combine the expected scintillation of a source from Eq. 1 with the NSI to account for source structure, to give the expected scintillation level of the source in question. This differs to the analysis in Morgan et al. 
(2023), as the NSI for each source was calculated over the full set of observations taken for the survey, rather than just using the observation in question. By calculating the g-level of every source within the field of view (FOV), we can add this information to a map of the sky called a g-map, which can be used to detect regions of enhanced scintillation. An example is shown on the right hand panel of Fig. 2. In this particular observation there are 1 345 sources, of which 1 055 have measured g-levels. The majority of sources in the g-map in Fig. 2 are clustered around a g-level of 1, as shown by the distribution of g-levels to the left of Fig. 2, with only a handful showing major deviations in scintillation levels. At the high signal to noise limit there is a scatter of about 20% around a g-level of 1, consistent with what is shown in Tappin (1986). Therefore we conclude that in this case there is little large-scale structure (few to no CMEs), as expected for a time of low solar activity. If denser areas of solar wind are present in the field due to increased solar activity, the g-map would contain clusters of sources with heightened g-levels. The further a source is from the centre of the FOV, the more likely it is to show a g-level other than 1. This is due to increased noise, as the sensitivity decreases away from the centre of the primary beam, where the MWA is most sensitive. During the processing of the survey catalogue, a large number of sources at very low signal to noise (1 \(\sigma\)) were kept. This was useful for the purposes of the survey, but because the individual g-level measurements of such low signal to noise sources can be unreliable, leading to an inaccurate g-map, any source with a signal to noise ratio (S/N) of less than 2 was excluded from our analysis. ## 3 Results Over the full search of 97 observations, more than 80% of the g-maps resembled the one shown in Fig. 2, with few extreme g-level sources and no obvious solar wind structures. The remaining \(\sim\)20% show an increased number of high g-level sources, some structured and some apparently random. We defer a more comprehensive analysis of the full data set to future work, and focus here on the most interesting structure detected. ### Identification of a Moving Heliospheric Structure Our blind search revealed a handful of structures, of which one event is particularly noteworthy, observed on 2019-AUG-04 at 04:56 UTC and 06:32 UTC. Three regions of enhanced scintillation are shown in Fig. 3, and were found in two observations separated by \(\sim\)1.5 hours; two prominent arcs, one closer to the Sun (Arc B) and one further east away from the Sun (Arc A), alongside a large shapeless mass on the very edge of the field. Although the structures are irregular, which makes measuring the exact morphology of the features difficult, each arc is about 5\({}^{\rm o}\) in width, or \(\sim\) 18 solar radii wide, with the centre of the field being 0.5 AU (\(\sim\) 107 solar radii) from the Sun. This mass is not just on the edge of the field, it is also on the very edge of the primary beam, where the MWA is least sensitive, while also being on the edge of the IPS survey coverage area. By being on the edge of the survey area, these particular sources within the mass were observed only a handful of times, compared to those further in being observed multiple times. This can cause uncertainties in the NSI of the sources, therefore increasing the uncertainty of their g-level. 
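For reference, a minimal sketch of the g-level computation of Section 2.2.1 (Eqs. 1-3) and of the S/N \(\geq\) 2 cut described above is given below; the function and array names, and the example source values, are illustrative placeholders rather than quantities taken from the survey pipeline.

```python
import numpy as np

def m_point(wavelength_m, elongation_deg, position_angle_deg, b=1.6):
    """Model scintillation index of an unresolved source (Eqs. 1-2),
    using the quoted solar-minimum constants (0.06, b ~ 1.6)."""
    phi = np.radians(position_angle_deg)
    e = np.sqrt(2.25 * np.sin(phi) ** 2 + np.cos(phi) ** 2)  # ellipticity term, Eq. 2
    p = np.sin(np.radians(elongation_deg))                   # piercepoint distance in AU
    return 0.06 * wavelength_m * (e * p) ** (-b)             # Eq. 1

def g_level(m_obs, nsi, wavelength_m, elongation_deg, position_angle_deg):
    """g-level (Eq. 3): observed index over the expected point-source
    index scaled by the catalogued NSI of the source."""
    return m_obs / (m_point(wavelength_m, elongation_deg, position_angle_deg) * nsi)

# Illustrative values only: three sources observed at ~162 MHz (lambda ~ 1.85 m).
m_obs = np.array([0.30, 0.55, 0.10])
nsi = np.array([0.9, 1.0, 0.4])
snr = np.array([8.0, 1.5, 6.0])
g = g_level(m_obs, nsi, 1.85, 30.0, np.array([100.0, 110.0, 120.0]))
g = np.where(snr >= 2.0, g, np.nan)  # drop sources below the S/N = 2 cut
print(np.round(g, 2))
```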
Further adding to these arguments, the mass has a high \(\epsilon\), further increasing the uncertainty of the g-levels. For these reasons, this mass was excluded from the analysis. Arc A and Arc B are slightly misaligned from each other, while still moving radially away from the Sun, with a \(\phi\) of \(\sim\)102\({}^{\rm o}\) and a \(\phi\) of \(\sim\)115\({}^{\rm o}\) respectively. Figure 2: Left: Distribution of g-level compared to a source's signal to noise (S/N) within a MWA IPS observation. All sources on this distribution are included in the g-map on the right, but for further analysis, only those above the orange line are included. These sources meet the criteria of having a defined g-level with a S/N of above 2. Right: A heliocentric (Sun at the origin) g-map of a MWA IPS observation with 1 345 sources, where 78% of sources within the field have an associated g-level, either shown as a coloured circle (S/N of above 5) or a coloured plus (S/N of above 2, but below 5). Sources with no defined g-level are shown as black crosses, and those that have a defined g-level but are below a S/N of 2 are a coloured cross (colour associated to its g-level). These sources shown in crosses are excluded in further analysis. This example observation was taken on 2019-JUL-21 05:30 UTC. As evident from Fig. 3, the structure appears to be moving through the FOV radially away from the Sun. ### Measuring the Angular Velocity To make a measurement of the plane-of-sky velocity (i.e. the component of the true velocity perpendicular to the line of sight) of the detected heliospheric structure, we first interpolated the g-levels of all the sources onto a finer grid, giving us a smooth g-map, as shown in Fig. 4. As we expect the structure to move radially away from the Sun, we adopted a heliocentric coordinate system, as explained in Section 1. This reduces the problem to just one dimension, with change only in \(\epsilon\) for both arcs. In Fig. 4, we see both observations with their original g-map transferred to the new coordinate system on the top, with the bottom row showing this new interpolated map, with a pixel size of \(0.2^{\rm o}\times 0.1^{\rm o}\). The smoothed field was created by determining the g-level at any given point using a radial basis function (RBF, Buhmann, 2000) with a Gaussian form, \[\frac{\sum^{n}g_{n}w_{n}}{\sum^{n}w_{n}}\,,\,\,{\rm where}\,\,w_{n}=\exp\left( -0.5\left(\frac{r}{r_{\rm o}}\right)^{2}\right)\,\,{\rm for}\,\,{\rm r}<3^{\rm o}; \,0\,\,{\rm otherwise} \tag{4}\] Using this particular interpolation scheme with a radius of \(3^{\rm o}\) to search for nearby g-level measurements produced a smooth, completely defined g-map with no internal gaps in g-level, with both structures clearly visible in both observations. With fewer sources of lower S/N contributing to the g-level on the edges of the map, high g-level values can dominate and skew the map. This is of no concern to us as both Arcs A and B are well-defined within the centre of the g-map. As the overall scintillation enhancement appears to be stronger in the later observation (on the right-hand side of Fig. 4), this was used as the reference observation to define the location of Arc A by eye. A small box was defined to encompass the full arc, and the total g-level within the box was recorded. That same box was then layered atop the target observation (the earlier observation, on the left-hand side), and allowed to move freely in \(\epsilon\). 
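A minimal sketch of the Gaussian RBF smoothing of Eq. 4 and of the box-shift search just described is given below; the grid, the smoothing scale \(r_{\rm o}\), the box indices and the source arrays are placeholders and not the values used in our processing.

```python
import numpy as np

def smooth_gmap(eps_src, phi_src, g_src, eps_grid, phi_grid, r_o=1.0, r_max=3.0):
    """Interpolate scattered g-levels onto a regular (epsilon, phi) grid using
    the Gaussian radial basis function of Eq. 4 (separations in degrees)."""
    E, P = np.meshgrid(eps_grid, phi_grid, indexing="ij")
    out = np.full(E.shape, np.nan)
    for idx in np.ndindex(E.shape):
        r = np.hypot(eps_src - E[idx], phi_src - P[idx])
        w = np.where(r < r_max, np.exp(-0.5 * (r / r_o) ** 2), 0.0)
        if w.sum() > 0:
            out[idx] = np.sum(g_src * w) / np.sum(w)
    return out

def best_epsilon_shift(target_map, reference_map, box, shifts):
    """Slide the reference-defined box over the target map in epsilon only and
    return the pixel shift minimising the summed squared g-level difference.
    The candidate shifts are assumed to keep the box inside the map."""
    i0, i1, j0, j1 = box                       # epsilon and phi pixel ranges of the box
    ref = reference_map[i0:i1, j0:j1]
    costs = [np.nansum((target_map[i0 - s:i1 - s, j0:j1] - ref) ** 2) for s in shifts]
    return shifts[int(np.argmin(costs))], costs
```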
A shift in \(\epsilon\) of \(-2^{\rm o}\) corresponds to an angular velocity of \(-1.38^{\rm\,o}{\rm hr}^{-1}\), meaning the structure would be moving backwards, and a shift of \(6^{\rm o}\) corresponds to a velocity of \(4.11^{\rm\,o}{\rm hr}^{-1}\), equating to \(3000{\rm\,km\,s^{-1}}\) at 1 AU, faster than any CME detected. By minimising the sum of the differences in g-level squared within the box between the two observations, shown in Fig. 5, the optimal shift in \(\epsilon\) is determined. An estimated plane-of-sky velocity was found of \(0.66^{\rm\,o}{\rm hr}^{-1}\). Although we have a strong prior that the only movement that this feature would exhibit would be radial this far from the Sun, there could be some movements present in \(\phi\) (i.e. lateral changes). Running a similar analysis as was done for the radial velocity proved challenging due to the structures being extended in the radial direction, and once run, the jackknife tests indicated that, in contrast to the radial velocity, the result was highly dependent on which sources were included. It must also be noted, that this radial velocity estimation of Arc A depends on the assumption that this system only consists of a frozen screen moving in a radial direction through the field. This means that there are no changes within the screen itself between the two observations. This model holds true for Arc A, being well-defined and having a similar overall g-level in both observations, but this assumption does not hold for Arc B. Figure 4 clearly shows that in the earlier observation Arc B has a lower average g-level compared to the later observation. Although we are able to define the area of Arc B well in the later observation, as Arc B does not follow the assumed model, we are unable to obtain good constraints on Arc B's movement. For this reason, we continued the plane-of-sky velocity estimation with only Arc A. The same process was repeated in a series of jackknife tests (Tukey, 1958) to estimate the error on the plane-of-sky velocity. This involved each source in the field to be removed individually, the smooth g-map re-interpolated, and the new optimum \(\epsilon\) shift to be found. Each of these shifts in \(\epsilon\) were recorded, allowing two analyses to be done, first, a search for bias due to a single source dominating the smooth g-map, which none were found, and secondly, giving an estimate of the error of this velocity calculation. In total there were 1 413 jack-knife tests, equal to the total number of sources in the reference field, above the designated signal to noise ratio. The jack-knife test gave an error of 22%, with an adjusted final radial velocity of \(0.66\pm 0.147^{\rm\,o}{\rm hr}^{-1}\). Assuming a distance from Earth to the FOV of 1AU, it gives us a plane-of-sky velocity of \(480\pm 106{\rm\,kms^{-1}}\). Extensive testing show that this result was insensitive to the size of the box or the exact form of the smoothing function. ### Exploring Possible Origins In order to determine the possible origin of the heliospheric structure, we undertook a systematic search of already catalogued solar events in the literature. When a bright CME is detected by IPS observations, the g-level scintillation enhancements are expected to form arc-shaped structures, just as both Morgan et al. (2023) and Tokumaru (2013) found when studying large CMEs using the MWA and SWIFT, respectively. Tokumaru (2013) explained it as compressed plasma associated with the leading edge of the interplanetary CME (ICME). 
These arc-shaped structures are similar to those seen in this observed scintillation enhancement, which encourages the idea that the structures we see originate from the Sun. However, Bisi et al. (2010) found that a dense compression region at the leading edge of a fast stream interacting with the trailing edge of a slow stream, such as in a stream interaction region, can also cause enhanced scintillation in IPS observations. Given that these observations were taken close to solar minimum, when CMEs are relatively rare, but coronal holes closer to the equator are not (Gopalswamy, 2022), this alternative explanation is also worth considering. Given a radial velocity, it is possible to find an estimated time of launch off of the Sun, assuming no acceleration is experienced by the structure. IPS is measured along a line-of-sight where it's highest sensitivity is at the closest approach to the Sun, the piercepoint. As we are observing 30\({}^{\circ}\) away from the Sun, we can assume that the piercepoint, and therefore, the centre of the observation is at 107 solar radii. This can be assumed as the distance travelled by the observed heliospheric structure. Using both the plane-of-sky velocity and the distance travelled, it is estimated that the solar event that caused the structure would have been launched around 2019-AUG-02. In reality, solar wind events will accelerate or decelerate, therefore a reasonable launch window is from 2019-AUG-01 to 2019-AUG-03. Along with the position angle for both arcs, mentioned in Section 3.1, this information can be used to search through solar event catalogues as to find a possible progenitor on the solar surface. Various solar observatories, e.g. SOHO/LASCO (Brueckner et al., 1995) and STEREO/SECCHI (Howard et al., 2008), catalogue major solar activity, specifically CMEs (and ICMEs). Alongside these catalogues there are additional catalogues of CMEs using alternative detection methods, some with computer generated catalogues, others with particular sorting techniques. A search of seven separate CME/ICME and solar event catalogues (listed in Section 6) was completed, with a known position angle, and an estimated velocity and launch time for this observed structure. No plausible match was found with all three criteria met. Even with less strict searches, such as excluding final velocity, there were still no plausible CME event filed in any of the catalogues searched. Figure 3: g-maps of MWA IPS observations in heliocentric coordinates separated by 1.5 hours in time, with g-level depicting the level of scintillation enhancement experienced by an individual source indicated with their colour. Left: Earlier observation taken at 2019-AUG-04 4:56 UTC with a pointing direction of SE. Right: Later observation taken at 2019-AUG-04 6:32 UTC with a pointing direction of E. In both observations, three regions of enhanced scintillation can be identified; Arc A, Arc B, and a large shapeless mass on the very edge of the field. The dotted lines pass through the centre of each arc (the arc’s position angle) as seen in the earlier observation. These dotted lines remain identical in the later observation, to act as a reference point in showing the radial movement of the feature away from the Sun. The dashed line in both figures is identical, and is identified as the leading edge of Arc A (as seen in the earlier observation). This shows further the radial movement of the structures away from the Sun. 
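The plane-of-sky speed and the launch-time estimate above follow from simple arithmetic; the sketch below restates the quoted numbers (0.66\({}^{\rm o}\,\)hr\({}^{-1}\), a screen distance of 1 AU, a piercepoint at \(\sim\)107 solar radii, and no acceleration) and is not an independent measurement.

```python
import numpy as np
from datetime import datetime, timedelta

AU_KM = 1.495978707e8      # astronomical unit in km
RSUN_KM = 6.957e5          # solar radius in km

omega_deg_per_hr = 0.66    # measured plane-of-sky angular velocity
v_km_s = np.radians(omega_deg_per_hr) * AU_KM / 3600.0
print(f"plane-of-sky speed ~ {v_km_s:.0f} km/s")         # ~480 km/s

# Constant-speed travel time from the Sun to the piercepoint at ~107 solar radii.
travel_s = 107 * RSUN_KM / v_km_s
t_obs = datetime(2019, 8, 4, 5, 0)                        # time of the earlier observation
print("estimated launch:", t_obs - timedelta(seconds=travel_s))  # ~2019-AUG-02
```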
### Coronagraph and EUVI Images The catalogues that were searched are all created using the white-light coronagraph images taken by either SOHO/LASCO or STEREO/SECCHI, therefore we also examined all white light coronagraph difference images made by SOHO/LASCO C2 and C3 detectors over the period of estimated launch, to conclude that there was no obvious activity caused by a CME or solar flare event. However, during the estimated time of launch of the observed structure, STEREO-A was facing the eastern limb of the Sun (see Figure 1 for the relative positions). In the EUVI images made by STEREO-A during the estimated launch time, there are two coronal holes that are visible. The first is an equatorial coronal hole, which created a high-speed stream that later impacted STEREO-A (STEREO-A, 2019), while the second is a low-latitude coronal hole, matching the position angle of our observed feature. The high-speed stream from the equatorial coronal hole reached a peak solar wind speed of \(460\,\mathrm{kms}^{-1}\). Although from a different coronal hole, this supports the hypothesis that the structure that we observed could be a stream interaction region (SIR). As noted above such events have been detected with IPS techniques in the past (Breen et al., 1998; Bisi et al., 2010). Figure 4: Top Row: Original g-maps for both observations in new radial heliocentric coordinate system, \(\epsilon\) and \(\phi\), where \(\epsilon\) is degrees radially away from the Sun, and \(\phi\) is the position angle. Both Arcs A and B are labelled. Bottom Row: Smooth g-map for both observations with a pixel size of \(0.2^{\circ}\times 0.1^{\circ}\). Arcs A and B are in the same location in the Bottom Row as in the Top Row. The black box on the reference observation (right) was drawn to define the enhanced scintillation area of Arc A. The black box on the target observation (left) is of the same size and shape, but has been shifted in down in \(\epsilon\). ## 4 Discussion With a well calculated estimation of the plane-of-sky velocity and a positional analysis completed, it is clear that the observed structures have solar origins. Although some preliminary work in deciphering the exact solar origins of this observed structure has been done, by looking at both white-light coronagraph images as well as STEREO EUVI images, the exact nature of the structures remains unclear. Whether this structure originates from a stream interaction region, or possibly a small, undetected CME impacting a SIR, or another solar event, can not be differentiated at this time. ### Implications for Heliospheric Monitoring IPS observations taken by the MWA can provide a unique viewpoint of the heliosphere that many other solar probes and IPS stations are not able to provide. Using our MWA IPS observations, we were able to detect a heliospheric structure that would have otherwise gone undetected. Braga et al. (2022) state that the region between the solar corona and 1AU is not probed to the fullest extent. Coronagraph imaging has a limited field of view, with the majority of current instruments aimed relatively close to the Sun, where the largest fields of view reach only about 8 degrees (\(\sim\)32 solar radii) away from the Sun. As shown, the MWA is able to probe much further out into interplanetary space, monitoring how space weather events might evolve as they move with an unprecedented density of detected scintillating sources. 
With the MWA's ability to sample any region of space surrounding the Sun, it can remotely sense the solar wind on the Eastern limb of the Sun. Activity in this region originated on parts of the solar surface that have not been viewable from the Sun-Earth line for upwards of 13 days, so data for this part of the heliosphere can be particularly scarce. This lack of information can lead to uncertainties in models, where magnetic field coverage is limited in certain areas of the heliosphere (Jin et al., 2022). Our ability to infer the angular velocity with MWA IPS on the sky provides independent information on the velocity of detected structures. It is generalised enough that an angular velocity can be found for any structure in IPS data, whether it be a stream interaction region or a CME. Much of the work carried out by the IPS space weather community involves measuring and tracking the radial speed of CMEs and ICMEs, and as discussed by Iju et al. (2013) and Iwai et al. (2021), it is the use of multiple IPS stations and a variety of techniques that can give the best interpretation of the solar wind. Our work is very complementary to multi-station IPS undertaken by ISEE (Tokumaru, 2013) and single-station power spectrum fitting (Chang et al., 2019) with LOFAR. ### Future Work As previously stated in Section 4, the nature of the observed structures as well as their exact solar origins are unclear. The use of MHD or full-scale solar wind simulations may be useful in aiding the interpretation of the origins and evolution of this event. However, such modelling is beyond the scope of this initial discovery publication, and we defer this more in-depth simulation analysis to a future publication. Figure 5: Sum of the differences in g-level squared between the boxes in the target and reference observations. This test was completed using the contributions of all sources with a defined g-level in the reference observation's field. Since the completion of this work, we now have IPS data covering 20 months between April 2020 and March 2023, with the observing periods being 2020-APR-13 to 2021-JAN-28, 2022-JUN-15 to 2022-OCT-20, and 2022-OCT-29 to 2023-MAR-20. These observations were taken in survey mode as described in Section 2.1 (similar target fields to Fig. 1), and we intend to analyse a significant subset of these data in order to continue the search for interesting heliospheric activity. This initial work relied on visual inspection of all the processed g-maps as a search for any interesting observations or features. Since it was still unknown whether anything of interest would be present in the data, this visual inspection method sufficed. For future work with MWA IPS data, we plan on implementing a systematic process that would flag possible candidate observations for further study. The Australian SKA Pathfinder (ASKAP, Hotan et al., 2021) is a higher frequency radio telescope dish array which is co-located with the MWA. As shown by Chhetri et al. (2022), IPS measurements can be made using a similar scheme (assuming the same number of pointings and observing time as in previous sections) probing \(5-20^{\circ}\) from the Sun. This would allow us to make almost continuous observations from \(\sim\)20 to over 100 solar radii. Where ASKAP reaches its limits, the MWA takes over. In the future there is the possibility of doing triggered MWA observations of a known CME. 
As the MWA is probing far into the heliosphere, even a very fast CME will take a number of hours to reach the MWA's FOV, this gives us enough time to schedule observations. As long as the Sun is above the horizon, we are able to take survey mode observations over the full day (\(\sim\)8 hour period will result in \(\sim\)48 observations), probing in particular the location of the CME. The CME would take several hours to leave the MWA's FOV, meaning there is a high likelihood of having more than two observations of the structure. With more observations we can measure the velocity with more precision, and/or estimate any acceleration. ### Conclusions With two interplanetary scintillation observations separated by 1.5 hours, taken by the Murchison Widefield Array during mid-2019 close to sunspot minimum, we observed a moving heliospheric structure using the high density of IPS sources in the field-of-view (\(\sim\)700 detected sources in \(900\,\mathrm{deg}^{2}\)). We observe g-levels greater than 1.5, implying highly enhanced levels of scintillation caused by increased density within the solar wind. As two individual observations were made of the same structure, a radial plane-of-sky velocity was able to be inferred of \(480\pm 106\,\mathrm{kms}^{-1}\). This provides an excellent demonstration of the benefits of the MWA's large field of view which allows for simultaneous observations of a large number of compact sources and their IPS characteristics. After comparisons with seven separate CME and ICME catalogues, as well as white light coronagraph difference images, we conclude that this heliospheric structure was not associated with a Coronal Mass Ejection. With the link of stream interaction regions (SIRs) having a solar cycle dependency during the declining phase, and images from STEREO-A of a coronal hole on the Sun at low-latitudes corresponding to the position angle of our observed heliospheric structure, we hypothesise that our structure is an IPS observation of a SIR, in what is considered far interplanetary space. The MWA is able to probe the interplanetary space where current measurements are sparse, especially regions far from the corona out to 1AU. With a large density of IPS sources per observation, compared to other current IPS stations, the MWA has a unique capability of providing important information, such as the structural evolution, of the solar wind over a large region, that is unable to be obtained at high cadence from other techniques and instruments. ## 5 Coordinate Definitions ### Acronyms **Helioprojective Radial Coordinates**: To describe our observations used in this work we use a helioprojective coordinate system centred on the observer (the Earth). Any observing direction can be parameterised by \(\epsilon\) and \(\phi\), where \(\epsilon\) is the elongation from the Sun, while \(\phi\) is the position angle measured from the Sun's North pole (projected into the plane of sky of the observer) through East. **HCC - Heliocentric Cartesian Coordinates**: A coordinate system in the heliocentric system which is observer-based. The origin is the center of the Sun. The Z-axis is aligned with the Sun-observer line. The Y-axis is aligned with the component of the vector to the Sun's north pole that is perpendicular to the Z-axis. **HPC - Helioprojective Cartesian Coordinates**: A coordinate frame which is observer-based. The origin is location of the observer (the Earth). 
\(\theta_{x}\) is the angle relative to the plane containing the Sun-Earth line and the Sun's rotation axis. \(\theta_{y}\) is the angle relative to the Sun's equatorial plane. This coordinate system and the earlier helioprojective radial coordinate system are related as follows: \(\theta_{x}=\epsilon\sin\phi\), and \(\theta_{y}=\epsilon\cos\phi\). ## 6 Open Research MWA data is available from the MWA All-Sky Virtual Observatory (MWA ASVO, 2023), and for this work was accessed via giant-squid (Null et al., 2021), which is an alternative MWA ASVO client. For access to the data stored in this archive, registration is required. At the time of writing, the observations used in this paper are public, and can be identified by their GPS start times (example g-map: 1247722256, detected structure: 1248929800 and 1248935560) which serve as unique identifiers of these observations within the MWA archive. All the IPS observations described in Morgan et al. (2022) are also archived, under project code D0011. Coronal Mass Ejection public catalogues that were queried for this work are as follows: SOHO LASCO CME Catalog (Gopalswamy et al., 2009), STEREO COR1 CME Catalog (Xie, 2019), CACTus LASCO C2/C3 catalogue and COR2 catalogue (Robbrecht & Berghmans, 2004; Robbrecht et al., 2009), SEEDS LASCO C2 catalogue (Zhang & Dhakal, 2020a), SEEDS SECCHI COR2 catalogue (Zhang & Dhakal, 2020b), and WIND ICME Catalogue (Nieves-Chinchilla et al., 2018). This research used version 4.0.2 (Mumford et al., 2022) of the SunPy open source software package (The SunPy Community et al., 2020) for coordinate conversions. This scientific work makes use of Inyarrimanha Ilgari Bundara, the Murchison Radio-astronomy Observatory operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments. A.W was supported by an Australian Government Research Training Program (RTP) Stipend and RTP Fee-Offset Scholarship.
2309.07635
Decay estimates for one Aharonov-Bohm solenoid in a uniform magnetic field I: Schrödinger equation
This is the first of a series of papers in which we investigate the decay estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform magnetic field. In this first paper, we prove local-in-time dispersive estimates and Strichartz estimates for the Schr\"odinger equation with one Aharonov-Bohm solenoid in a uniform magnetic field. The key ingredient is the construction of the Schr\"odinger propagator; we provide two methods to construct it. The first one combines the strategies of \cite{FFFP1} and \cite{GYZZ22, FZZ22}, and the second one is based on the Schulman-Sunada formula in the spirit of \cite{stov, stov1}, in which the heat kernel has been studied. In future papers, we will continue investigating this quantum model for the wave equation with one or multiple Aharonov-Bohm solenoids in a uniform magnetic field.
Haoran Wang, Fang Zhang, Junyong Zhang
2023-09-14T12:00:40Z
http://arxiv.org/abs/2309.07635v1
# Decay estimates for one Aharonov-Bohm solenoid in a uniform magnetic field I: Schrodinger equation ###### Abstract. This is the first of a series of papers in which we investigate the decay estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform magnetic field. In this first paper, we prove local-in-time dispersive estimates and Strichartz estimates for the Schrodinger equation with one Aharonov-Bohm solenoid in a uniform magnetic field. The key ingredient is the construction of the Schrodinger propagator; we provide two methods to construct it. The first one combines the strategies of [17] and [19, 18], and the second one is based on the Schulman-Sunada formula in the spirit of [26, 27], in which the heat kernel has been studied. In future papers, we will continue investigating this quantum model for the wave equation with one or multiple Aharonov-Bohm solenoids in a uniform magnetic field. ## 1. Introduction Let us consider the electromagnetic Hamiltonian \[H_{A,V}=-(\nabla+iA(x))^{2}+V(x),\] where the electric scalar potential \(V:\mathbb{R}^{n}\to\mathbb{R}\) and the magnetic vector potential \[A(x)=(A^{1}(x),\ldots,A^{n}(x)):\,\mathbb{R}^{n}\to\mathbb{R}^{n}\] satisfies the Coulomb gauge condition \[\operatorname{div}A=0. \tag{1.1}\] In three dimensions, the magnetic vector potential \(A\) produces a magnetic field \(B\), which is given by \[B=\operatorname{curl}(A)=\nabla\times A.\] In general dimensions \(n\geq 2\), \(B\) should be viewed as the matrix-valued field \(B:\mathbb{R}^{n}\to\mathcal{M}_{n\times n}(\mathbb{R})\) given by \[B:=DA-DA^{t},\quad B_{ij}=\frac{\partial A^{i}}{\partial x_{j}}-\frac{ \partial A^{j}}{\partial x_{i}}. \tag{1.2}\] The Schrodinger operators with electromagnetic potentials have been extensively studied from the aspects of spectral and scattering theory; we refer to Avron-Herbst-Simon [3, 4, 5] and Reed-Simon [23], in which many important physical potentials (e.g. the constant magnetic field and the Coulomb electric potential) are discussed. The purpose of our program here is to study how the electric or magnetic potentials affect the short-time or long-time behavior of the solutions for dispersive equations (e.g. the Schrodinger, wave and Klein-Gordon equations). 
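As a quick sanity check of the Coulomb gauge condition (1.1) for the potentials considered in this paper, the following symbolic computation uses the standard Aharonov-Bohm and uniform-field expressions \(A_{B}(x)=\alpha\frac{(-x_{2},x_{1})}{|x|^{2}}\) and \(A_{\rm hmf}(x)=\frac{B_{0}}{2}(-x_{2},x_{1})\); these explicit forms are an assumption here (they are consistent with the expansion in Remark 1.3 below), since (1.4)-(1.5) are not reproduced above.

```python
import sympy as sp

x1, x2, alpha, B0 = sp.symbols("x1 x2 alpha B0", real=True)
r2 = x1**2 + x2**2

# Assumed potentials: Aharonov-Bohm solenoid plus uniform ("homogeneous") magnetic field.
A_B = (-alpha * x2 / r2, alpha * x1 / r2)
A_hmf = (-B0 * x2 / 2, B0 * x1 / 2)
A = (A_B[0] + A_hmf[0], A_B[1] + A_hmf[1])

div_A = sp.simplify(sp.diff(A[0], x1) + sp.diff(A[1], x2))            # Coulomb gauge (1.1)
B_field = sp.simplify(sp.diff(A_hmf[1], x1) - sp.diff(A_hmf[0], x2))  # B_{21} of (1.2) for the uniform part
print(div_A)    # 0
print(B_field)  # B0
```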
On the other hand, because of the Aharonov-Bohm effect, a feature of the Mehler kernel which is related to the Schrodinger kernel associated with the pure uniform magnetic field breaks down. 
Indeed, if \(\alpha=0\), the Schrodinger kernel can be written as \[e^{itH_{0,B_{0}}}(x,y)=\frac{B_{0}}{4\pi\sin(B_{0}t)}\exp\Big{\{}\frac{B_{0}}{4i} \big{(}\cot(B_{0}t)|x-y|^{2}-2x\wedge y\big{)}\Big{\}},\] furthermore, one can write \[e^{itH_{0,B_{0}}}(x,y)=\frac{B_{0}}{4\pi\sin(B_{0}t)} \exp\Big{\{}\frac{B_{0}}{4i}\cot(B_{0}t)\big{(}|x|^{2}+|y|^{2}\big{)} \Big{\}}\] \[\times\exp\Big{\{}i\frac{B_{0}y\cdot R(B_{0}t)x}{2\sin(B_{0}t)} \Big{\}},\] where \(R(\theta)\) is the usual \(2\times 2\) rotation matrix given by \[R(\theta)=\left(\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right).\] Therefore, see [20, Theorem 2], one can prove the Strichartz estimates \[\|e^{itH_{0,B_{0}}}f\|_{L^{q}_{t}((0,\frac{\pi}{B_{0}});L^{r}_{x}(\mathbb{R}^ {2})}=(4\pi)^{1-\frac{4}{q}}\|e^{it\Delta}f\|_{L^{q}_{t}(\mathbb{R};L^{r}_{x}( \mathbb{R}^{2})}\leq C\|f\|_{L^{2}(\mathbb{R}^{2})},\] where \((q,r)\in\Lambda_{S}\) defined in (1.9) below. More precisely, we aim to study the dispersive behavior of the magnetic Schrodinger equation \[\begin{cases}i\partial_{t}u(t,x)-H_{\alpha,B_{0}}u(t,x)=0,\\ u(0,x)=u_{0}(x),\end{cases} \tag{1.6}\] where \(H_{\alpha,B_{0}}\) is given in (1.3) with the perturbation of potentials \(A_{B}(x)\) and \(A_{\rm hmf}(x)\). Now we state our main results. **Theorem 1.1**.: _Let \(H_{\alpha,B_{0}}\) be in (1.3) with the potentials being (1.4)-(1.5) and let \(u(t,x)\) be the solution of (1.6). Then there exists a constant \(C>0\) such that the dispersive estimate_ \[\|u(t,x)\|_{L^{\infty}(\mathbb{R}^{2})}\leq C|\sin(tB_{0})|^{-1}\|u_{0}\|_{L^{1 }(\mathbb{R}^{2})},\quad t\neq\frac{k\pi}{B_{0}},\,k\in\mathbb{Z}, \tag{1.7}\] _and the Strichartz estimate holds for_ \[\|u(t,x)\|_{L^{q}([0,T];L^{p}(\mathbb{R}^{2}))}\leq C\|u_{0}\|_{L^{2}(\mathbb{ R}^{2})}, \tag{1.8}\] _where \(T\in(0,\frac{\pi}{2B_{0}})\) and \((q,p)\in\Lambda_{S}\) with_ \[\Lambda_{S}:=\Bigg{\{}(q,p)\in[2,+\infty]\times[2,+\infty):\frac{2}{q}=2\Big{(} \frac{1}{2}-\frac{1}{p}\Big{)}\Bigg{\}}. \tag{1.9}\] **Remark 1.2**.: The decay estimate (1.7) is the periodic with period \(\pi/B_{0}\). The endpoint of the time interval \(T\) in the Strichartz estimates (1.8) depends on the coefficient \(B_{0}\) of unbounded potential. Actually, if the unbounded potentials disappear, i.e. \(B_{0}=0\), one can take \(T\) to be \(+\infty\) safely, that is, the global-in-time Strichartz estimates, which is corresponding to the Laplacian with the Aharonov-Bohm potential (see Theorem 1.3 of [17]). However, in the present paper, as mentioned above, we only obtain local-in-time Strichartz estimates associated with the operator (1.3) due to the unbounded potentials caused trapped well. **Remark 1.3**.: Let \[A(x)=A_{B}(x)+A_{\rm hmf}(x),\] one can verify that \(\operatorname{div}A=0\) satisfies (1.1). Then, one observe that \[H_{\alpha,B_{0}} =-(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))^{2}\] \[=-\Delta+\big{(}\frac{B_{0}}{2}\big{)}^{2}|x|^{2}+\frac{\alpha^{2 }}{|x|^{2}}+iB_{0}(-x_{2},x_{1})\cdot\nabla+i\frac{2\alpha}{|x|^{2}}(-x_{2},x _{1})\cdot\nabla+\alpha B_{0}.\] One will see that the operator is perturbed by the inverse-square potential and harmonic oscillator. This phenomenon is natural because the unbounded potential cause a trapped well, the energy cannot be dispersive for long time. This is closely relate to the models with pure harmonic oscillators, i.e. 
\(H_{0,V}=-\Delta+|x|^{2}\), in which Koch and Tataru [22] proved that the decay estimate is periodic with period \(\pi\): \[\|e^{itH_{0,V}}\|_{L^{1}(\mathbb{R}^{n})\to L^{\infty}(\mathbb{R}^{n})}\leq C |\sin t|^{-\frac{n}{2}}.\] The paper is organized as follows. In Section 2, in a preliminary step, we recall the self-adjoint extension of the operator \(H_{\alpha,B_{0}}\), and study the spectrum of \(H_{\alpha,B_{0}}^{F}\) (which is the Friedrichs extension of \(H_{\alpha,B_{0}}\)). In Section 3, we construct the Schrodinger propagator by combining the strategies of [17] and [19, 18]. In Section 4, we construct the Schrodinger propagator by using another method based on the Schulman-Sunada formula. Finally, in Section 5, we prove Theorem 1.1 by using the representation of the Schrodinger propagator constructed in the previous sections. **Acknowledgments:** The authors thank L. Fanelli and P. St'ovicek for helpful discussions. This work is supported by National Natural Science Foundation of China (12171031, 11901041, 11831004). ## 2. Preliminaries In this section, we first recall the self-adjoint extension of the operator \(H_{\alpha,B_{0}}\), the Friedrichs extension, and then we study the spectrum of \(H_{\alpha,B_{0}}^{F}\) (which is the Friedrichs extension of \(H_{\alpha,B_{0}}\)). ### Quadratic form and the self-adjoint extension Define the space \(\mathcal{H}_{\alpha,B_{0}}^{1}(\mathbb{R}^{2})\) as the completion of \(\mathcal{C}_{c}^{\infty}(\mathbb{R}^{2}\setminus\{0\};\mathbb{C})\) with respect to the norm \[\|f\|_{\mathcal{H}_{\alpha,B_{0}}^{1}(\mathbb{R}^{2})}=\Big{(}\int_{\mathbb{R }^{2}}|\nabla_{\alpha,B_{0}}f(x)|^{2}dx\Big{)}^{\frac{1}{2}}\] where \[\nabla_{\alpha,B_{0}}f(x)=\nabla f+i(A_{B}+A_{\rm hmf})f.\] The quadratic form \(Q_{\alpha,B_{0}}\) associated with \(H_{\alpha,B_{0}}\) is defined by \[Q_{\alpha,B_{0}}:\qquad\mathcal{H}_{\alpha,B_{0}}^{1}\to \mathbb{R}\] \[Q_{\alpha,B_{0}}(f)=\int_{\mathbb{R}^{2}}|\nabla_{\alpha,B_{0}}f (x)|^{2}dx.\] The quadratic form \(Q_{\alpha,B_{0}}\) is positive definite, which implies that the operator \(H_{\alpha,B_{0}}\) is symmetric and semi-bounded from below, and hence admits a self-adjoint extension (the Friedrichs extension) \(H^{F}_{\alpha,B_{0}}\) with the natural form domain \[\mathcal{D}=\Big{\{}f\in\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2}):H^{F}_{ \alpha,B_{0}}f\in L^{2}(\mathbb{R}^{2})\Big{\}}.\] Even though the operator \(H_{\alpha,B_{0}}\) has many other self-adjoint extensions (see [14]) by the von Neumann extension theory, in this whole paper we use the simplest Friedrichs extension and briefly write \(H_{\alpha,B_{0}}\) for its Friedrichs extension \(H^{F}_{A}\). ### The spectrum of the operator \(H_{\alpha,B_{0}}\) In this subsection, we modify the argument of [17] to obtain the eigenvalues and eigenfunctions of the Schrodinger operator \[H_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))^{2},\] where the magnetic vector potentials are as in (1.4) and (1.5). More precisely, we will prove the following. **Proposition 2.1** (The spectrum for \(H_{\alpha,B_{0}}\)).: _Let \(H_{\alpha,B_{0}}\) be the self-adjoint Schrodinger operator in (1.3)._ 
Then the eigenvalues of \(H_{\alpha,B_{0}}\) are discrete and are given by_ \[\lambda_{k,m}=(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0},\quad m,\,k\in\mathbb{Z}, \,m\geq 0, \tag{2.1}\] _and the (finite) multiplicity of \(\lambda_{k,m}\) is_ \[\#\Bigg{\{}j\in\mathbb{Z}:\frac{\lambda_{k,m}-(j+\alpha)B_{0}}{2B_{0}}-\frac{ |j+\alpha|+1}{2}\in\mathbb{N}\Bigg{\}}.\] _Furthermore, let \(\theta=\frac{x}{|x|}\), the corresponding eigenfunction is given by_ \[V_{k,m}(x)=|x|^{|k+\alpha|}e^{-\frac{B_{0}|x|^{2}}{4}}\,P_{k,m}\Bigg{(}\frac{ B_{0}|x|^{2}}{2}\Bigg{)}e^{ik\theta} \tag{2.2}\] _where \(P_{k,m}\) is the polynomial of degree \(m\) given by_ \[P_{k,m}(r)=\sum_{n=0}^{m}\frac{(-m)_{n}}{(1+|k+\alpha|)_{n}}\frac{r^{n}}{n!}.\] _with \((a)_{n}\) (\(a\in\mathbb{R}\)) denoting the Pochhammer's symbol_ \[(a)_{n}=\begin{cases}1,&n=0;\\ a(a+1)\cdots(a+n-1),&n=1,2,\cdots\end{cases}\] **Remark 2.2**.: One can verify that the orthogonality holds \[\int_{\mathbb{R}^{2}}V_{k_{1},m_{1}}(x)V_{k_{2},m_{2}}(x)\,dx=0,\quad\text{ if}\quad(k_{1},m_{1})\neq(k_{2},m_{2}).\] **Remark 2.3**.: Let \(L^{\alpha}_{m}(t)\) be the generalized Laguerre polynomials \[L^{\alpha}_{m}(t)=\sum_{n=0}^{m}(-1)^{n}\Bigg{(}\begin{array}{c}m+\alpha\\ m-n\end{array}\Bigg{)}\frac{t^{n}}{n!},\] and the well known orthogonality relation \[\int_{0}^{\infty}x^{\alpha}e^{-x}L^{\alpha}_{m}(x)L^{\alpha}_{n}(x)\,dx=\frac {\Gamma(n+\alpha+1)}{n!}\delta_{n,m},\] where \(\delta_{n,m}\) is the Kronecker delta. Let \(\tilde{r}=\frac{B_{0}|x|^{2}}{2}\) and \(\alpha_{k}=|k+\alpha|\), then \[P_{k,m}(\tilde{r})=\sum_{n=0}^{m}\frac{(-1)^{n}m(m-1)\cdots(m-(n-1))}{(\alpha_{ k}+1)(\alpha_{k}+2)\cdots(\alpha_{k}+n)}\frac{\tilde{r}^{n}}{n!}=\left(\begin{array} []{c}m+\alpha_{k}\\ m\end{array}\right)^{-1}L_{m}^{\alpha_{k}}(\tilde{r}). \tag{2.3}\] Therefore, \[\|V_{k,m}(x)\|_{L^{2}(\mathbb{R}^{2})}^{2}=\pi\Big{(}\frac{2}{B_{0}}\Big{)}^{ \alpha_{k}+1}\Gamma(1+\alpha_{k})\Bigg{(}\begin{array}{c}m+\alpha_{k}\\ m\end{array}\Bigg{)}^{-1}. \tag{2.4}\] **Remark 2.4**.: Recall the Poisson kernel formula for Laguerre polynomials [1, (6.2.25)]: for \(a,b,c,\alpha>0\) \[\begin{split}&\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+ \alpha+1)}L_{m}^{\alpha}(a)L_{m}^{\alpha}(b)\\ &=\frac{e^{\frac{\alpha_{k}c}{2}}}{(ab)^{\frac{\alpha}{2}}(1-e^{- c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{\alpha}\left(\frac{2 \sqrt{ab}e^{-\frac{c}{2}}}{1-e^{-c}}\right)\end{split} \tag{2.5}\] then this together with (2.3) gives \[\begin{split}&\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+ \alpha_{k}+1)}\Bigg{(}\begin{array}{c}m+\alpha_{k}\\ m\end{array}\Bigg{)}^{2}P_{k,m}(a)P_{k,m}(b)\\ &=\frac{e^{\frac{\alpha_{k}c}{2}}}{(ab)^{\frac{\alpha_{k}}{2}}(1- e^{-c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{\alpha_{k}}\left( \frac{2\sqrt{ab}e^{-\frac{c}{2}}}{1-e^{-c}}\right).\end{split} \tag{2.6}\] Proof.: Notice that the operator (1.3), in the polar coordinates \((r,\theta)\), has a nice representation \[H_{\alpha,B_{0}}=-\partial_{r}^{2}-\frac{1}{r}\partial_{r}+\frac{1}{r^{2}} \Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}.\] We want to solve the eigenfunction equation \[H_{\alpha,B_{0}}g(x)=\lambda g(x) \tag{2.7}\] in the domain 1 of \(H_{A,V}\). Define the projectors \(P_{k}\) onto the eigenspaces of the angular momentum as Footnote 1: Here we use the Friedrichs self-adjoint extension. 
\[P_{k}f(r,\theta)=\frac{1}{2\pi}\int_{0}^{2\pi}e^{ik(\theta-\theta^{\prime})}f( r,\theta^{\prime})d\theta^{\prime}:=f_{k}(r)e^{ik\theta},\quad k\in\mathbb{Z}\] then it clear that the operator \(H_{\alpha,B_{0}}\) commutes with the projectors \(P_{k}\). In the polar coordinates \((r,\theta)\), (2.7) implies that for every \(k\in\mathbb{Z}\), \[g_{k}^{\prime\prime}(r)+\frac{1}{r}g_{k}^{\prime}(r)-\frac{1}{r^{2}}\Big{(}k+ \alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}g_{k}(r)=-\lambda g_{k}(r), \tag{2.8}\] where \[g_{k}(r)=\frac{1}{2\pi}\int_{0}^{2\pi}e^{-ik\theta}g(r,\theta)d\theta.\] Let \[\phi_{k}(s)=\Big{(}\frac{2s}{B_{0}}\Big{)}^{-\frac{|k+\alpha|}{2}}e^{\frac{s}{2 }}g_{k}\Big{(}\sqrt{\frac{2s}{B_{0}}}\Big{)},\] then \(\phi_{k}\) satisfies \[s\phi_{k}^{\prime\prime}(s)+(1+|k+\alpha|-s)\phi_{k}^{\prime}(s)-\frac{1}{2} \Big{(}1+|k+\alpha|+k+\alpha-\frac{\lambda}{B_{0}}\Big{)}\phi_{k}(s)=0. \tag{2.9}\] **Lemma 2.5**.: _The Kummer Confluent Hypergeometric equation_ \[s\phi^{\prime\prime}(s)+(b-s)\phi^{\prime}(s)-a\phi(s)=0,\quad s>0,\] _has two linearly independent solutions given by the Kummer function (also called confluent hypergeometric function)_ \[M(a,b,s)=\sum_{n=0}^{\infty}\frac{(a)_{n}}{(b)_{n}}\frac{s^{n}}{n!},\quad b\neq 0,-1,-2,\cdots\] _where the Pochhammer's symbol_ \[(a)_{n}=\begin{cases}1,&n=0;\\ a(a+1)\cdots(a+n-1),&n=1,2,\cdots\end{cases}\] _and the Tricomi function (also called confluent hypergeometric function of the second kind)_ \[U(a,b,s)=\frac{\Gamma(1-b)}{\Gamma(a-b+1)}M(a,b,s)+\frac{\Gamma(b-1)}{\Gamma( a)}s^{1-b}M(a-b+1,2-b,s).\] By using Lemma 2.5, we have two linearly independent solutions of (2.9), hence two linearly independent solutions of (2.8) are given by \[g_{k}^{1}(\lambda;r) =r^{|\alpha+k|}M\Big{(}\beta(k,\lambda),\gamma(k),\frac{B_{0}r^{ 2}}{2}\Big{)}e^{-\frac{B_{0}r^{2}}{4}}\] \[g_{k}^{2}(\lambda;r) =r^{|\alpha+k|}U\Big{(}\beta(k,\lambda),\gamma(k),\frac{B_{0}r^{ 2}}{2}\Big{)}e^{-\frac{B_{0}r^{2}}{4}}\] with \[\beta(k,\lambda) =\frac{1}{2}\Big{(}1+k+\alpha+|k+\alpha|-\frac{\lambda}{B_{0}} \Big{)},\] \[\gamma(k) =1+|k+\alpha|.\] Therefore, the general solution of (2.9) is given by \[g_{k}(r)= A_{k}g_{k}^{1}(\lambda;r)+B_{k}g_{k}^{2}(\lambda;r) \tag{2.10}\] \[=r^{|\alpha+k|}e^{-\frac{B_{0}r^{2}}{4}}\Bigg{(}A_{k}M\Big{(} \beta(k,\lambda),\gamma(k),\frac{B_{0}r^{2}}{2}\Big{)}+B_{k}U\Big{(}\beta(k, \lambda),\gamma(k),\frac{B_{0}r^{2}}{2}\Big{)}\Bigg{)},\] where \(A_{k},B_{k}\) are two constants which are dependent on \(k\). **Lemma 2.6** ([6],Chap.13).: _The following properties hold:_ * _The two functions_ \(M(a,b,z)\) _and_ \(U(a,b,z)\) _are linearly dependent if and only if_ \(a\in-\mathbb{Z}_{+}\)_._ * \(M(a,b,z)\) _is an entire function of_ \(z\) _and it is regular at_ \(z=0\)_. However,_ \(U(a,b,z)\) _is singular at the origin provided that_ \(b>1\) _and_ \(a\notin-\mathbb{Z}_{+}\) _and it holds true that_ \[\lim_{z\to 0^{+}}z^{b-1}U(a,b,z)=\frac{\Gamma(b-1)}{\Gamma(a)}.\] (2.11) _If_ \(b\in(1,2)\)_, the asymptotic behavior of_ \(U(a,b,z)\) _as_ \(z\to 0^{+}\) _is_ \[U(a,b,z)=\frac{\Gamma(1-b)}{\Gamma(a-b+1)}+\frac{\Gamma(b-1)}{\Gamma(a)}z^{1-b} +O(z^{2-b}).\] * _If_ \(-a\notin\mathbb{N}\)_, the asymptotic behavior as_ \(z\to+\infty\) _holds true:_ \[M(a,b,z)=\frac{\Gamma(b)}{\Gamma(b-a)}(-z)^{-a}(1+O(z^{-1}))+\frac{\Gamma(b)}{ \Gamma(a)}e^{z}z^{a-b}(1+O(z^{-1})),\] (2.12) _and_ \[U(a,b,z)=z^{-a}(1+O(z^{-1})).\] Now we use this asymptotic lemma to conclude more detail information about the eigenvalues and eigenfunctions. 
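(As a numerical aside, not needed for the argument, the limits (2.11) and (2.12) in Lemma 2.6 can be checked with SciPy's implementations of the Kummer function \(M\) and the Tricomi function \(U\); the following is a minimal sketch with arbitrarily chosen parameters.)

```python
# Numerical sanity check of the asymptotics in Lemma 2.6 (illustrative values).
import numpy as np
from scipy.special import hyp1f1, hyperu, gamma

a, b = 0.7, 1.4  # b in (1, 2), and -a is not a non-negative integer

# (2.11): z^(b-1) * U(a, b, z) -> Gamma(b-1)/Gamma(a) as z -> 0+
for z in (1e-3, 1e-5, 1e-7):
    print(z, z ** (b - 1) * hyperu(a, b, z), gamma(b - 1) / gamma(a))

# (2.12): M(a, b, z) ~ Gamma(b)/Gamma(a) * e^z * z^(a-b) as z -> +infinity
for z in (20.0, 40.0, 60.0):
    print(z, hyp1f1(a, b, z) / (gamma(b) / gamma(a) * np.exp(z) * z ** (a - b)))
```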
Let \[m:=-\beta(k,\lambda)=-\frac{1}{2}\Big{(}1+k+\alpha+|k+\alpha|-\frac{\lambda}{B_{0}}\Big{)}.\] On one hand, from (2.12), if \(m\notin\mathbb{N}\), the function \[M\big{(}-m,\gamma(k),\tilde{r}\big{)}\sim e^{\tilde{r}}\tilde{r}^{-m-\gamma(k)},\quad\tilde{r}\to+\infty,\] is singular at \(+\infty\); while if \(m\in\mathbb{N}=\{0,1,2,\cdots\}\), then \(M(-m,\gamma(k),\tilde{r})\) is in fact a polynomial of degree \(m\) in \(\tilde{r}\), which we shall denote by \(P_{k,m}\), i.e., \[P_{k,m}(\tilde{r})=M(-m,1+|k+\alpha|,\tilde{r})=\sum_{n=0}^{m}\frac{(-m)_{n}}{(1+|k+\alpha|)_{n}}\frac{\tilde{r}^{n}}{n!}.\] On the other hand, note that \(\gamma(k)=1+|k+\alpha|>1\); from (2.11), it follows that \[U\big{(}\beta(k,\lambda),\gamma(k),\tilde{r}\big{)}\sim\tilde{r}^{-|k+\alpha|},\quad\tilde{r}\to 0+,\] with the implicit constant depending only on \(\lambda\) and \(k\). Hence, by letting \(\tilde{r}=\frac{B_{0}r^{2}}{2}\) and using (2.10) and Lemma 2.6, we have for fixed \(k\in\mathbb{Z}\) that \[g_{k}(r)\sim B_{k}r^{|\alpha+k|}e^{-\frac{B_{0}r^{2}}{4}}U\Big{(}\beta(k,\lambda),\gamma(k),\frac{B_{0}r^{2}}{2}\Big{)}\sim B_{k}r^{-|\alpha+k|},\quad\text{as}\quad r\to 0+,\] and \[g_{k}(r)\sim A_{k}r^{|\alpha+k|}e^{-\frac{B_{0}r^{2}}{4}}M\Big{(}\beta(k,\lambda),\gamma(k),\frac{B_{0}r^{2}}{2}\Big{)}\sim A_{k}e^{\frac{B_{0}r^{2}}{4}}r^{-1+k+\alpha-\frac{\lambda}{B_{0}}},\quad\text{as}\quad r\to\infty.\] We now conclude that one must have \(B_{k}\equiv 0\). Indeed, otherwise, since the eigenfunction \(g\in D(H_{\alpha,B_{0}})\), we have \[\int_{0}^{\infty}g_{k}^{2}(r)\frac{dr}{r}\leq\int_{\mathbb{R}^{2}}\frac{g^{2}(x)}{|x|^{2}}dx<\infty,\] which contradicts the behavior \(g_{k}(r)\sim B_{k}r^{-|\alpha+k|}\) as \(r\to 0+\), since the integral \(\int_{0}^{\infty}r^{-2|k+\alpha|-1}dr\) is divergent at \(0\) for all \(k\in\mathbb{Z}\). We next conclude that one must have \(A_{k}\equiv 0\) if \(m\notin\mathbb{N}\). Indeed, since the eigenfunction \(g\in D(H_{\alpha,B_{0}})\), we also have \[\int_{0}^{\infty}g_{k}^{2}(r)\,rdr\leq\int_{\mathbb{R}^{2}}g^{2}(x)dx<\infty.\] However, if \(A_{k}\neq 0\), since \(B_{0}>0\) we have \[\int_{0}^{\infty}g_{k}^{2}(r)\,rdr\gtrsim\int_{1}^{\infty}e^{\frac{B_{0}r^{2}}{4}}\,rdr,\] which is divergent. This is a contradiction. Therefore, we must have \[\mathbb{N}\ni m=-\frac{1}{2}\Big{(}1+k+\alpha+|k+\alpha|-\frac{\lambda}{B_{0}}\Big{)},\] and \[g_{k}(r)=r^{|k+\alpha|}e^{-\frac{B_{0}r^{2}}{4}}\,P_{k,m}\Big{(}\frac{B_{0}r^{2}}{2}\Big{)}.\] Therefore, we have proved that the function \[V_{k,m}(x)=|x|^{|k+\alpha|}e^{-\frac{B_{0}|x|^{2}}{4}}\,P_{k,m}\Big{(}\frac{B_{0}|x|^{2}}{2}\Big{)}e^{ik\theta}\] belongs to \(D(H_{\alpha,B_{0}})\) and is therefore an eigenfunction of the operator \(H_{\alpha,B_{0}}\). Thus, from \(-m=\frac{1}{2}\Big{(}1+k+\alpha+|k+\alpha|-\frac{\lambda}{B_{0}}\Big{)}\), we solve (2.7) and obtain the eigenvalues \(\lambda\) of \(H_{\alpha,B_{0}}\): \[\lambda_{k,m}=(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0},\quad k\in\mathbb{Z},\,m\in\mathbb{N}.\] ## 3. The Schrodinger propagator In this section, we construct the Schrodinger propagator by using the spectral properties established in Proposition 2.1, combining the strategies of [17] and [19, 18]. More precisely, we will prove the following result. **Proposition 3.1**.: _Let \(H_{\alpha,B_{0}}\) be the operator in (1.3) and suppose \(x=r_{1}(\cos\theta_{1},\sin\theta_{1})\) and \(y=r_{2}(\cos\theta_{2},\sin\theta_{2})\).
Let \(u(t,x)\) be the solution of the Schrodinger equation_ \[\begin{cases}\big{(}i\partial_{t}-H_{\alpha,B_{0}}\big{)}u(t,x)=0,\\ u(0,x)=f(x).\end{cases}\] _Then_ \[u(t,x)=e^{-itH_{\alpha,B_{0}}}f=\int_{\mathbb{R}^{2}}K_{S}(x,y)f(y)\,dy,\] _where \(t\neq\frac{k\pi}{B_{0}},\,k\in\mathbb{Z}\). Let \(\theta=\theta_{1}-\theta_{2}-tB_{0}\in\mathbb{R},\) there exists an integer \(j_{0}\) satisfying \(\theta+2j_{0}\pi\in[-\pi,\pi]\). Define_ \[\chi(\theta,j_{0})=\left\{\begin{array}{ll}1,&\mbox{if}\,\,\,|\theta+2j_{0 }\pi|<\pi\,\,\,;\\ e^{-i2\pi\alpha}+1,&\mbox{if}\,\,\,\theta+2j_{0}\pi=-\pi;\\ e^{i2\pi\alpha}+1,&\mbox{if}\,\,\,\theta+2j_{0}\pi=\pi.\end{array}\right.\] _Then the kernel of Schrodinger propagator \(e^{-itH_{\alpha,B_{0}}}\) has the representation_ \[K_{S}(x,y) =\frac{B_{0}e^{-itB\alpha}}{8\pi^{2}i\sin(tB_{0})}e^{\frac{iB_{0} (r_{2}^{2}+r_{2}^{2})}{4\tan(tB_{0})}} \tag{3.1}\] \[\times\Big{[}e^{\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\cos( \theta_{1}-\theta_{2}-tB_{0})}e^{-i\alpha(\theta_{1}-\theta_{2}-tB_{0}+2j_{0} \pi)}\chi(\theta,j_{0})\] \[\quad-\frac{\sin(\pi\alpha)}{\pi}\int_{\mathbb{R}}e^{-\frac{B_{0} r_{1}r_{2}}{2i\sin(tB_{0})}\cosh s}\frac{e^{-\alpha s}}{1+e^{-s+i(\theta_{1}- \theta_{2}-tB_{0})}}\,ds\Big{]}.\] Proof.: We construct the representation formula for the kernel of the Schrodinger flow \(e^{-itH_{\alpha,B_{0}}}\) by combining the argument of [17] and [18, 19]. Our starting point is the Proposition 2.1. Let \(\tilde{V}_{k,m}\) be the \(L^{2}\)-normalization of \(V_{k,m}\) in (2.2), then the eigenfunctions \(\left\{\tilde{V}_{k,m}\right\}_{k\in\mathbb{Z},m\in\mathbb{N}}\) form an orthonormal basis of \(L^{2}(\mathbb{R}^{2})\) corresponding to the eigenfunctions of \(H_{\alpha,B_{0}}\). We expand the initial data \(f(x)\in L^{2}\) as \[f(x)=\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}c_{k,m}\tilde{V}_{k,m}(x)\] where \[c_{k,m}=\int_{\mathbb{R}^{2}}f(x)\overline{\tilde{V}_{k,m}(x)}\,dx. \tag{3.2}\] The solution \(u(t,x)\) of (1.6) can be written as \[u(t,x)=\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}u_{k,m}(t)\tilde{V}_{k,m}(x), \tag{3.3}\] where \(u_{k,m}(t)\) satisfies the ODE \[\left\{\begin{array}{ll}iu^{\prime}_{k,m}(t)=\lambda_{k,m}u_{k,m}(t),\\ u_{k,m}(0)=c_{k,m},\quad k\in\mathbb{Z},\,m\in\mathbb{N}.\end{array}\right.\] Thus we obtain \(u_{k,m}(t)=c_{k,m}e^{-it\lambda_{k,m}}\). 
Therefore the solution (3.3) becomes \[u(t,x)=\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}c_{k,m}e^{-it\lambda_{k,m}} \tilde{V}_{k,m}(x).\] Plugging (3.2) into the above expression yields \[u(t,x)=\sum_{k\in\mathbb{Z}\atop m\in\mathbb{N}}e^{-it\lambda_{k,m}}\left( \int_{\mathbb{R}^{2}}f(y)\overline{\tilde{V}_{k,m}(y)}dy\right)\tilde{V}_{k,m} (x).\] We write \(f\) in a harmonic spherical expansion \[f(y)=\sum_{k\in\mathbb{Z}}f_{k}(r_{2})e^{ik\theta_{2}},\] where \[f_{k}(r_{2})=\frac{1}{2\pi}\int_{0}^{2\pi}f(r_{2},\theta_{2})e^{-ik\theta_{2} }\,d\theta_{2},\quad r_{2}=|y|, \tag{3.4}\] we thus have \[u(t,x) =\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}e^{-it\lambda_{k,m}} \frac{V_{k,m}(x)}{\|\tilde{V}_{k,m}\|_{L^{2}}^{2}}\Bigg{(}\int_{0}^{\infty}f_{ k}(r_{2})e^{-\frac{B_{0}r_{2}^{2}}{4}}\,P_{k,m}\Big{(}\frac{B_{0}r_{2}^{2}}{2} \Big{)}r_{2}^{1+\alpha_{k}}\mathrm{d}r_{2}\Bigg{)}\] \[=\Big{(}\frac{B_{0}}{2\pi}\Big{)}\sum_{k\in\mathbb{Z}}e^{ik\theta _{1}}\frac{B_{0}^{\alpha_{k}}e^{-it\beta_{k}}}{2^{\alpha_{k}}\Gamma(1+\alpha_{ k})}\Bigg{[}\sum_{m=0}^{\infty}\left(\begin{array}{cc}m+\alpha_{k}\\ m\end{array}\right)\!\!e^{-2itmB_{0}}\] \[\times\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})(r_{1}r_{2})^{\alpha_ {k}}e^{-\frac{B_{0}(r_{2}^{2}+r_{2}^{2})}{4}}P_{k,m}\left(\frac{B_{0}r_{2}^{2 }}{2}\right)P_{k,m}\left(\frac{B_{0}r_{1}^{2}}{2}\right)r_{2}\mathrm{d}r_{2} \Bigg{)}\Bigg{]},\] where \(\alpha_{k}=|k+\alpha|\) and we use (2.4), (2.1), (2.2) and \[\lambda_{k,m} =(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0}\] \[:=2mB_{0}+\beta_{k}\] with \(\beta_{k}=(1+|k+\alpha|)B_{0}+(k+\alpha)B_{0}>0\). Notice that \(P_{k,m}\) can be expressed as (see e.g. [1, (6.2.15)]) \[P_{k,m}\left(\frac{r^{2}}{2}\right)=\frac{\Gamma(1+\alpha_{k})}{\Gamma(1+ \alpha_{k}+m)}e^{\frac{r^{2}}{2}}r^{-\alpha_{k}}2^{\frac{\alpha_{k}}{2}}\int_{ 0}^{\infty}e^{-s}s^{m+\frac{\alpha_{k}}{2}}J_{\alpha_{k}}(\sqrt{2s}r)ds,\] in terms of Bessel functions \(J_{\alpha_{k}}\) of order \(\alpha_{k}\). 
Hence, we have \[u(t,x)= \frac{B_{0}}{2\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta_{1}}\frac{B_{0 }^{\alpha_{k}}e^{-it\beta_{k}}}{2^{\alpha_{k}}\Gamma(1+\alpha_{k})}\Bigg{[} \sum_{m\in\mathbb{N}}\left(\begin{array}{c}m+\alpha_{k}\\ m\end{array}\right)e^{-2itmB_{0}}\] \[\times\Big{(}\frac{\Gamma(1+\alpha_{k})}{\Gamma(1+\alpha_{k}+m)} \Big{)}^{2}\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})(r_{1}r_{2})^{\alpha_{k}}e^{- \frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4}}e^{\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{2}} \Big{(}\frac{2}{B_{0}r_{2}^{2}}\Big{)}^{\frac{\alpha_{k}}{2}}\] \[\times\Big{(}\frac{2}{B_{0}r_{1}^{2}}\Big{)}^{\frac{\alpha_{k}}{2 }}\Big{(}\int_{0}^{\infty}\int_{0}^{\infty}e^{-s_{1}-s_{2}}(s_{1}s_{2})^{m+ \frac{\alpha_{k}}{2}}J_{\alpha_{k}}(\sqrt{2B_{0}s_{1}}r_{1})J_{\alpha_{k}}( \sqrt{2B_{0}s_{2}}r_{2})ds_{1}ds_{2}\Big{)}r_{2}dr_{2}\Bigg{)}\Bigg{]}\] \[= \frac{B_{0}}{2\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta_{1}}e^{-it \beta_{k}}\Gamma(1+\alpha_{k})\Bigg{[}\sum_{m\in\mathbb{N}}\left(\begin{array} []{c}m+\alpha_{k}\\ m\end{array}\right)\frac{e^{-2itmB_{0}}}{(\Gamma(1+\alpha_{k}+m))^{2}}\] \[\times\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})e^{\frac{B_{0}(r_{1}^ {2}+r_{2}^{2})}{4}}\Big{(}\int_{0}^{\infty}\int_{0}^{\infty}e^{-s_{1}-s_{2}}( s_{1}s_{2})^{m+\frac{\alpha_{k}}{2}}\] \[\times J_{\alpha_{k}}(\sqrt{2B_{0}s_{1}}r_{1})J_{\alpha_{k}}( \sqrt{2B_{0}s_{2}}r_{2})ds_{1}ds_{2}\Big{)}r_{2}dr_{2}\Bigg{)}\Bigg{]},\quad \big{(}\text{variable changes:}\,s_{1}\to s_{1}^{2},\,s_{2}\to s_{2}^{2}\big{)}\] \[= \frac{2B_{0}}{\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta_{1}}e^{-it \beta_{k}}\Bigg{[}\int_{0}^{\infty}f_{k}(r_{2})e^{\frac{B_{0}(r_{1}^{2}+r_{2}^ {2})}{4}}e^{i\alpha_{k}(tB_{0}+\frac{\pi}{2})}\] \[\times\Bigg{(}\int_{0}^{\infty}\int_{0}^{\infty}\frac{s_{1}s_{2}}{ e^{s_{1}^{2}+s_{2}^{2}}}\Bigg{(}\sum_{m\in\mathbb{N}}\frac{(-1)^{m}e^{-i(tB_{0}+ \frac{\pi}{2})(2m+\alpha_{k})}}{\Gamma(1+m)\Gamma(1+\alpha_{k}+m)}(s_{1}s_{2}) ^{2m+\alpha_{k}}\Bigg{)}\] \[\times J_{\alpha_{k}}\big{(}\sqrt{2B_{0}}s_{1}r_{1}\big{)}J_{ \alpha_{k}}\big{(}\sqrt{2B_{0}}s_{2}r_{2}\big{)}ds_{1}ds_{2}\Bigg{)}r_{2}dr_{2 }\Bigg{]}.\] Since (see e.g. 
[1, (4.5.2)]) \[\sum_{m=0}^{\infty}\frac{(-1)^{m}e^{-i(tB_{0}+\frac{\pi}{2})(2m+\alpha_{k})}}{ \Gamma(1+\alpha_{k}+m)\Gamma(1+m)}(s_{1}s_{2})^{2m+\alpha_{k}}=J_{\alpha_{k}}( 2s_{1}s_{2}e^{-i(tB_{0}+\frac{\pi}{2})}),\] then we have \[u(t,x)=\frac{2B_{0}}{\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta_{1}}e^{it(\alpha_{k} B_{0}-\beta_{k})+i\alpha_{k}\frac{\pi}{2}}\int_{0}^{\infty}e^{\frac{B_{0}(r_{1}^{2}+r_{2}^ {2})}{4}}f_{k}(r_{2})G_{k,t}(r_{1},r_{2})r_{2}dr_{2}\] with \[G_{k,t}(r_{1},r_{2})=\int_{0}^{\infty}\int_{0}^{\infty} \frac{s_{1}s_{2}}{e^{s_{1}^{2}+s_{2}^{2}}}J_{\alpha_{k}}(2s_{1}s_{2} e^{-i(tB_{0}+\frac{\pi}{2})})\] \[\times J_{\alpha_{k}}(\sqrt{2B_{0}}r_{1}s_{1})J_{\alpha_{k}}( \sqrt{2B_{0}}r_{2}s_{2})ds_{1}ds_{2}.\] Using formula (see [28, formula (1), P.395]) \[\int_{0}^{\infty} e^{-p^{2}t^{2}}J_{\nu}(at)J_{\nu}(bt)tdt=\frac{1}{2p^{2}}e^{- \frac{a^{2}+b^{2}}{4p^{2}}}I_{\nu}\Big{(}\frac{ab}{2p^{2}}\Big{)},\] \[\operatorname{Re}\nu>-1,\quad|\arg p|<\frac{\pi}{4},\quad I_{\nu} (r)=e^{-\frac{1}{2}\nu\pi i}J_{\nu}(re^{i\frac{\pi}{2}}), \tag{3.5}\] with \(t=s_{2},p=1,a=\sqrt{2B_{0}}r_{2},b=2s_{1}e^{-i(tB_{0}+\frac{\pi}{2})},\nu= \alpha_{k}\), we get \[\int_{0}^{\infty}e^{-s_{2}^{2}}J_{\alpha_{k}}\big{(}\sqrt{2B_{0}} r_{2}s_{2}\big{)} J_{\alpha_{k}}\Big{(}2s_{1}s_{2}e^{-i(tB_{0}+\frac{\pi}{2})}\Big{)}s_{2}ds_{2}\] \[=\frac{1}{2}e^{-\frac{B_{0}\tau_{2}^{2}+2s_{1}^{2}e^{-i(2tB_{0}+ \pi)}}{2}}I_{\alpha_{k}}\big{(}\sqrt{2B_{0}}r_{2}s_{1}e^{-i(tB_{0}+\frac{\pi}{ 2})}\big{)},\] where \(I_{\alpha_{k}}\) denotes the modified Bessel function of order \(\alpha_{k}\). Hence \[G_{k,t}(r_{1},r_{2}) =\frac{1}{2}\int_{0}^{\infty}e^{-s_{1}^{2}}J_{\alpha_{k}}(\sqrt{2 B_{0}}r_{1}s_{1})e^{-\frac{B_{0}\tau_{2}^{2}+2s_{1}^{2}e^{-i(2tB_{0}+\pi)}}{2}}I_{ \alpha_{k}}(\sqrt{2B_{0}}r_{2}s_{1}e^{-i(tB_{0}+\frac{\pi}{2})})s_{1}ds_{1}\] \[=\frac{1}{4B_{0}}\int_{0}^{\infty}e^{-\frac{s_{1}^{2}}{2B_{0}}}J_ {\alpha_{k}}(r_{1}s_{1})e^{-\frac{B_{0}\tau_{2}^{2}+\frac{s_{1}^{2}}{B_{0}}-i( 2tB_{0}+\pi)}{2}}I_{\alpha_{k}}(r_{2}s_{1}e^{-i(tB_{0}+\frac{\pi}{2})})s_{1}ds _{1}\] \[=\frac{1}{4B_{0}}e^{-i\alpha_{k}\frac{\pi}{2}}e^{-\frac{B_{0} \tau_{2}^{2}}{2}}\int_{0}^{\infty}e^{-\frac{s_{1}^{2}}{2B_{0}}(1+e^{-i(2tB_{0} +\pi)})}J_{\alpha_{k}}(r_{1}s_{1})J_{\alpha_{k}}(r_{2}s_{1}e^{-itB_{0}})s_{1}ds _{1}\] \[=\frac{1}{4B_{0}}e^{-i\alpha_{k}\frac{\pi}{2}}e^{-\frac{B_{0} \tau_{2}^{2}}{2}}\frac{B_{0}}{1+e^{-i(2tB_{0}+\pi)}}e^{-\frac{B_{0}\tau_{1}^{2} +B_{0}\tau_{2}^{2}e^{-2itB_{0}}}{2(1+e^{-i(2tB_{0}+\pi)})}}I_{\alpha_{k}}(\frac {B_{0}r_{1}r_{2}e^{-itB_{0}}}{1+e^{-i(2tB_{0}+\pi)}})\] \[=\frac{1}{4(1+e^{-i(2tB_{0}+\pi)})}e^{-i\alpha_{k}\frac{\pi}{2}}e^ {-\frac{B_{0}(\tau_{2}^{2}+\tau_{2}^{2})}{2(1+e^{-i(2tB_{0}+\pi)})}}I_{\alpha_ {k}}\Big{(}\frac{B_{0}r_{1}r_{2}e^{-itB_{0}}}{1+e^{-i(2tB_{0}+\pi)}}\Big{)}\] where we have used the fact \(I_{\nu}(r)=e^{-i\frac{\pi}{2}\nu}J_{\nu}(re^{i\frac{\pi}{2}})\) in the third equality and (3.5) with \(t=s_{1},p^{2}=\frac{1+e^{-i(2tB_{0}+\pi)}}{2B_{0}},a=r_{1},b=r_{2}e^{-itB_{0}},\nu =\alpha_{k}\) in the last equality. 
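(As an aside, for real parameters the integral formula (3.5) used above is straightforward to verify numerically; the following minimal sketch, with arbitrarily chosen values, is a sanity check only and is not needed for the argument.)

```python
# Numerical check of the Weber-type integral (3.5) for real p, a, b and nu > -1.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, iv

p, a, b, nu = 1.0, 1.3, 0.7, 0.5  # arbitrary illustrative values

lhs, _ = quad(lambda t: np.exp(-(p * t) ** 2) * jv(nu, a * t) * jv(nu, b * t) * t,
              0.0, np.inf)
rhs = np.exp(-(a ** 2 + b ** 2) / (4 * p ** 2)) * iv(nu, a * b / (2 * p ** 2)) / (2 * p ** 2)
print(lhs, rhs)  # the two values agree to high accuracy
```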
Finally, we use the simple fact (for \(tB_{0}\neq k\pi,k\in\mathbb{Z}\)) \[\frac{1}{1+e^{-i(2tB_{0}+\pi)}}=\frac{e^{itB_{0}}}{2i\sin tB_{0}}\] to obtain \[u(t,x)= \frac{2B_{0}}{\pi}\frac{e^{itB_{0}}}{8i\sin tB_{0}}\sum_{k\in \mathbb{Z}}e^{ik\theta_{1}}e^{it(\alpha_{k}B_{0}-\beta_{k})}\] \[\times\Bigg{[}\int_{0}^{\infty}e^{\frac{B_{0}(\tau_{2}^{2}+\tau_{ 2}^{2})}{4}}f_{k}(r_{2})e^{-\frac{B_{0}(\tau_{2}^{2}+\tau_{1}^{2})}{4}\cdot \frac{e^{itB_{0}}}{i\sin tB_{0}}}I_{\alpha_{k}}\Bigg{(}\frac{B_{0}r_{1}r_{2}}{2 i\sin(tB_{0})}\Bigg{)}r_{2}dr_{2}\Bigg{]}\] \[= \frac{B_{0}e^{-itB_{0}\alpha}}{4\pi i\sin(tB_{0})}\sum_{k\in \mathbb{Z}}e^{ik\theta_{1}}e^{-itB_{0}k}\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})e^ {-\frac{B_{0}(\tau_{2}^{2}+\tau_{2}^{2})\cos(tB_{0})}{4i\sin(tB_{0})}}I_{\alpha_ {k}}\Bigg{(}\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\Bigg{)}r_{2}dr_{2}\Bigg{)},\] where we use \(\beta_{k}=(k+\alpha)B_{0}+(1+\alpha_{k})B_{0}\) in the last line. Recalling \(f_{k}\) in (3.4), we finally have \[\begin{split} u(t,x)=&\frac{B_{0}e^{-itB_{0}\alpha}} {8\pi^{2}i\sin(tB_{0})}\int_{0}^{\infty}\int_{0}^{2\pi}e^{-\frac{B_{0}(r_{1}^{2 }+r_{2}^{2})}{4i\tan(tB_{0})}}\\ &\times\sum_{k\in\mathbb{Z}}\Bigg{(}e^{ik(\theta_{1}-\theta_{2}- tB_{0})}I_{\alpha_{k}}\bigg{(}\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\bigg{)} \Bigg{)}f(r_{2},\theta_{2})r_{2}dr_{2}d\theta_{2}.\end{split} \tag{3.6}\] Now we consider the summation in \(k\). Let \(z=\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\) and \(\theta=\theta_{1}-\theta_{2}-tB_{0}\) and recall the following integral representation [28] of the modified Bessel function \(I_{\nu}\) \[I_{\nu}(z)=\frac{1}{\pi}\int_{0}^{\pi}e^{z\cos s}\cos(\nu s)ds-\frac{\sin(\pi \nu)}{\pi}\int_{0}^{\infty}e^{-z\cosh s}e^{-s\nu}ds. \tag{3.7}\] Recall \(\alpha_{k}=|\alpha+k|\) and \(\alpha\in(0,1)\), we need to consider \[\frac{1}{\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta}\int_{0}^{\pi}e^{z\cos s}\cos( \alpha_{k}s)ds, \tag{3.8}\] and \[\frac{1}{\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta}\sin(\pi\alpha_{k})\int_{0}^{ \infty}e^{-z\cosh s}e^{-s\alpha_{k}}ds. \tag{3.9}\] Similarly as in [18, Proposition 3.1], we use the Poisson summation formula \[\sum_{j\in\mathbb{Z}}\delta(x-2\pi j)=\sum_{k\in\mathbb{Z}}\frac{1}{2\pi}e^{ ikx},\] to obtain \[\frac{1}{\pi}\sum_{k\in\mathbb{Z}}e^{ik\theta}\cos(\alpha_{k}s)=\sum_{j\in \mathbb{Z}}\big{[}e^{is\alpha}\delta(\theta+s+2j\pi)+e^{-is\alpha}\delta( \theta-s+2j\pi)\big{]}.\] Therefore we have \[\begin{split}&\eqref{eq:2.1}\\ &=\sum_{j\in\mathbb{Z}}\int_{0}^{\pi}e^{z\cos s}\Bigg{(}e^{is \alpha}\delta(\theta+s+2\pi j)+e^{-is\alpha}\delta(\theta-s+2\pi j)\Bigg{)}ds \\ &=e^{z\cos(\theta)}e^{-i\alpha\theta}\sum_{j\in\mathbb{Z}}\chi_{[ -\pi,\pi]}(\theta+2\pi j)e^{-i2\pi j\alpha}.\end{split} \tag{3.10}\] Recall \(\theta=\theta_{1}-\theta_{2}-tB_{0}\in\mathbb{R}\) and observe that \[\theta+2\pi j_{0}\in(-\pi,\pi)\implies\theta+2\pi(j_{0}\pm m)\notin(-\pi, \pi),|m|\geq 1,\] the summation in \(j\) is only one term (except for \(\theta+2j_{0}\pi=\pm\pi\)). 
It is easy to verify that the identity (3.10) gives the same value at \(\theta+2j_{0}\pi=\pm\pi\). For all \(\theta\in\mathbb{R}\), there exists \(j_{0}\in\mathbb{Z}\) such that \(\theta+2j_{0}\pi\in[-\pi,\pi]\). Therefore, we get \[(3.8)=e^{z\cos(\theta)}e^{-i\alpha\theta}e^{-i2\pi j_{0}\alpha}\left\{\begin{array}{ll}1,&|\theta+2j_{0}\pi|<\pi\ ;\\ e^{-i2\pi\alpha}+1,&\theta+2j_{0}\pi=-\pi;\\ e^{i2\pi\alpha}+1,&\theta+2j_{0}\pi=\pi.\end{array}\right.\] Recall again that \(\alpha_{k}=|\alpha+k|,k\in\mathbb{Z}\) and \(\alpha\in(0,1)\); we have \[\alpha_{k}=|\alpha+k|=\begin{cases}k+\alpha,&k\geq 0;\\ -\alpha-k,&k\leq-1,\end{cases}\] and hence \[\sin(\pi\alpha_{k})=\sin(\pi|\alpha+k|)=\begin{cases}\cos k\pi\sin(\pi\alpha)=e^{ik\pi}\sin(\pi\alpha),&k\geq 0;\\ -\cos k\pi\sin(\pi\alpha)=-e^{ik\pi}\sin(\pi\alpha),&k\leq-1.\end{cases}\] We can compute the summation as follows \[\sum_{k\in\mathbb{Z}} \sin(\pi|\alpha+k|)e^{-s|\alpha+k|}e^{ik\theta}\] \[=\sin(\pi\alpha)\sum_{k\geq 0}e^{ik\pi}e^{-s(k+\alpha)}e^{ik\theta}-\sin(\pi\alpha)\sum_{k\leq-1}e^{ik\pi}e^{s(k+\alpha)}e^{ik\theta}\] \[=\sin(\pi\alpha)\Big{(}e^{-\alpha s}\sum_{k\geq 0}e^{ik(is+\pi+\theta)}-e^{\alpha s}\sum_{k\geq 1}e^{ik(is-\pi-\theta)}\Big{)}\] \[=\sin(\pi\alpha)\Big{(}\frac{e^{-\alpha s}}{1-e^{i(is+\pi+\theta)}}-\frac{e^{\alpha s}e^{i(is-\pi-\theta)}}{1-e^{i(is-\pi-\theta)}}\Big{)}\] \[=\sin(\pi\alpha)\Big{(}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s}}{1+e^{s+i\theta}}\Big{)}.\] Therefore, we see that \[(3.9)=\frac{\sin(\pi\alpha)}{\pi}\int_{0}^{\infty}e^{-z\cosh s}\Big{(}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s}}{1+e^{s+i\theta}}\Big{)}\,ds=\frac{\sin(\pi\alpha)}{\pi}\int_{\mathbb{R}}e^{-z\cosh s}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}\,ds.\] Combining (3.8) and (3.9) with (3.7) and inserting the resulting expression for \(\sum_{k\in\mathbb{Z}}e^{ik\theta}I_{\alpha_{k}}(z)\) into (3.6), we obtain the representation (3.1). This completes the proof of Proposition 3.1. ## 4. The Schrodinger propagator via the Schulman-Sunada formula In this section, we give an alternative construction of the Schrodinger propagator, based on the Schulman-Sunada formula. Let \(M=(0,\infty)_{r}\times\mathbb{S}^{1}_{\theta}\simeq\mathbb{R}^{2}\setminus\{0\}\) and let \(\tilde{M}=(0,\infty)_{r}\times\mathbb{R}_{\theta}\) denote its universal covering space. We construct the Schrodinger propagator \(e^{-itH_{\alpha,B_{0}}}\) on \(M\) by using the Schrodinger propagator \(e^{-it\tilde{H}_{\alpha,B_{0}}}\) (see the operator \(\tilde{H}_{\alpha,B_{0}}\) in (4.2) below) on \(\tilde{M}\). More precisely, by [27, (1)], we have \[e^{-itH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2})=\sum_{j\in\mathbb{Z}}e^{-it\tilde{H}_{\alpha,B_{0}}}(r_{1},\theta_{1}+2j\pi;r_{2},\theta_{2}). \tag{4.1}\] This is similar to the construction of the wave propagator on \(\mathbb{T}^{n}\), see [25, (3.5.12)]. In the following subsections, we will construct the Schrodinger propagator \(e^{-it\tilde{H}_{\alpha,B_{0}}}\). ### The eigenfunctions and eigenvalues Before we construct the propagator \(e^{-it\tilde{H}_{\alpha,B_{0}}}\), let us make some remarks on the differences between \(\tilde{H}_{\alpha,B_{0}}\) and \(H_{\alpha,B_{0}}\), and on the advantages of the former. First, we recall the proof of Proposition 2.1. From (1.3), in the polar coordinates \((r,\theta)\in M\), we have \[H_{\alpha,B_{0}}=-\partial_{r}^{2}-\frac{1}{r}\partial_{r}+\frac{1}{r^{2}}\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2},\] which acts on \(L^{2}(M,rdr\,d\theta)\).
For \(V_{k,m}(x)\) in (2.2) and \(\lambda_{k,m}\) in (2.1), we have shown that \[H_{\alpha,B_{0}}V_{k,m}(x)=\lambda_{k,m}V_{k,m}(x).\] We remark here that we chose \(e^{ik\theta}\) as an eigenfunction of the operator \(\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\) on \(L^{2}_{\theta}([0,2\pi))\), which satisfies \[\begin{cases}\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi(\theta)=\Big{(}k+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi(\theta),\\ \varphi(0)=\varphi(2\pi).\end{cases}\] Instead, we now consider the operator \[\tilde{H}_{\alpha,B_{0}}=-\partial_{r}^{2}-\frac{1}{r}\partial_{r}+\frac{1}{r^{2}}\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}, \tag{4.2}\] which acts on \(L^{2}(\tilde{M},rdr\,d\theta)\). We emphasize that here the variable \(\theta\) ranges over \(\mathbb{R}\) rather than over the compact manifold \(\mathbb{S}^{1}\). Then we choose \(e^{i(\tilde{k}-\alpha)\theta}\) as an eigenfunction of the operator \(\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\) on \(L^{2}_{\theta}(\mathbb{R})\), which satisfies \[\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi(\theta)=\Big{(}\tilde{k}+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi(\theta). \tag{4.3}\] It is worth pointing out that \(\tilde{k}\in\mathbb{R}\) is a real number, while \(k\in\mathbb{Z}\). More importantly, the parameter \(\alpha\) no longer appears in the eigenvalue on the right-hand side of (4.3): it has been moved into the factor \(e^{i(\tilde{k}-\alpha)\theta}\), which simplifies the eigenfunctions. Now we modify the argument of Proposition 2.1 to solve \[\tilde{H}_{\alpha,B_{0}}g(x)=\lambda g(x) \tag{4.4}\] and obtain the eigenfunctions of \(\tilde{H}_{\alpha,B_{0}}\). Define the Fourier transform \(F_{\theta\to\tilde{k}}\) with respect to the variable \(\theta\) by \[F_{\theta\to\tilde{k}}f(r,\tilde{k})=\frac{1}{2\pi}\int_{\mathbb{R}}e^{i\tilde{k}\theta}f(r,\theta)\,d\theta:=\hat{f}(r,\tilde{k}),\quad\tilde{k}\in\mathbb{R}. \tag{4.5}\] By taking the Fourier transform of (4.4), in contrast to (2.8), we obtain \[\hat{g}^{\prime\prime}(r,\tilde{k})+\frac{1}{r}\hat{g}^{\prime}(r,\tilde{k})-\frac{1}{r^{2}}\Big{(}\tilde{k}+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\hat{g}(r,\tilde{k})=-\lambda\hat{g}(r,\tilde{k}). \tag{4.6}\] Let \[\psi_{\tilde{k}}(s)=\Big{(}\frac{2s}{B_{0}}\Big{)}^{-\frac{|\tilde{k}|}{2}}e^{\frac{s}{2}}\hat{g}\Big{(}\sqrt{\frac{2s}{B_{0}}},\tilde{k}\Big{)};\] then \(\psi_{\tilde{k}}(s)\) satisfies \[s\psi_{\tilde{k}}^{\prime\prime}(s)+(1+|\tilde{k}|-s)\psi_{\tilde{k}}^{\prime}(s)-\frac{1}{2}\Big{(}1+|\tilde{k}|+\tilde{k}-\frac{\lambda}{B_{0}}\Big{)}\psi_{\tilde{k}}(s)=0, \tag{4.7}\] which coincides with (2.9) after replacing \(k+\alpha\) there by \(\tilde{k}\).
By using Lemma 2.5 again, we have two linearly independent solutions of (4.7), hence two linearly independent solutions of (4.6) are given by \[\hat{g}^{1}(\lambda;r,\tilde{k}) =r^{|\tilde{k}|}M\Big{(}\tilde{\beta}(\tilde{k},\lambda),\tilde{ \gamma}(\tilde{k}),\frac{B_{0}r^{2}}{2}\Big{)}e^{-\frac{B_{0}r^{2}}{4}}\] \[\hat{g}^{2}(\lambda;r,\tilde{k}) =r^{|\tilde{k}|}U\Big{(}\tilde{\beta}(\tilde{k},\lambda),\tilde{ \gamma}(\tilde{k}),\frac{B_{0}r^{2}}{2}\Big{)}e^{-\frac{B_{0}r^{2}}{4}}\] with \[\tilde{\beta}(\tilde{k},\lambda) =\frac{1}{2}\Big{(}1+\tilde{k}+|\tilde{k}|-\frac{\lambda}{B_{0}} \Big{)},\] \[\tilde{\gamma}(\tilde{k}) =1+|\tilde{k}|.\] Therefore, the general solution of (4.7) is given by \[\hat{g}(r,\tilde{k})=A_{\tilde{k}}\hat{g}^{1}(\lambda;r,\tilde{k})+B_{\tilde{ k}}\hat{g}^{2}(\lambda;r,\tilde{k}).\] where \(A_{\tilde{k}},B_{\tilde{k}}\) are two constants which depend on \(\tilde{k}\in\mathbb{R}\). Let \[m:=-\tilde{\beta}(\tilde{k},\lambda)=-\frac{1}{2}\Big{(}1+\tilde{k}+|\tilde{ k}|-\frac{\lambda}{B_{0}}\Big{)}.\] Similar as above, we use this asymptotic Lemma 2.6 to conclude that \(m\in\mathbb{N}\) again, we omit the details. Therefore, we must have \[\mathbb{N}\ni m=-\frac{1}{2}\Big{(}1+\tilde{k}+|\tilde{k}|-\frac{\lambda}{B_ {0}}\Big{)},\] and \[\hat{g}(r,\tilde{k})=r^{|\tilde{k}|}e^{-\frac{B_{0}r^{2}}{4}}\,P_{\tilde{k}- \alpha,m}\Big{(}\frac{B_{0}r^{2}}{2}\Big{)}.\] Therefore, we obtain a complete set of generalized eigenfunctions of \(\tilde{H}_{\alpha,B_{0}}\) \[\Big{\{}U_{m}(x,\tilde{k}):m\in\mathbb{N},\tilde{k}\in\mathbb{R}\Big{\}}\] where \[U_{m}(x,\tilde{k})=|x|^{|\tilde{k}|}e^{-\frac{B_{0}|x|^{2}}{4}}\,P_{\tilde{k}- \alpha,m}\bigg{(}\frac{B_{0}|x|^{2}}{2}\bigg{)}e^{i(\tilde{k}-\alpha)\theta} \tag{4.8}\] which belongs to \(L^{2}(\tilde{M},rdr\,d\theta)\). Thus from \(-m=\frac{1}{2}\Big{(}1+\tilde{k}+|\tilde{k}|-\frac{\lambda}{B_{0}}\Big{)}\), we solve (4.4) to obtain the eigenvalues \(\lambda\) of \(\tilde{H}_{\alpha,B_{0}}\) \[\lambda_{\tilde{k},m}=(2m+1+|\tilde{k}|+\tilde{k})B_{0},\quad\tilde{k}\in \mathbb{R},\,m\in\mathbb{N}.\] We obtain analogue of (2.6) by using (2.5) and (2.3) \[\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+|\tilde{k}|+1)}\Bigg{(} \begin{array}{c}m+|\tilde{k}|\\ m\end{array}\Bigg{)}^{2}P_{\tilde{k}-\alpha,m}(a)P_{\tilde{k}-\alpha,m}(b)\] \[=\frac{e^{\frac{|\tilde{k}|c}{2}}}{(ab)^{\frac{|\tilde{k}|}{2}}(1 -e^{-c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{|\tilde{k}|}\left( \frac{2\sqrt{abe}e^{-\frac{c}{2}}}{1-e^{-c}}\right).\] ### Construction of \(e^{-it\tilde{H}_{\alpha,B_{0}}}\) Now we construct the propagator \(e^{-it\tilde{H}_{\alpha,B_{0}}}\) by using the above eigenfunctions. Let \(\tilde{U}_{m}(x,\tilde{k})\) be the \(L^{2}\)-normalization of \(U_{m}(x,\tilde{k})\) in (4.8). 
We write initial data \(f(x)\in L^{2}\) as \[f(x)=\sum_{m\in\mathbb{N}}\int_{\mathbb{R}}c_{m}(\tilde{k})\tilde{U}_{m}(x, \tilde{k})\,d\tilde{k}\] where \[c_{m}(\tilde{k})=\int_{\mathbb{R}^{2}}f(x)\overline{\tilde{U}_{m}(x,\tilde{k} )}\,dx.\] Using the Fourier transform (4.5), we solve \[\begin{cases}\big{(}i\partial_{t}-\tilde{H}_{\alpha,B_{0}}\big{)}u(t,x)=0, \quad(t,x)\in\mathbb{R}\times\tilde{M}\\ u(0,x)=f(x).\end{cases}\] to obtain \[u(t,x)=\sum_{m\in\mathbb{N}}\int_{\mathbb{R}}e^{-it\lambda_{\tilde{k},m}}\left( \int_{\mathbb{R}^{2}}f(y)\overline{\tilde{U}_{m}(y,\tilde{k})}\,dy\right) \tilde{U}_{m}(x,\tilde{k})\,d\tilde{k}.\] By repeating the the proof of Proposition 3.1, we similarly show that \[u(t,x)= \frac{B_{0}}{8\pi^{2}i\sin(tB_{0})}\int_{0}^{\infty}\int_{\mathbb{ R}}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4i\tan(tB_{0})}}\] \[\times\int_{\mathbb{R}}\left(e^{i(\tilde{k}-\alpha)(\theta_{1}- \theta_{2}-tB_{0})}I_{|\tilde{k}|}\bigg{(}\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0 })}\bigg{)}\right)d\tilde{k}f(r_{2},\theta_{2})r_{2}dr_{2}d\theta_{2}.\] In contrast to (3.6), the difference here is that we replace the summation in \(k\in\mathbb{Z}\) by integration on \(\tilde{k}\in\mathbb{R}\). Hence the kernel of \(e^{-it\tilde{H}_{\alpha,B_{0}}}\) is \[\tilde{K}_{S}(x,y)= \frac{B_{0}}{4\pi i\sin(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{ 2})}{4i\tan(tB_{0})}}\] \[\times\int_{\mathbb{R}}\left(e^{i(\tilde{k}-\alpha)(\theta_{1}- \theta_{2}-tB_{0})}I_{|\tilde{k}|}\bigg{(}\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0 })}\bigg{)}\right)d\tilde{k},\] where \(x=(r_{1},\theta_{1})\in\tilde{M}\) and \(y=(r_{2},\theta_{2})\in\tilde{M}\). Now, instead of summing in \(k\) as before, we consider the integration in \(\tilde{k}\). Again by letting \(z=\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\) and \(\theta=\theta_{1}-\theta_{2}-tB_{0}\) and using (3.7), we compute that \[\frac{1}{\pi}\int_{\mathbb{R}}e^{i\tilde{k}\theta}\int_{0}^{\pi} e^{z\cos s}\cos(|\tilde{k}|s)dsd\tilde{k}\] \[=\frac{1}{2\pi}e^{z\cos\theta}\Big{(}\chi_{[0,\pi]}(\theta)+\chi_{ [0,\pi]}(-\theta)\Big{)}=e^{z\cos\theta}\chi_{[-\pi,\pi]}(\theta),\] and \[\frac{1}{\pi}\int_{\mathbb{R}}e^{i\tilde{k}\theta}\sin(\pi|\tilde{k} |)\int_{0}^{\infty}e^{-z\cosh s}e^{-s|\tilde{k}|}ds\,d\tilde{k}\] \[=\frac{1}{\pi}\int_{0}^{\infty}e^{-z\cosh s}\Big{(}\int_{0}^{ \infty}e^{i\tilde{k}\theta}\frac{e^{i\pi\tilde{k}}-e^{-i\pi\tilde{k}}}{2i}e^{- s\tilde{k}}d\tilde{k}\] \[\qquad+\int_{-\infty}^{0}e^{i\tilde{k}\theta}\frac{e^{-i\pi \tilde{k}}-e^{i\pi\tilde{k}}}{2i}e^{s\tilde{k}}d\tilde{k}\Big{)}\,ds\] \[=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-z\cosh s}\big{(}\frac{1 }{\theta+\pi+is}-\frac{1}{\theta-\pi+is}\big{)}ds\] Therefore, we obtain \[\tilde{K}_{S}(x,y)=\frac{B_{0}}{8\pi^{2}i\sin(tB_{0})}e^{-\frac{B _{0}(r_{+}^{2}+r_{0}^{2})}{4i\tan(tB_{0})}}e^{-i\alpha\theta}\] \[\times\Big{(}e^{z\cos\theta}\chi_{[-\pi,\pi]}(\theta)-\frac{1}{2 \pi}\int_{-\infty}^{\infty}e^{-z\cosh s}\big{(}\frac{1}{\theta+\pi+is}-\frac{ 1}{\theta-\pi+is}\big{)}ds\Big{)}.\] Finally, by using (4.1), we have that \[e^{-itH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2})\] \[=\frac{B_{0}}{8\pi^{2}i\sin(tB_{0})}e^{-\frac{B_{0}(r_{+}^{2}+r_ {0}^{2})}{4i\tan(tB_{0})}}\sum_{j\in\mathbb{Z}}e^{-i\alpha(\theta+2j\pi)} \Big{(}e^{z\cos(\theta+2j\pi)}\chi_{[-\pi,\pi]}(\theta+2j\pi)\] \[\qquad-\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-z\cosh s}\big{(} \frac{1}{(\theta+2j\pi)+\pi+is}-\frac{1}{(\theta+2j\pi)-\pi+is}\big{)}ds\Big{)}.\] Due to the period property of \(\cos\) function and for all \(\theta\in\mathbb{R}\), 
there exists \(j_{0}\in\mathbb{Z}\) such that \(\theta+2j_{0}\pi\in[-\pi,\pi)\), the first term in the big bracket becomes \[e^{z\cos\theta}e^{-i\alpha\theta}\sum_{j\in\mathbb{Z}}e^{-i\alpha 2j\pi}\chi_{[-\pi,\pi]}(\theta+2j\pi)\] \[=e^{z\cos\theta}e^{-i\alpha\theta}e^{-i2\pi j_{0}\alpha}\times \left\{\begin{array}{ll}1,&|\theta+2j_{0}\pi|<\pi;\\ e^{-i2\pi\alpha}+1,&\theta+2j_{0}\pi=-\pi;\\ e^{i2\pi\alpha}+1,&\theta+2j_{0}\pi=\pi.\end{array}\right.\] which is the same to (3.10). Hence it is the same to the first term of (3.1). For the second term in the big bracket, we use the formula \[\sum_{j\in\mathbb{Z}}\frac{e^{-2\pi i\alpha j}}{\sigma+2\pi j}=\frac{ie^{i \alpha\sigma}}{e^{i\sigma}-1},\quad\alpha\in(0,1),\quad\sigma\in\mathbb{C} \setminus 2\pi\mathbb{Z},\] to obtain \[\sum_{j\in\mathbb{Z}}e^{-2\pi i\alpha j}\big{(}\frac{1}{(\theta+2j\pi)+\pi-is }-\frac{1}{(\theta+2j\pi)-\pi-is}\big{)}\] \[=2\sin(\pi\alpha)\frac{e^{\alpha(s+i\theta)}}{1+e^{s+i\theta}}.\] Now we consider the second term \[-e^{-i\theta\alpha}\frac{2\sin(\pi\alpha)}{2\pi}e^{i\theta\alpha} \int_{-\infty}^{\infty}e^{-z\cosh s}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\,ds\] \[= -\frac{\sin(\pi\alpha)}{\pi}\int_{-\infty}^{\infty}e^{-z\cosh s} \frac{e^{-\alpha s}}{1+e^{-s+i\theta}}\,ds\] Recall \(\theta=\theta_{1}-\theta_{2}-tB_{0}\) and \(z=\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\), we obtain \[K_{S}(x,y) =\frac{B_{0}e^{-itB\alpha}}{8\pi^{2}i\sin(tB_{0})}e^{\frac{iB_{0} (r_{1}^{2}+r_{2}^{2})}{4\tan(tB_{0})}}\] \[\times\Big{[}e^{\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\cos( \theta_{1}-\theta_{2}-tB_{0})}e^{-i\alpha(\theta_{1}-\theta_{2}-tB_{0}+2j_{0} \pi)}\chi(\theta,j_{0})\] \[\quad-\frac{\sin(\pi\alpha)}{\pi}\int_{\mathbb{R}}e^{-\frac{B_{0 }r_{1}r_{2}}{2i\sin(tB_{0})}\cosh s}\frac{e^{-\alpha s}}{1+e^{-s+i(\theta_{1} -\theta_{2}-tB_{0})}}\,ds\Big{]},\] which is exact same to (3.1). ## 5. proof of Theorem 1.1 In this section, we prove the main Theorem 1.1 by using (3.1). We first prove the dispersive estimate (1.7). To this end, it is enough to prove \[\Big{|}\int_{\mathbb{R}}e^{-\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\cosh s} \frac{e^{-\alpha s}}{1+e^{-s+i(\theta_{1}-\theta_{2}-tB_{0})}}\,ds\Big{|}\leq C,\] where \(C\) is a constant independent of \(t\), \(r_{1},r_{2}\) and \(\theta_{1},\theta_{2}\). We notice that \[\int_{\mathbb{R}}e^{-\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\cosh s }\frac{e^{-\alpha s}}{1+e^{-s+i(\theta_{1}-\theta_{2}-tB_{0})}}\,ds\] \[=\int_{0}^{\infty}e^{-\frac{B_{0}r_{1}r_{2}}{2i\sin(tB_{0})}\cosh s }\Big{(}\frac{e^{-\alpha s}}{1+e^{-s+i(\theta_{1}-\theta_{2}-tB_{0})}}+\frac {e^{\alpha s}}{1+e^{s+i(\theta_{1}-\theta_{2}-tB_{0})}}\Big{)}\,ds\] then we just need to verify that, for \(\theta=\theta_{1}-\theta_{2}-tB_{0}\), \[\int_{0}^{\infty}\Big{|}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s }}{1+e^{s+i\theta}}\Big{|}\,ds\lesssim 1,\] where the implicit constant is independent of \(\theta\). 
In fact, \[\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s}}{1+e^{s+i\theta}}\] \[=\frac{\cosh(\alpha s)e^{-i\theta}+\cosh((1-\alpha)s)}{\cos\theta+\cosh s}\] \[=\frac{\cosh(\alpha s)\cos\theta+\cosh((1-\alpha)s)-i\sin\theta\cosh(\alpha s)}{2(\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2}))}\] \[=\frac{2\cos^{2}(\frac{\theta}{2})\cosh(\alpha s)+(\cosh((1-\alpha)s)-\cosh(\alpha s))-2i\sin(\frac{\theta}{2})\cos(\frac{\theta}{2})\cosh(\alpha s)}{2(\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2}))}.\] Since \(\cosh x-1\sim\frac{x^{2}}{2}\) and \(\sinh x\sim x\) as \(x\to 0\), while \(\cosh x\sim e^{x}\) and \(\sinh x\sim e^{x}\) as \(x\to\infty\), we have \[\int_{0}^{\infty}\Big{|}\frac{\cos^{2}(\frac{\theta}{2})\cosh(\alpha s)}{\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\lesssim\int_{0}^{1}\frac{2|\cos(\frac{\theta}{2})|}{s^{2}+(2|\cos(\frac{\theta}{2})|)^{2}}ds+\int_{1}^{\infty}e^{(\alpha-1)s}ds\lesssim 1.\] Similarly, we obtain \[\int_{0}^{\infty}\Big{|}\frac{\sin(\frac{\theta}{2})\cos(\frac{\theta}{2})\cosh(\alpha s)}{\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\lesssim 1.\] Finally, we verify that \[\int_{0}^{\infty}\Big{|}\frac{\cosh((1-\alpha)s)-\cosh(\alpha s)}{\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\lesssim\int_{0}^{1}\frac{|\frac{(1-\alpha)^{2}}{2}-\frac{\alpha^{2}}{2}|s^{2}}{s^{2}}ds+\int_{1}^{\infty}\big{(}e^{-\alpha s}+e^{(\alpha-1)s}\big{)}ds\lesssim 1.\] Therefore, we obtain the bound (1.7) as desired. Next we prove (1.8). For \(T\in(0,\frac{\pi}{2B_{0}})\) and \(t\in(0,T)\), we have \[\frac{2}{\pi}\leq\frac{\sin(tB_{0})}{tB_{0}}\leq 1,\] so the dispersive estimate (1.7) gives \[\|e^{-itH_{\alpha,B_{0}}}\|_{L^{1}(\mathbb{R}^{2})\to L^{\infty}(\mathbb{R}^{2})}\lesssim\frac{1}{t},\quad\forall t\in(0,T], \tag{5.1}\] when \(T\in(0,\frac{\pi}{2B_{0}}]\). On the other hand, by the spectral theorem, we have the energy estimate \[\|e^{-itH_{\alpha,B_{0}}}\|_{L^{2}(\mathbb{R}^{2})\to L^{2}(\mathbb{R}^{2})}\lesssim 1. \tag{5.2}\] To prove the Strichartz estimates (1.8), we recall the abstract Keel-Tao argument [21]. **Theorem 5.1**.: _(Keel-Tao [21]) Let \((X,d\mu)\) be a measure space and \(H\) a Hilbert space. Suppose that, for each time \(t\in\mathbb{R}\), \(U(t):H\to L^{2}(X)\) satisfies the energy estimate_ \[\|U(t)\|_{H\to L^{2}}\leq C,\quad t\in\mathbb{R},\] _and that for some \(\sigma>0\) the dispersive estimate_ \[\|U(t)U(s)^{*}f\|_{L^{\infty}(X)}\leq C|t-s|^{-\sigma}\|f\|_{L^{1}(X)},\quad t\neq s,\] _holds. Then the estimate_ \[\|U(t)f\|_{L^{q}_{t}L^{r}(X)}\lesssim\|f\|_{L^{2}(X)}\] _holds for all_ \[(q,r)\in\Lambda:=\Big{\{}(q,r)\in[2,+\infty]\times[2,+\infty):\frac{2}{q}=n(\frac{1}{2}-\frac{1}{r})\Big{\}}.\] Applying this theorem to \(U(t)=e^{-itH_{\alpha,B_{0}}}\chi_{[0,T]}(t)\), in view of (5.1) and (5.2), we obtain (1.8). This finishes the proof of Theorem 1.1.
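(As a closing numerical illustration, a sanity check only and not part of the proof, the uniform-in-\(\theta\) boundedness of the integral used above in the proof of (1.7) can be checked directly; a minimal sketch follows.)

```python
# Numerical check that the integral bounding the kernel stays bounded in theta.
import numpy as np
from scipy.integrate import quad

def integrand(s, theta, alpha):
    val = (np.exp(-alpha * s) / (1 + np.exp(-s + 1j * theta))
           + np.exp(alpha * s) / (1 + np.exp(s + 1j * theta)))
    return abs(val)

alpha = 0.3  # any fixed alpha in (0, 1)
thetas = np.linspace(-np.pi, np.pi, 49)
vals = [quad(integrand, 0.0, 50.0, args=(th, alpha))[0] for th in thetas]
print(max(vals))  # remains of size O(1), uniformly in theta
```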
2309.08260
Using photon-hadron production to impose restrictions on heavy-hadrons fragmentation functions
Fragmentation Functions (FF) are universal non-perturbative objects that model hadronization in a general class of processes. They are mainly extracted from experimental data, hence constraining the parameters of the corresponding fits is crucial for achieving reliable results. As expected, the production of lighter hadrons is favoured w.r.t. heavy ones, thus we would like to exploit the precise knowledge of pion FFs to constrain the shape of kaon (or heavier) FFs. In this talk, we show how imposing specific cuts on photon-hadron production leads to relations between the $u$-started FFs. For doing so, we exploit the reconstruction of momentum fractions in terms of experimentally-accessible quantities and introduce NLO QCD + LO QED corrections to reduce the theoretical uncertainties.
German F. R. Sborlini, Roger Hernández-Pinto, Salvador Ochoa-Oregon, David F. Rentería-Estrada
2023-09-15T09:14:13Z
http://arxiv.org/abs/2309.08260v1
# Using photon-hadron production to impose restrictions on heavy-hadrons fragmentation functions ###### Abstract: Fragmentation Functions (FF) are universal non-perturbative objects that model hadronization in some general kind of processes. They are mainly extracted from experimental data, hence constraining the parameters of the corresponding fits is crucial for achieving reliable results. As expected, the production of lighter hadrons is favoured w.r.t. heavy ones, thus we would like to exploit the precise knowledge of pion FFs to constraint the shape of kaon (or heavier) FFs. In this talk, we show how imposing specific cuts on photon-hadron production leads to relations between the \(u\)-started FFs. For doing so, we exploit the reconstruction of momentum fractions in terms of experimentally-accessible quantities and introduce NLO QCD + LO QED corrections to reduce the theoretical uncertainties. ## 1 Motivation A precise phenomenological description of particle production in high-energy collisions is crucial to understand the fundamental constituents of matter. Our current knowledge relies on the Standard Model (SM), a gauge theory that successfully predicts most of the measurements obtained in hadron colliders. However, precision plays a fundamental role, since tiny discrepancies between theory and data could hide new physics signals. Solving the complicated equations of SM to extract accurate phenomenological predictions is plagued with challenges and bottlenecks. One of these bottlenecks is related to the description of the hadronization process, in which a bunch of partons (gluons, quarks or other fundamental particles) originate hadrons through non-perturbative interactions. Even if there are models [1] that could be use to approximate this process, exact solutions are not available. Thus, in order to describe the production of pions, kaons and other hadrons, we rely on Fragmentation Functions (FFs), which are extracted from analysis and fits of experimental data [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Being able to experimentally constraint these FFs is important to reduce the fit errors and obtain more precise predictions. In Ref. [12], we propose to use photon-hadron production at colliders to improve the extraction of FFs, specially for heavy hadrons. Since the photon acts as a clean probe of the parton collision, it could help us to reconstruct the parton kinematics with more precision. This knowledge can be then used to relate FFs of different hadrons by comparing the ratios of their production rates. The aim of this article is proving that we can constrain \(d^{K}(z)/d^{\pi}(z)\) (i.e. the ratio of pion and kaon FF) exploiting the ratio of their cross-sections (i.e. \(d\sigma_{\gamma+K}/d\sigma_{\gamma+\pi}\)) after imposing proper cuts. ## 2 Reconstructing the parton kinematics In the context of the parton model, it is worth noticing that momentum fractions are not physical quantities; in other words, we can not directly measure them. Still, they allow us to understand what is going on inside the hadrons and we can relate them to experimentally-accessible quantities. For instance, inspired by the LO kinematics of photon+hadron production, we can define \[x_{1,REC} = \frac{p_{T}^{\gamma}}{\sqrt{s_{CM}}}\left(\exp(\eta^{\pi})+\exp( \eta^{\gamma})\right)\,, \tag{1}\] \[z_{REC} = \frac{p_{T}^{\pi}}{p_{T}^{\gamma}}\,. \tag{2}\] Whilst the r.h.s. of Eqs. (1)-(2) correspond to a function of \(\{p_{T}^{\gamma},\eta^{\gamma},p_{T}^{\pi},\eta^{\pi}\}\), the l.h.s. 
provide an estimator of the momentum fraction of the parton \(a_{1}\) entering the reaction \(a_{1}+a_{2}\to\gamma+a_{3}\), and the momentum carried by the pion in the hadronization process \(a_{3}\to\pi\), respectively. ### Reconstruction at NLO (and beyond) The estimators in Eqs. (1)-(2) are strictly valid at tree-level because the presence of real radiation associated to higher-order QCD corrections introduces new events with different parton-level kinematics. For instance, at next-to-leading order (NLO) we need to combine events involving 2-to-3 (real radiation) and 2-to-2 (virtual corrections) processes. In order to do so, we first discretize the external (experimentally-accessible) variables \(\mathcal{\bar{V}}_{\rm Exp}\) and create bins. Then, given a point in the corresponding grid, \(p_{j}=\{p_{T}^{\gamma},\eta^{\gamma},\phi^{\gamma},p_{T}^{\pi},\eta^{\pi},\phi^{ \pi}\}\), we calculate the integrated cross-section \(\sigma(p_{j})\) and define \[(x_{1})_{j} = \sum_{i}(x_{1})_{i}\frac{d\sigma}{dx_{1}}(p_{j};(x_{1})_{i})\,, \tag{3}\] \[(z)_{j} = \sum_{i}(z)_{i}\frac{d\sigma}{dz}(p_{j};(z)_{i})\,, \tag{4}\] that provide a cross-section-weighted approximation to the partonic momentum fractions \(x_{1}\) and \(z\). Once this is done, we need to find the maps \[Y_{REC}:=\bar{\mathcal{V}}_{\rm Exp}\to Y_{REAL}\,, \tag{5}\] with \(Y=\{x_{1},x_{2},z\}\) and \(Y_{REAL}\) given by Eqs. (3)-(4), that will allow us to reconstruct the partonic momentum fractions using only information obtained from experimentally-accessible variables. In Ref. [13], we proposed the LO-inspired relations Eqs. (1)-(2) as an approximated map, and we showed that these formula were highly correlated to the _true_ momentum fractions including NLO QCD effects. Exploiting the recent advances in machine-learning techniques, in Ref. [14], we also used neural networks to find the maps in Eqs. (1)-(2). The resulting correlation plots are shown in Fig. 1, which exhibit a very accurate reconstruction of the momentum fractions \(x_{1}\) and \(z\), with minimal human intervention (no need to define a function basis for the fit). ## 3 Constraining Fragmentation Functions Once the approximated momentum fractions are described in terms of the external variables, we have access to \(z\). We will use this fact to extract information about the FFs. First, we consider the differential cross-section for the process \(H_{1}+H_{2}\to h_{i}+\gamma\) as a function of the _real_ momentum Figure 1: Correlation plots of the _real_ vs. reconstructed momentum fractions \(x_{1}\) (left) and \(z\) (right), including up to NLO QCD + LO QED effects. We used an optimized neural network based on multilayer perceptrons [14]. \(\{x_{REAL},z_{REAL}\}\) are the _true_ momentum fractions of the events generated by the simulator. fraction: \[\frac{d\sigma^{h_{i}}}{dz_{REAL}} = \int\,dx_{1}dx_{2}dz\,\sum_{a_{1},a_{2},a_{3}}\,d^{h_{i}}_{a_{3}}(z )f^{H_{1}}_{a_{1}}(x_{1})f^{H_{1}}_{a_{1}}(x_{2})\,d\hat{\sigma}_{a_{1}a_{2} \to a_{3}\gamma}\,\delta(z-z_{REAL}) \tag{6}\] \[= \sum_{a_{1},a_{2},a_{3}}\,d^{h_{i}}_{a_{3}}(z_{REAL})\,g_{a_{3}}(z _{REAL})\,,\] where \(d^{h_{i}}_{a_{3}}(z)\) is the FF associated to a parton \(a_{3}\) that hadronizes into \(h_{i}\) carrying a momentum fraction \(z\). In order to have the second line of Eq. (6), we are neglecting the scale dependence, thus having a perfect factorization. Notice that \(g_{a_{3}}(z)\) is independent on the final state hadron \(h_{i}\). At this point, our aim is clear: exploit Eq. 
(6) to find relations among FFs for different hadrons. In particular, keeping in mind photon-hadron production, we perform the following approximations: 1. Since \(z=p_{T}^{h_{i}}/p_{T}^{\gamma}=z_{REC}\) is strictly valid at tree-level, we can impose \(|\eta|<0.5\) to keep mainly events with Born-level kinematics and use \(z\approx z_{REC}\) even when including up to NLO QCD + LO QED corrections [15]. 2. The \(qg\)-initiated channel is roughly 10 times larger than the others, mainly due to gluon PDF enhancement. Having in mind the LO picture, this implies that the dominant production channel at parton level is \(q+g\to q+\gamma\), or equivalently that \(a_{3}\) is a quark. 3. As a consequence of a factor \(e_{q}^{2}\) in the matrix element, U-channels are 4 times larger than D-channels. As a consequence of 1, 2 and 3, together with the fact that \(u\) is the dominant U-sector quark flavour inside the proton, Eq. (6) leads to \[R^{K/\pi}(d\sigma)=\frac{d\sigma^{K}/dz_{REC}}{d\sigma^{\pi}/dz_{REC}}\approx\frac{d^{K}_{u}(z_{REC})}{d^{\pi}_{u}(z_{REC})}=R^{K/\pi}(d_{u})\,, \tag{7}\] where we achieve a relation between kaon and pion \(u\)-started FFs. We considered two initial scenarios to test the validity of this approximation. On one side, we fixed the reference energy scale to \(\mu=\bar{Q}=26\) GeV. On the other, we chose the default definition \(\mu=(p_{T}^{h_{i}}+p_{T}^{\gamma})/2\), which changes event-by-event. In both cases, \(R^{K/\pi}(d_{u})\) and \(R^{K/\pi}(d\sigma)\) exhibit a rather similar shape and they overlap within their corresponding error bands1. In Fig. 2 (left) we show the results fixing the energy scale to \(\mu=\bar{Q}\) without further cuts (except those mentioned in 1). Footnote 1: As usual, we obtained these bands by varying a factor 2 up and down the renormalization and factorization energy scales. More details are available in Ref. [12]. ### Enhancing different partonic channels From Eq. (6), we appreciate that the sum over quark flavours spoils a perfect cancellation of \(g_{a_{3}}\) in the cross-section ratios considered in Eq. (7). Thus, we can impose additional kinematical cuts to enhance even more the contribution of the \(u\)-quark channel. By taking a look at different PDF sets, we notice that \(u\) is favoured w.r.t. \(d\) for \(x\in(0.03,0.5)\). Thus, we used the reconstructed \(x\) momentum fractions from Eq. (1) and selected those events fulfilling \[0.03\leq\{(x_{1})_{REC},(x_{2})_{REC}\}\leq 0.5\,. \tag{8}\] Notice that this cut is totally realistic because \(x_{REC}\) is expressed in terms of experimentally-accessible quantities. In Fig. 2 (right) we show the results of this new scenario, where we appreciate that \(R^{K/\pi}(d_{u})\) and \(R^{K/\pi}(d\sigma)\) are much closer. Furthermore, the overlap of their error bands is larger, which indicates that Eq. (7) is a good approximation.
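For concreteness, the LO estimators of Eqs. (1)-(2) together with the cut of Eq. (8) amount to a few lines of code; the following is a minimal sketch, where the \(x_{2}\) estimator, the numerical value of \(\sqrt{s_{CM}}\) and all function and variable names are our own illustrative choices.

```python
# Sketch of the LO-inspired reconstruction (Eqs. (1)-(2)) and the cut of Eq. (8).
import numpy as np

SQRT_S_CM = 13000.0  # assumed centre-of-mass energy in GeV (illustrative value)

def reconstruct(pt_gamma, eta_gamma, pt_h, eta_h, sqrt_s=SQRT_S_CM):
    """Return (x1_rec, x2_rec, z_rec) from photon and hadron kinematics."""
    x1_rec = pt_gamma / sqrt_s * (np.exp(eta_h) + np.exp(eta_gamma))    # Eq. (1)
    # x2 estimator by symmetry of the LO 2 -> 2 kinematics (our assumption)
    x2_rec = pt_gamma / sqrt_s * (np.exp(-eta_h) + np.exp(-eta_gamma))
    z_rec = pt_h / pt_gamma                                             # Eq. (2)
    return x1_rec, x2_rec, z_rec

def passes_u_enhancing_cut(x1_rec, x2_rec, lo=0.03, hi=0.5):
    """Cut of Eq. (8), enhancing the u-quark initiated contribution."""
    return lo <= x1_rec <= hi and lo <= x2_rec <= hi
```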
Once the momentum fractions are expressed in terms of measurable quantities (such as \(p_{T}\) or \(\eta\)), we proceed to study cross-section ratios for different hadrons in the final state. By using proper approximations, we manage to relate these ratios to FF ratios. This means that, if we are able to accurately determine FFs for hadron \(h_{1}\), then we can constrain the FF of another hadron \(h_{2}\) by computing \(R^{h_{2}/h_{1}}(d\sigma)\) as defined in Eq. (7). By imposing optimized kinematical cuts (as the ones described in Sec. 3.1), we can enhance the contribution of different partonic channels and, in this way, extract more information about the FFs. In the future, we plan to implement machine-learning techniques to optimize the cuts and better constrain FFs for heavy hadrons. ## Acknowledgments This work is supported by the Spanish Government (Agencia Estatal de Investigacion MCIN /AEI/10.13039/501100011033) Grants No. PID2020-114473GB-I00, PID2022-141910NB-I00; Generalitat Valenciana Grant No. PROMETEO/2021/071. G.S. is supported by H2020-MSCA-COFUND USAL4EXCELLENCE-PROOPI-391 project under Grant Agreement No 101034371. R.H.P. is supported by CONACyT Project No. 320856 (_Paradigmas y Controversias de la Ciencia 2022_), _Ciencia de Frontera 2021-2042_ and _Sistema Nacional de Investigadores_. Figure 2: Comparison of the ratios \(R^{K/\pi}(d_{u})\) (black dashed) and \(R^{K/\pi}(d\sigma)\) (green dashed) including up to NLO QCD and LO QED effects. The central energy scale is fixed to \(\mu=\bar{Q}\). We show two scenarios: (left) without additional cuts and (right) imposing \(0.03\leq\{(x_{1})_{REC},(x_{2})_{REC}\}\leq 0.5\).
2309.07153
Finding Influencers in Complex Networks: An Effective Deep Reinforcement Learning Approach
Maximizing influences in complex networks is a practically important but computationally challenging task for social network analysis, due to its NP-hard nature. Most current approximation or heuristic methods either require tremendous human design efforts or achieve unsatisfying balances between effectiveness and efficiency. Recent machine learning attempts only focus on speed but lack performance enhancement. In this paper, different from previous attempts, we propose an effective deep reinforcement learning model that achieves superior performance over the best traditional influence maximization algorithms. Specifically, we design an end-to-end learning framework that combines a graph neural network as the encoder and reinforcement learning as the decoder, named DREIM. Through extensive training on small synthetic graphs, DREIM outperforms the state-of-the-art baseline methods on very large synthetic and real-world networks in terms of solution quality, and we also empirically show its linear scalability with regard to the network size, which demonstrates its superiority in solving this problem.
Changan Liu, Changjun Fan, Zhongzhi Zhang
2023-09-09T14:19:00Z
http://arxiv.org/abs/2309.07153v1
# Finding Influencers in Complex Networks: An Effective Deep Reinforcement Learning Approach ###### Abstract Maximizing influences in complex networks is a practically important but computationally challenging task for social network analysis, due to its NP-hard nature. Most current approximation or heuristic methods either require tremendous human design efforts or achieve unsatisfying balances between effectiveness and efficiency. Recent machine learning attempts only focus on speed but lack performance enhancement. In this paper, different from previous attempts, we propose an effective deep reinforcement learning model that achieves superior performances over traditional best influence maximization algorithms. Specifically, we design an _end-to-end_ learning framework that combines graph neural network as the _encoder_ and reinforcement learning as the _decoder,_ named DEIM. Through extensive training on small synthetic graphs, DEIM outperforms the state-of-the-art baseline methods on very large synthetic and real-world networks on solution quality, and we also empirically show its linear scalability with regard to the network size, which demonstrates its superiority in solving this problem. Influence maximization, graph neural networks, deep reinforcement learning, social network ## 1 Introduction Social networks refer to a relatively stable system formed by various interactive relationships among individual members of society. Influence maximization problem is an important issue for social networks analysis, which has wide spread application in practice, including word-of-mouth marketing, crowd mobilization and public opinion monitoring [1, 2]. This problem can be formally described as finding out \(k\) seeds (influencers) to maximize their influences under certain propagation model, e.g., the independent cascade model (IC) or the linear threshold model (LT) [3]. Traditional attempts towards this problem can be categorized into two types [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. First are approximate algorithms [20, 21, 22, 23], which are bounded with theoretical guarantees, and can be applied to medium sized graphs. However the design of these algorithms often requires extensive expert error-and-trials and their scalability often suffers. Second ones are heuristic methods, including random heuristics [3], evolution-based ones [24], e.g., genetic algorithms, centrality-based ones [25, 26] and degree discount heuristics [27]. These methods are often fast and easy to design, and they can usually produce a feasible solution in a short time.They do not, however, offer any theoretical assurances, and the quality of the solutions can be very poor in some circumstances. Note that both approximate and heuristic methods are ad hoc in nature, with little cross-scenario flexibility. To our best knowledge, the advantages of data-driven models such as deep reinforcement learning have not been well exploited for tackling the influence maximization problem. Recently, some works have proposed that reinforcement learning can be used to solve the combinatorial optimization problems on graphs [28, 29, 30]. The intuition behind these works is that the model can be trained in a graph distribution \(\mathcal{D}\), and the solution set for a new graph can be obtained using the trained model. Dai et al. [28] first implemented this idea. 
They applied this idea to solve traditional combinatorial optimization problems such as minimum vertex cover and maximum coverage. However, their method is difficult to apply to large-scale graphs. Later on, this method was improved by Li et al. [30]. Akash et al. [29] were the first to use this idea to solve the influence maximization problem. Although the computation speed is improved, the performance of their algorithm, i.e., the proportion of nodes finally activated by the selected \(k\) seed nodes, is not better than that of the best traditional approximation algorithm, IMM. Moreover, their model is trained via supervised learning; in practice, providing labels for supervised training is time-consuming and laborious, and supervised training limits the ability of their model to obtain higher-quality solutions. Inspired by existing machine learning attempts at solving combinatorial optimization problems, we design DREIM (**D**eep **RE**inforcement learning for **I**nfluence **M**aximization), an _end-to-end_ deep reinforcement learning based influence maximization model. More concretely, DREIM incorporates a graph neural network to represent nodes and graphs, and the Q-learning technique to update the trainable parameters. We train DREIM on a large number of small synthetic graphs, and the learned strategy can be applied to much larger instances, including both synthetic networks and real-world ones. DREIM achieves better solution quality than current state-of-the-art methods; for example, when selecting 100 seed nodes from the Facebook network, DREIM activates 26700 nodes while IMM activates 20180 nodes. Meanwhile, DREIM can also be very efficient through a _batch nodes selection_ strategy. In summary, we make four main contributions: * We formulate the seed node selection process of the influence maximization (IM) problem as a Markov decision process. * We present an _end-to-end_ deep reinforcement learning framework to solve the classical IM problem. We design a novel state representation for reinforcement learning by inducing a virtual node, which can capture system state information more accurately. * We propose a reasonable and effective termination condition for reinforcement learning, leading to the superior generalization capacity of DREIM. * We evaluate DREIM through extensive experiments on large graphs of different sizes. Our results demonstrate that DREIM is effective and efficient, and that it scales linearly on large networks. The remainder of the paper is organized as follows. We systematically review related work in Section 2. After that, we present the preliminaries and problem formalization in Section 3. We then introduce the details of the DREIM architecture in Section 4. Section 5 presents the evaluation of DREIM on both large synthetic graphs and real-world networks. In Section 6, we first intuitively discuss the policies learned by DREIM and then discuss the effect of our novel Q-learning setting. Finally, we conclude the paper in Section 7. ## 2 Related Work ### Influence maximization Domingos and Richardson [31, 32] performed the first study of the influence maximization problem, and Kempe et al. [3] were the first to formulate it as a discrete optimization problem. Since then, influence maximization has been extensively studied [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. The existing solution methodologies can be classified into three categories. **Approximation methods**. Kempe et al.
[3] proposed a hill-climbing greedy algorithm with a \(1-1/e-\epsilon\) approximation rate, which uses tens of thousands of Monte Carlo simulations to obtain the solution set. Leskovec et al. [20] then proposed the cost-effective lazy forward selection (CELF) algorithm. Experimental results show that the execution time of CELF is 700 times faster than that of the greedy algorithm. Borgs et al. [21] proposed an IM sampling method called RIS. A threshold is set to determine how many reverse reachable sets are generated from the network, and the node that covers most of these sets is selected. TIM/TIM+ [22] greatly improves the efficiency of [21], and is the first RIS-based approximation algorithm that achieves efficiency comparable to that of heuristic algorithms. Later, IMM [23] employs the concept of martingales to reduce computation time while retaining TIM's \(1-1/e-\epsilon\) approximation guarantee. According to a benchmark study [33], IMM is known as the state-of-the-art approximation algorithm for solving the IM problem. In addition, Tang et al. [4] proposed OPIM to improve interactivity and flexibility for a better online user experience. The disadvantages of most approximation methods are that they suffer from scalability problems and rely heavily on expert knowledge. **Heuristic methods**. Unlike approximation methods, heuristics do not give any worst-case bound on the influence spread. These methods include the random heuristic [3], which randomly selects seed nodes; the centrality-based heuristic [25, 26], which selects nodes with high centrality; the evolution-based heuristic [24]; and the degree discount heuristic proposed by Chen et al. [27], which has effectiveness similar to the greedy algorithm while improving efficiency. **Machine learning methods**. Akash et al. [29] first leveraged reinforcement learning to solve the influence maximization problem. Their model includes a supervised component that uses the greedy algorithm to generate solutions to supervise the neural network. However, generating the training data is a big challenge and supervised learning often suffers from the over-fitting issue. ### Graph representation learning Graph representation learning (GRL) tries to find \(d\)-dimensional (\(d\ll|\mathcal{V}|\)) dense vectors to capture the graph information. The obtained vectors can be easily fed to downstream machine learning models to solve various tasks like node classification [34], link prediction [35], graph visualization [36] and graph property prediction [37], to name a few. GNNs are among the most popular graph representation learning methods; they adopt a message passing paradigm [38, 39]. Let \(\mathbf{h}_{v}^{(l)}\) denote the embedding vector of node \(v\) at layer \(l\). A typical GNN layer includes two steps: (i) every node aggregates the embedding vectors of its neighbors from layer \(l-1\); (ii) every node updates its embedding by combining its embedding from the last layer with the aggregated neighbor embedding: \[\mathbf{h}_{\mathcal{N}(v)}^{(l)}=\text{AGGREGATE}\left(\left\{\mathbf{h}_{u}^{(l-1)},\forall u\in\mathcal{N}(v)\right\}\right), \tag{1}\] \[\mathbf{h}_{v}^{(l)}=\beta\left(\mathbf{W}^{(l)}\cdot\text{COMBINE}\left(\mathbf{h}_{v}^{(l-1)},\mathbf{h}_{\mathcal{N}(v)}^{(l)}\right)\right), \tag{2}\] where \(\mathcal{N}(\cdot)\) denotes the set of neighboring nodes of a given node, \(\mathbf{W}^{(l)}\) is a trainable weight matrix of the \(l\)-th layer shared by all nodes, and \(\beta\) is an activation function, e.g., ReLU.
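To make the aggregate-and-combine scheme of Eq. (1)-(2) concrete, the following sketch implements one such layer in plain Python/NumPy, using a sum aggregator, concatenation as COMBINE and ReLU as \(\beta\); the function and weight names are illustrative placeholders and not the authors' implementation.

```
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_layer(H, neighbors, W_self, W_neigh):
    """One AGGREGATE/COMBINE step in the spirit of Eq. (1)-(2).

    H         : (N, d) matrix of node embeddings from layer l-1
    neighbors : neighbors[v] is the list of indices in N(v)
    W_self    : (d, d/2) weight applied to a node's own embedding
    W_neigh   : (d, d/2) weight applied to the aggregated neighbor message
    """
    N, d = H.shape
    H_new = np.zeros_like(H)
    for v in range(N):
        # AGGREGATE: sum the neighbors' embeddings from the previous layer
        h_nbr = H[neighbors[v]].sum(axis=0) if neighbors[v] else np.zeros(d)
        # COMBINE: concatenate self and neighbor messages, then apply ReLU
        H_new[v] = relu(np.concatenate([H[v] @ W_self, h_nbr @ W_neigh]))
    # L2-normalize each embedding, as done after every GraphSAGE iteration
    return H_new / (np.linalg.norm(H_new, axis=1, keepdims=True) + 1e-12)
```

Stacking \(L\) such layers (three message-passing iterations are used in DREIM, see Table 4) gives each node a receptive field over its \(L\)-hop neighborhood.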
### Deep reinforcement learning The reinforcement learning framework [40] considers tasks in which the agent interacts with a dynamic environment through a sequence of observations, actions and rewards. Different from supervised learning, the agent in reinforcement learning is never directly told the best action under certain state, but learns by itself to realize whether its previous sequence of actions are right or not only when an episode ends. The goal of reinforcement learning is to select actions in a fashion that maximizes the long-term performance metric. More formally, it uses a deep neural network to approximate the optimal state-action value function \[\begin{split}& Q^{*}(s,a)=\\ &\max_{\pi}\mathbb{E}\left[r_{t}+\gamma r_{t+1}+\gamma^{2}r_{t+2}+ \ldots\mid s_{t}=s,a_{t}=a,\pi\right],\end{split} \tag{3}\] where \(\gamma\) is the discounted factor, \(\pi=P(a|s)\) is the behaviour policy, which means taking action \(a\) at state \(s\). ### Machine learning for combinatorial optimization In a recent survey, Bengio et al. [41] summarize three algorithmic structures for solving combinatorial optimization problems using machine learning: _end to end_ learning that treats the solution generation process as a whole [42, 28, 43], learning to configure algorithms [44, 45], and learning in parallel with optimization algorithms [46, 47]. Dai et al. [28] firstly highlight that it is possible to learn combinatorial algorithms on graphs using deep reinforcement learning. Then, Li et al. [30] and Akash et al. [29] propose improvements from different aspects. Recently, Fan et al. [48] have proposed an deep reinforcement learning framework to solve the key player finding problem on social networks. Most of these algorithms are based on two observations. First, although all kinds of social networks in real life are complex and changeable, the underlying generation models of these networks are unified, such as BA model [49], WS model [50], ER model [51], and powerlaw-cluster model [52], etc. Second, the nodes in the solution set selected by the approximation algorithms should have similar characteristics, such as high betweenness centrality. ## 3 Preliminaries and problem formalization In this section, we first introduce some preliminaries of the influence maximization problem and then give its formalization. ### Preliminaries Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W})\) be a social network, where \(\mathcal{V}\) is a set of nodes (users), \(\mathcal{E}\) is a set of edges (relationships), \(|\mathcal{V}|=N\) and \(|\mathcal{E}|=M\). \((u,v)\in\mathcal{E}\) represents an edge from node \(u\) to node \(v\). Let \(\mathbf{W}\) denote the weight matrix of edges indicating the degree of influence. \(\mathcal{V}_{a}\) denotes the active nodes set. * **Seed node**: A node \(v\in\mathcal{V}\) that is initially activated as the information source of the entire graph \(\mathcal{G}\). The set of seed nodes is denoted by \(\mathcal{S}\), and \(\bar{\mathcal{S}}\) is the complementary set of \(\mathcal{S}\). * **Active node**: A node \(v\in\mathcal{V}\) is regarded as active if it is a seed node (\(v\in\mathcal{S}\)) or it is influenced by previously active node \(u\in\mathcal{V}_{a}\). * **Spread**: The expected proportion of activated nodes after the process of influence propagation terminates, denoted as \(\sigma(\mathcal{S})=\frac{|\mathcal{V}_{a}|}{|\mathcal{V}|}\). * **Linear threshold model**: LT model simulates the common herd mentality phenomenon in social networks. 
In this model, each node \(v\) in a graph has a threshold \(\theta_{v}\). Let \(\mathcal{N}(v)\) be the set of neighbors of node \(v\) and \(\mathcal{N}^{\alpha}(v)\) be the set of activated neighbors of node \(v\). For each node \(u\in\mathcal{N}(v)\), the edge \((u,v)\) has a non-negative weight \(\mathbf{w}(u,v)\leq 1\). Given a graph \(\mathcal{G}\) and a seed set \(\mathcal{S}\), and the threshold for each node, this model first activates the nodes in \(\mathcal{S}\). Then information starts spreading in discrete timestamps following the following rule. An inactive node \(v\) will be activated if \(\sum_{u\in\mathcal{N}^{\alpha}(v)}\mathbf{w}(u,v)\geq\theta_{v}\). The newly activated nodes attempt to activate their neighbors. This process stops when no new nodes are activated. ### Problem formalization The influence maximization problem is formally defined as follows: \[\operatorname{argmax}_{|\mathcal{S}|=k,\mathcal{S}\subseteq\mathcal{V}}\sigma( \mathcal{S}), \tag{4}\] where \(\mathcal{S}\) is the solution set, \(\sigma\) is the spread calculation function and \(k\) is the budget. We set the threshold of each node as a random real number between \(0\sim 1\). And for every node \(v\in\mathcal{V}\), we set the influence weight of its neighbors as \(\frac{1}{|\mathcal{N}_{v}|}\) where \(|\mathcal{N}_{v}|\) denotes the number of neighbors of node \(v\). To our best knowledge, we are the first who utilize deep learning method to tackle the influence maximization problem under LT diffusion model. ## 4 Proposed Model: Dreim In this section, we introduce the proposed model named DREIM. We begin with the overview and then introduce the architecture by parts as well as the training procedure. We also analyze the time complexity of DREIM in the end of this section. ### Overview The proposed deep learning model called DREIM combines graph neural network (GNN) and deep reinforcement learning (DRL) together in an _end-to-end_ manner. As illustrated in Fig. 1, DREIM has two phases: offline training and online inference. In the offline training phase (top), we train DREIM using synthetic graphs drawn from a network distribution \(\mathcal{D}\), like powerlaw-cluster model adopted here. At first we generate a batch of graphs scaling in a range, like \(30\sim 50\). Then we sample one (or a mini-batch) of them as the environment, and let DREIM interact with it. When the interaction process terminates, the experiences in the form of 4-tuple \(\left[S_{i},A_{i},R_{(i,i+n)},S_{(i+n)}\right]\) will be stored in the experience replay buffer with a size of \(500000\). At the same time, the agent is getting more intelligent by performing mini-batch gradient descents over Eq. (6). During the online inference phase (bottom), we applied the well-trained model to large synthetic and real-world networks with sizes scaling from thousands to millions. In order to decrease the computation time, we adopt a _batch nodes selection_ strategy which selects a batch of highest \(Q\)-value nodes at each adaptive step. We use the notion DREIM-X to denote different variants of DREIM, where X denotes the number of nodes selected at each step. DREIM refers to DREIM-1 and DREIM-All means we select k nodes as seed nodes at the very first step. 
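The environment the agent interacts with is the LT diffusion process of Section 3.1, and the spread \(\sigma(\mathcal{S})\) of Eq. (4) is the quantity from which the reward is derived. A minimal simulation of that process, assuming uniform neighbor weights \(1/|\mathcal{N}_{v}|\) and random thresholds as described above, could look as follows; the graph representation and function names are illustrative and not the authors' code.

```
import random

def lt_spread(adj, seeds, thresholds=None):
    """Estimate the spread sigma(S) of a seed set under the LT model.

    adj        : dict mapping node v to the list of its neighbors N(v)
    seeds      : iterable of initially activated seed nodes
    thresholds : optional dict of per-node thresholds; random in (0, 1) if None
    """
    nodes = list(adj)
    if thresholds is None:
        thresholds = {v: random.random() for v in nodes}
    # every neighbor of v contributes weight 1/|N(v)|, as assumed in Sec. 3.2
    weight = {v: (1.0 / len(adj[v]) if adj[v] else 0.0) for v in nodes}

    active = set(seeds)
    newly_active = set(seeds)
    while newly_active:                 # stop when no new node is activated
        newly_active = set()
        for v in nodes:
            if v in active:
                continue
            influence = sum(weight[v] for u in adj[v] if u in active)
            if influence >= thresholds[v]:
                newly_active.add(v)
        active |= newly_active
    return len(active) / len(nodes)     # sigma(S): fraction of active nodes
```

Because the thresholds are random, a large number of such diffusion runs (10000 in the experiments) is averaged to obtain a stable estimate of the spread.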
There are three key parts in DREIM: (i) **Encoding**, which utilizes the GraphSAGE [39] architecture to learn the nodes' representations incorporating their structural information and feature information; (ii) **Decoding**, which generates a scalar \(Q\)-value for each node; (iii) **Greedy selection**, which adopts the \(\epsilon\)-greedy strategy in the training phase and the _batch nodes selection_ strategy in the inference phase, to take actions based on the \(Q\) values of nodes. In what follows, we describe each part in detail. ### Encoder Traditional hand-crafted features, such as node degree centrality, clustering coefficient, etc., can hardly describe the complex nonlinear graph structure. Therefore, we exploit the GraphSAGE [39] architecture to represent the complex structure and node attributes using \(d\)-dimensional dense vectors. Alg. 1 presents the pseudo code of GraphSAGE. The input node attribute vector \(\mathbf{X}_{v}\) should include some raw structural information of a node. In this paper, we utilize a 2-tuple [out-degree, isseed] as the input node attributes, where "out-degree" denotes the summation of outgoing edge weights of node \(v\), and "isseed" denotes whether node \(v\) has already been selected as a seed node, i.e., this feature is set to 1 if node \(v\) is already a seed node and 0 otherwise. For the representation of the entire graph, we use the embedding of a virtual node that has connections to all the nodes in a unique direction. ``` 0:\(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W})\); node attributes \(\mathbf{X}_{v}\in\mathbf{R}^{1\times c},\forall v\in\mathcal{V}\cup\{s\}\); iteration depth \(L\); weight parameters \(\mathbf{W}_{1}\in\mathbf{R}^{c\times d},\mathbf{W}_{2}\in\mathbf{R}^{d\times(d/2)},\mathbf{W}_{3}\in\mathbf{R}^{d\times(d/2)}\) 0: Embedding vector \(\mathbf{z}_{v},\forall v\in\mathcal{V}\cup\{s\}\) 1: Create a virtual node \(s\) which connects all nodes in the graph, denoted as the graph state 2: Initialize \(\mathbf{h}_{v}^{(0)}\leftarrow\operatorname{ReLU}\left(\mathbf{X}_{v}\cdot\mathbf{W}_{1}\right),\mathbf{h}_{v}^{(0)}\leftarrow\mathbf{h}_{v}^{(0)}/\left\|\mathbf{h}_{v}^{(0)}\right\|_{2},\forall v\in\mathcal{V}\cup\{s\}\) 3:for\(l=1\) to \(L\)do 4:for\(v\in\mathcal{V}\cup\{s\}\)do 5:\(\mathbf{h}_{\mathcal{N}(v)}^{(l-1)}\leftarrow\sum_{j\in\mathcal{N}(v)}\mathbf{h}_{j}^{(l-1)}\) 6:\(\mathbf{h}_{v}^{(l)}\leftarrow\operatorname{ReLU}\left(\left[\mathbf{W}_{2}\cdot\mathbf{h}_{v}^{(l-1)},\mathbf{W}_{3}\cdot\mathbf{h}_{\mathcal{N}(v)}^{(l-1)}\right]\right)\) 7:endfor 8:\(\mathbf{h}_{v}^{(l)}\leftarrow\mathbf{h}_{v}^{(l)}/\left\|\mathbf{h}_{v}^{(l)}\right\|_{2},\forall v\in\mathcal{V}\cup\{s\}\) 9:endfor 10:\(\mathbf{z}_{v}\leftarrow\mathbf{h}_{v}^{(L)},\forall v\in\mathcal{V}\cup\{s\}\) ``` **Algorithm 1** Encoding algorithm ### Decoder In the previous step, we obtained the embeddings, in which the virtual node's embedding can be regarded as the state while the other nodes' embeddings can be regarded as the potential actions. In the decoding step, we aim to learn a function that maps the state-action pair \(\left(s,a\right)\) to a scalar value \(Q\left(s,a\right)\). The scalar value indicates the expected maximal reward after taking action \(a\) given state \(s\). In DREIM, the embeddings of the state and actions are fed into a 2-layer MLP. We employ the rectified linear unit (ReLU) as the activation function.
Formally, the decoding process can be defined as follows: \[Q(s,a)=\mathbf{W}_{5}^{\top}\operatorname{ReLU}\left(\mathbf{z}_{a}^{\top}\cdot\mathbf{z}_{s}\cdot\mathbf{W}_{4}\right), \tag{5}\] where \(\mathbf{W}_{4}\in\mathbf{R}^{d\times 1},\mathbf{W}_{5}\in\mathbf{R}^{d\times 1}\) are weight parameters between the two neural network layers, and \(\mathbf{z}_{s}\) and \(\mathbf{z}_{a}\in\mathbf{R}^{1\times d}\) are the output embeddings for the state and action, respectively. We define the elements of \(Q\)-learning as follows: * **State**: We create a virtual node to represent the state \(s\). \(s\) updates its embedding in the same way as the other nodes do, i.e., via message aggregation and combination. * **Action**: An action is the process of adding a node \(v\notin\mathcal{S}\) to the solution set \(\mathcal{S}\). * **Transition**: When a node \(v\) is selected to join the solution set \(\mathcal{S}\), its "isseed" attribute changes from 0 to 1. * **Reward**: After the agent takes an action, it receives feedback from the environment, which represents reward or punishment. In DREIM, we use the negative inactive rate \(-\frac{|\mathcal{V}|-|\mathcal{V}_{a}|}{|\mathcal{V}|}\), i.e., \(\sigma-1\), as the reward after adding a node into \(\mathcal{S}\). * **Policy**: The policy is the rule the agent obeys to pick the next action. In this work, we adopt different policies during training and inference, which we introduce concretely in Section 4.4. * **Termination**: The interaction process terminates when all nodes are activated. We choose this termination condition to improve DREIM's generalization ability. Previous works set the termination condition as \(|\mathcal{S}|=k\) [29, 53]; we argue this will hinder the model's generalization ability. For example, when the budget \(k=10\), there will be 90 nodes left for a graph with 100 nodes, but 990 nodes left for a graph with 1000 nodes, which will confuse the agent, since the terminal state is totally different for graphs of different sizes. Figure 1: The pipeline of DREIM as a combination of the graph embedding and DQN process. The top half of the figure represents the training phase, and the bottom half is the testing phase. The colour bar denotes each node's embedding obtained after the encoding process and the green bar shows the Q-value of each node after the 2-layer MLP decoding process. Orange nodes indicate that they have been selected as seed nodes, and black nodes are not selected by DREIM. Figure 2: Training analysis of DREIM. (a). Remaining inactive rate decreases as the number of seed nodes increases. \(k^{*}\) seed nodes are selected to activate all the nodes in a network. (b). The training process converges fast as measured by validation quality, i.e., the area under the inactive rate curve. ### Greedy Selection Based on the above two steps, we adopt a greedy selection process to take actions based on the \(Q\) values of nodes. In the training phase, since the model has not yet been trained well, we adopt the \(\epsilon\)-greedy strategy to balance exploration and exploitation. In the inference phase, since DREIM has already been trained well, we only exploit the learned \(Q\) function and take the action with the highest \(Q\) value. In order to speed up the solution set generation process, we also adopt a _batch nodes selection_ strategy, which takes the top-\(k\) nodes with the highest \(Q\)-value at each adaptive step.
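A minimal sketch of how the decoder of Eq. (5) scores candidate nodes and how the _batch nodes selection_ step then picks a batch of them is given below; the array shapes follow Eq. (5), but all names are illustrative placeholders rather than the released implementation.

```
import numpy as np

def q_values(Z, z_state, W4, W5):
    """Score every candidate node with Eq. (5): Q(s,a) = W5^T ReLU(z_a^T z_s W4).

    Z       : (N, d) action (node) embeddings from the encoder
    z_state : (1, d) embedding of the virtual state node
    W4, W5  : (d, 1) weights of the 2-layer MLP decoder
    """
    q = np.empty(len(Z))
    for a, z_a in enumerate(Z):
        # z_a^T z_s is a (d, d) interaction matrix; W4 projects it to (d, 1)
        hidden = np.maximum(z_a[:, None] @ z_state @ W4, 0.0)  # ReLU
        q[a] = (W5.T @ hidden).item()
    return q

def batch_select(q, candidates, batch_size):
    """Batch nodes selection: return the top-`batch_size` highest-Q candidates."""
    return sorted(candidates, key=lambda v: q[v], reverse=True)[:batch_size]
```

During training, the pure arg-max is replaced by an \(\epsilon\)-greedy choice, i.e., with probability \(\epsilon\) a random unselected node is taken instead of the top-scoring one, as in Algorithm 2 below.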
Extensive analysis shows that this strategy can bring a significant speed increase without compromising much on solution quality. ``` 0: number of episodes \(e\), replay buffer size \(b\), 1: Initialize DQN \(\Theta\) and target DQN \(\hat{\Theta}\) 2: Initialize \(n\)-step experience replay buffer \(B\) 3:for episode =1 to \(e\)do 4: Draw random graphs from distribution \(\mathcal{D}\), like the powerlaw-cluster model 5: Initialize the solution set \(\mathcal{S}\) as an empty set \(\emptyset\) 6:for t=1 to Termination do 7: Compute embeddings and \(Q\) values 8:\(c\leftarrow\) random number between 0 and 1 9:\(v_{t}=\left\{\begin{array}{ll}\text{random node }v\in\bar{\mathcal{S}},&c<\varepsilon\\ \text{argmax}_{v\in\bar{\mathcal{S}}}\,Q(v,\mathcal{S};\Theta),&c\geq\varepsilon\end{array}\right.\) 10:\(\mathcal{S}_{t+1}=\mathcal{S}_{t}\cup v_{t}\), get reward \(r_{t}\) 11:if\(t\geq n\)then 12: Add tuple \((s_{t-n},a_{t-n},r_{t-n,t},s_{t})\) to \(B\) 13: Sample a random batch of experiences from \(B\) 14: Perform stochastic gradient descent for \(\Theta\) 15: Every \(C\) steps reset \(\hat{\Theta}=\Theta\) 16:endif 17:endfor 18:endfor ``` **Algorithm 2** Training algorithm ### Training and inference During training, there are two sets of parameters to be learned, namely \(\Theta_{\text{E}}=\{\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{W}_{3}\}\) and \(\Theta_{\text{D}}=\{\mathbf{W}_{4},\mathbf{W}_{5}\}\). Thus, the total set of trainable parameters can be denoted as \(\Theta=\{\Theta_{\text{E}},\Theta_{\text{D}}\}\). To avoid the inefficiency of the Monte Carlo method [40], we adopt the Temporal Difference method [54], which leads to faster convergence. To better estimate the future reward, we utilize \(n\)-step \(Q\)-learning [40]. We use an experience replay buffer to increase the training data diversity. We not only consider the \(Q\)-learning loss, as other works do [29, 53], but also a graph reconstruction loss to enhance topology information preservation. Alg. 2 presents the whole training process of DREIM. The loss function can be formalized as follows: \[\begin{split}\text{Loss}\left(\Theta\right)=&\left(y-Q\left(s_{t},a_{t};\Theta_{Q}\right)\right)^{2}+\\ &\alpha\sum_{i=1}^{N}\sum_{j=1}^{N}s_{i,j}\left\|z_{i}-z_{j}\right\|_{2}^{2},\end{split} \tag{6}\] where \(y=r_{t,t+n}+\gamma\max_{a^{\prime}}\hat{Q}\left(s_{t+n},a^{\prime};\hat{\Theta}_{Q}\right)\), \(\Theta_{Q}=\{\Theta_{\text{E}},\Theta_{\text{D}}\}\), and \(\hat{\Theta}_{Q}\) denotes the parameters of the target network, which are only updated to \(\Theta_{Q}\) every \(C\) iterations. The embeddings \(z_{i}\) in the reconstruction term depend on the encoder parameters \(\Theta_{\text{E}}\). \(\alpha\) is a hyper-parameter to balance the two losses. \(\gamma\) is the discounting factor. We validate DREIM using the metric _Return_: \[Return\left(v_{1},v_{2},\ldots,v_{k^{*}}\right)=\frac{1}{N}\sum_{k=1}^{k^{*}}\left(1-\sigma\left(\{v_{1},v_{2},\ldots,v_{k}\}\right)\right), \tag{7}\] where \(k^{*}\) is the number of seed nodes needed to activate all the nodes in a network. _Return_ can be regarded approximately as the area under the inactive rate curve, as shown in Fig. 2 (a). In the inference phase, we adopt the _batch nodes selection_ strategy, i.e., instead of iteratively selecting nodes one by one and recomputing the embeddings and \(Q\) values, we pick a batch of highest-\(Q\) nodes at each adaptive step. ### Complexity analysis The complexity of DREIM consists of two processes, i.e., the training process and the inference process.
**Training complexity.** The complexity of the training process depends on the number of training iterations, which is hard to analyze theoretically. Experimental results show that DREIM converges fast, which indicates that we do not need many iterations for training. For example, as shown in Fig. 2 (b), when we train DREIM on synthetic graphs with a scale range of \(30\sim 50\), DREIM converges at around iteration 60000, meaning that the training procedure is not very time-consuming. **Inference complexity.** The inference complexity is determined by three parts, encoding, decoding and node selection, with complexities of \(O(M)\), \(O(N)\), and \(O(N\text{log}N)\) respectively, which results in a total complexity of \(O(M+N+N\text{log}N)\). Since most real-world social networks are sparse, DREIM has linear scalability with respect to the network size. Note that DREIM is once-training multi-testing, i.e., the training phase is performed only once, and the trained model can be used for any input network in the inference phase. Figure 3: Wall-clock running time analysis. (a). Linear scalability of DREIM. (b). For the Facebook network with budget \(k=50\), the wall-clock running time decreases quickly as the batch size increases. ## 5 Experiment We train and validate DREIM using synthetic graphs generated by the powerlaw-cluster model, as it captures two important features of the vast majority of real-world networks, i.e., the small-world phenomenon [50] and the power-law degree distribution [49]. We test DREIM on both large synthetic and real-world social networks of different scales. To the best of our knowledge, this is the first work that considers learning for the IM problem under the LT diffusion model; moreover, DREIM exceeds the spread quality of all the baselines. ### Experimental setup #### 5.1.1 Baselines We compare the effectiveness and efficiency of DREIM with other baselines. IMM is the state-of-the-art approximation method according to a benchmarking study [33]. The previous work GCOMB [29] tries to solve the IM problem using a reinforcement learning method and obtains performance similar to IMM. We therefore compare DREIM with IMM and GCOMB. We use the code of IMM and GCOMB shared by the authors. For IMM, we set \(\epsilon=0.5\) throughout the experiments as suggested by the authors. For GCOMB, to make it comparable, we change the diffusion model from IC to LT. We train and validate GCOMB on subgraphs sampled from Youtube by randomly selecting 30% of its edges, and we test GCOMB on the remaining subgraph of Youtube and the entire graphs of the other networks. To reduce the randomness of the LT diffusion model, we ran 10000 diffusion processes and report the average value as the final spread. #### 5.1.2 Datasets We adopt the powerlaw-cluster model to generate synthetic networks with different scales. Table 2 summarizes the basic statistics of the real-world networks [55] and Fig. 4 illustrates their degree distributions. #### 5.1.3 Evaluation metrics To evaluate DREIM quantitatively, we consider the following three aspects to report DREIM's effectiveness and efficiency. **Spread quality.** We adopt the metric _active rate_, i.e., the proportion of active nodes to total nodes under a specific seed node budget \(k\), to compare the spread quality with the baselines. **Scalability.** As analyzed before, the inference complexity of DREIM is \(O(kM)\), where \(k\) is the given budget and \(M\) is the number of edges. We test DREIM on large networks scaling up to millions of nodes. We can see in Fig.
5 that DREIM can effectively solve the problem of networks size up to 4.85 million. **Time complexity.** We theoretically compare the time complexity of DREIM and all the baselines. ### Results on synthetic graphs We report the active rate under different budget \(k\) of DREIM and other baselines on synthetic networks scaling from 10000 to 500000 generated by the powerlaw-cluster model. Since the network generation process is stochastic, we generate 100 networks for each scale and report the mean and standard deviation. As shown in Table 1, DREIM achieves superior performance over both IMM and GCOMB in terms of active \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Scale} & \multicolumn{3}{c|}{\(k=50\)} & \multicolumn{3}{c|}{\(k=100\)} & \multicolumn{3}{c|}{\(k=200\)} \\ \cline{2-9} & IMM & GCOMB & DREIM & IMM & GCOMB & DREIM & IMM & GCOMB & DREIM \\ \hline 10000 & \(39.84\pm 0.39\) & \(39.47\pm 0.61\) & \(\mathbf{40.89\pm 0.42}\) & \(30.37\pm 0.41\) & \(29.43\pm 0.25\) & \(\mathbf{30.41\pm 0.32}\) & \(60.14\pm 0.44\) & \(58.12\pm 0.12\) & \(\mathbf{60.29\pm 0.21}\) \\ \hline 20000 & \(30.37\pm 0.41\) & \(29.43\pm 0.25\) & \(\mathbf{30.41\pm 0.32}\) & \(39.28\pm 0.41\) & \(38.69\pm 0.17\) & \(\mathbf{40.59\pm 0.48}\) & \(49.36\pm 0.37\) & \(48.36\pm 0.41\) & \(\mathbf{49.98\pm 0.41}\) \\ \hline 50000 & \(21.96\pm 0.35\) & \(20.61\pm 0.34\) & \(\mathbf{21.98\pm 0.21}\) & \(29.24\pm 0.32\) & \(28.55\pm 0.39\) & \(\mathbf{30.43\pm 0.11}\) & \(37.55\pm 0.28\) & \(37.33\pm 0.46\) & \(\mathbf{37.86\pm 0.16}\) \\ \hline 100000 & \(16.50\pm 0.25\) & \(16.39\pm 0.38\) & \(\mathbf{17.56\pm 0.37}\) & \(22.22\pm 0.28\) & \(21.99\pm 0.16\) & \(\mathbf{23.72\pm 0.15}\) & \(29.14\pm 0.24\) & \(28.81\pm 0.16\) & \(\mathbf{29.89\pm 0.40}\) \\ \hline 50000 & \(8.74\pm 0.11\) & \(8.75\pm 0.46\) & \(\mathbf{9.22\pm 0.68}\) & \(12.03\pm 0.14\) & \(12.16\pm 0.24\) & \(\mathbf{12.53\pm 0.18}\) & \(16.08\pm 0.12\) & \(16.48\pm 0.19\) & \(\mathbf{17.01\pm 0.34}\) \\ \hline \end{tabular} \end{table} Table 1: Active rate comparison of different methods on synthetic networks for \(k=50\), \(k=100\), \(k=200\) (%). All the results are the average value for 100 networks of the same scale. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Network & \# Nodes & \# Edges & \(\gamma\) & Avg.Degree \\ \hline Digg & 29.6K & 84.8K & 2.79 & 5.72 \\ Facebook & 63.3K & 816.8K & 2.43 & 25.77 \\ Youtube & 1.13M & 2.99M & 2.14 & 5.27 \\ LiveJournal & 4.85M & 69M & 2.43 & 6.5 \\ \hline \hline \end{tabular} \end{table} Table 2: Basic statistics of real-world networks. \(\gamma\) denotes the exponent of the power-law degree distribution. Figure 4: Degree distribution of real-world networks. rate. One important observation is that even we train DREIM on small graphs with a scale range of \(30\sim 50\), it still performs well on networks with several orders of magnitude larger scales. This is because DREIM uses the inductive graph embedding method whose parameters are independent of the network size, and for reinforcement learning, it adopts the uniform termination condition for networks with different sizes. Another observation is that the _batch nodes selection_ strategy can bring higher solution quality. For example, as shown in Table 3, when the network scale is 10000 and 20000, DREIM-10 even obtains a higher active rate than DREIM-1. And when the network size is 500000, DREIM-All performs better than DREIM-1. 
This phenomenon indicates that our _batch nodes selection_ strategy can not only decrease the time complexity, but also enhance the effectiveness of DREIM. From Fig. 3 (a) we can also see that for budget \(k=30\), the running time increases linearly with regard to the network size. ### Results on real-world networks In the last section, we saw that DREIM performs well on large synthetic graphs generated by the same model it is trained on. In this section, we test whether DREIM can still perform well on large real-world networks. In Fig. 5 we plot the active rate curve using different methods under different budgets \(k\) for every network we test. We can see that DREIM always surpasses all the baselines in terms of active rate for all the experimental settings, i.e., different networks and budgets. For example, when initially activating 50 nodes of Youtube, DREIM finally activates 16.3% of the total nodes while IMM and GCOMB activate 16.1% and 15.4% of the total nodes, respectively. In Fig. 3 (b), we plot the wall-clock running time curve for the Facebook network with \(k=50\) using different batch sizes. We can see that DREIM-10 takes about one tenth of the time of DREIM-1 and meanwhile, as shown in Fig. 6, activates 24.5% of the total nodes, clearly higher than the 23.2% and 23.0% obtained by IMM and GCOMB, meaning that the running time can be dramatically reduced by exploiting the _batch nodes selection_ strategy while the effectiveness of DREIM is not greatly affected. Figure 5: Active rate on real-world networks under different budget \(k\). ### Comparison of time complexity As analyzed before, the time complexity of DREIM is \(O(M+N+N\text{log}N)\); for sparse real-world networks, DREIM thus has a near-linear time complexity of \(O(kM)\) when the budget is \(k\). The empirical results in Fig. 3 (a) also support our theoretical analysis. For IMM, as reported by the authors, its time complexity is \(O\left((k+\ell)(n+m)\log n/\varepsilon^{2}\right)\), i.e., IMM also has near-linear scalability with regard to the network size. For GCOMB, its time complexity is \(O\left(|V|+|V^{9.8}|\left(dm_{G}+m_{G}^{2}\right)+|V^{g}|\,b\left(d+m_{Q}\right)\right)\); see [29] for details. DREIM has a comparatively lower time complexity among all the methods. Moreover, the time complexity can be further reduced through the _batch nodes selection_ strategy to \(O((k/b)M)\), where \(b\) is the batch size, as supported by the results in Fig. 3 (b). ### Other settings All experiments are performed on a 32-core server with 64GB memory. We conduct the validation process every 300 iterations and conduct the _play game process_ every 10 iterations. For the \(n\)-step \(Q\)-learning, we set \(n=5\), and 64 experiences are sampled uniformly at random from the experience replay buffer. We implement the model using TensorFlow and use the Adam optimizer, and the
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Scale} & \multicolumn{3}{c|}{\(k=50\)} & \multicolumn{3}{c|}{\(k=100\)} & \multicolumn{3}{c|}{\(k=200\)} \\ \cline{2-10} & DREIM-1 & DREIM-10 & DREIM-All & DREIM-1 & DREIM-10 & DREIM-All & DREIM-1 & DREIM-10 & DREIM-All \\ \hline 10000 & \(40.89\pm 0.42\) & \(\mathbf{41.70\pm 0.62}\) & \(38.72\pm 0.67\) & \(30.41\pm 0.32\) & \(\mathbf{34.19\pm 0.21}\) & \(29.62\pm 0.18\) & \(\mathbf{60.29\pm 0.21}\) & \(59.16\pm 0.41\) & \(60.20\pm 0.62\) \\ \hline 20000 & \(30.41\pm 0.32\) & \(\mathbf{34.19\pm 0.21}\) & \(29.62\pm 0.18\) & \(\mathbf{40.50\pm 0.48}\) & \(38.73\pm 0.73\) & \(39.25\pm 0.42\) & \(\mathbf{49.98\pm 0.41}\) & \(49.76\pm 0.29\) & \(47.71\pm 0.32\) \\ \hline 50000 & \(\mathbf{21.98\pm 0.21}\) & \(20.25\pm 0.56\) & \(21.28\pm 0.53\) & \(\mathbf{30.43\pm 0.11}\) & \(29.01\pm 0.65\) & \(28.99\pm 0.28\) & \(37.86\pm 0.16\) & \(\mathbf{37.94\pm 0.36}\) & \(36.63\pm 0.26\) \\ \hline 100000 & \(\mathbf{17.56\pm 0.37}\) & \(16.76\pm 0.47\) & \(16.70\pm 0.37\) & \(\mathbf{23.72\pm 0.15}\) & \(21.03\pm 0.43\) & \(21.86\pm 0.68\) & \(\mathbf{29.89\pm 0.40}\) & \(29.39\pm 0.76\) & \(28.68\pm 0.47\) \\ \hline 500000 & \(9.22\pm 0.68\) & \(9.12\pm 0.37\) & \(\mathbf{10.38\pm 0.25}\) & \(12.53\pm 0.18\) & \(\mathbf{12.75\pm 0.24}\) & \(12.57\pm 0.39\) & \(\mathbf{17.01\pm 0.34}\) & \(15.95\pm 0.61\) & \(16.46\pm 0.39\) \\ \hline \end{tabular} \end{table} Table 3: Active rate comparison of different DREIM variants, i.e., using distinct batch sizes on synthetic networks for \(k=50\), \(k=100\), \(k=200\) (%). All the results are the average value for 100 networks of the same scale. hyper-parameters are summarized in Table 4. To reduce the randomness of the diffusion process, we run a large number (e.g., 10000) of diffusion process and use the average value as the final spread value. ## 6 Discussion In this section, we try to interpret what heuristics DREIM has learned. We further discuss the effect of different termination conditions on DREIM's performance. ### Policy analysis of DREIM In this section, we try to interpret DREIM's success through two observations. Firstly, the agent learns to minimize the _Return_. To make it clear, we draw the inactive rate curve, as shown in Fig. 2 (a), where the _Return_ can be approximately computed by the area under the curve. Note that minimizing the _Return_ guides the agent to become more and more intelligent in two folds. One is that it tries to select seed nodes as few as possible to activate all the nodes, and the other one is that it prefers to select nodes that bring more decrease on inactive rate at each step. These two aspects coordinate together to guide the agent to find better influencers. Secondly, in terms of the seed nodes, DREIM tends to find the influencers which have a balance of connectivity based centrality, e.g., degree centrality and distance based centrality, e.g. betweenness centrality. ### Different termination conditions In what follows, we discuss the influences of using different termination conditions of the interaction process for training and inference. During the training phase, the interaction process ends when all nodes are activated, which is a uniform state for networks of different scales while during the inference phase, we use the _batch nodes selection_ strategy, and terminates when \(|\mathcal{S}|=k\), where \(k\) is the budget. Previous works like [29, 53] adopt \(|\mathcal{S}|=k\) as the termination condition for both training and inference. 
We argue this setting will restrict DREIM's generalization ability because the termination state can be very different for networks of different scales. One concern is that using different termination conditions for training and inference may seem unreasonable, since traditionally researchers usually apply their model to similar problem instances. However, the empirical results in Table 1 and Fig. 5 do show that although we use different termination conditions for training and inference, we still obtain better spread quality for all budgets. We leave exploring explanations for this phenomenon as a future direction. \begin{table} \begin{tabular}{l l l} \hline \hline Hyper-parameter & Value & Description \\ \hline learning rate & \(1\times 10^{-4}\) & the learning rate used by Adam optimizer \\ embedding dimension & 64 & dimension of node embedding vector \\ maximum episodes & \(1\times 10^{6}\) & maximum episodes for the training process \\ layer iterations & 3 & number of message-passing iterations \\ reconstruction loss weight & \(1\times 10^{-3}\) & weight of graph reconstruction loss \\ \hline \hline \end{tabular} \end{table} Table 4: Hyper-parameter values. Figure 6: Active rate on real-world networks under different budget \(k\) using distinct batch sizes. ## 7 Conclusion In this paper, we formalize the influence maximization problem as a Markov decision process, and we design a novel reinforcement learning framework, DREIM, to tackle this problem. To the best of our knowledge, DREIM is the first work that obtains significantly superior spread quality over the traditional state-of-the-art method IMM. We combine graph neural networks and deep reinforcement learning, treating the embedding of the virtual node and the embeddings of the real nodes as the state and actions of the reinforcement learning agent. By exploiting a _batch nodes selection_ strategy, the computation time is greatly reduced without compromising much on solution quality. We use synthetic graphs generated by the powerlaw-cluster model to train DREIM and test it on several large synthetic and real-world social networks with sizes ranging from thousands to millions of nodes. The empirical results show that DREIM exceeds the performance of all the baselines. For future work, one possible direction is to seek theoretical guarantees. Another important direction is to design more powerful and interpretable graph representation learning methods to better preserve graph information. ## Funding This work was supported in part by the National Natural Science Foundation of China (Nos. 61803248, 61872093, and U20B2051), the National Key R & D Program of China (No. 2018YFB1305104), Shanghai Municipal Science and Technology Major Project (Nos. 2018SHZDZX01 and 2021SHZDZX03), ZJ Lab, and Shanghai Center for Brain Science and Brain-Inspired Technology. ## Data Availability Statement The data underlying this article are available in SNAP, at [http://snap.stanford.edu/data](http://snap.stanford.edu/data).
2309.17340
Outage-Watch: Early Prediction of Outages using Extreme Event Regularizer
Cloud services are omnipresent and critical cloud service failure is a fact of life. In order to retain customers and prevent revenue loss, it is important to provide high reliability guarantees for these services. One way to do this is by predicting outages in advance, which can help in reducing the severity as well as time to recovery. It is difficult to forecast critical failures due to the rarity of these events. Moreover, critical failures are ill-defined in terms of observable data. Our proposed method, Outage-Watch, defines critical service outages as deteriorations in the Quality of Service (QoS) captured by a set of metrics. Outage-Watch detects such outages in advance by using current system state to predict whether the QoS metrics will cross a threshold and initiate an extreme event. A mixture of Gaussian is used to model the distribution of the QoS metrics for flexibility and an extreme event regularizer helps in improving learning in tail of the distribution. An outage is predicted if the probability of any one of the QoS metrics crossing threshold changes significantly. Our evaluation on a real-world SaaS company dataset shows that Outage-Watch significantly outperforms traditional methods with an average AUC of 0.98. Additionally, Outage-Watch detects all the outages exhibiting a change in service metrics and reduces the Mean Time To Detection (MTTD) of outages by up to 88% when deployed in an enterprise cloud-service system, demonstrating efficacy of our proposed method.
Shubham Agarwal, Sarthak Chakraborty, Shaddy Garg, Sumit Bisht, Chahat Jain, Ashritha Gonuguntla, Shiv Saini
2023-09-29T15:48:40Z
http://arxiv.org/abs/2309.17340v2
# Outage-Watch: Early Prediction of Outages using Extreme Event Regularizer ###### Abstract. Cloud services are omnipresent and critical cloud service failure is a fact of life. In order to retain customers and prevent revenue loss, it is important to provide high reliability guarantees for these services. One way to do this is by predicting outages in advance, which can help in reducing the severity as well as time to recovery. It is difficult to forecast critical failures due to the rarity of these events. Moreover, critical failures are ill-defined in terms of observable data. Our proposed method, Outage-Watch, defines critical service outages as deteriorations in the Quality of Service (QoS) captured by a set of metrics. Outage-Watch detects such outages in advance by using current system state to predict whether the QoS metrics will cross a threshold and initiate an extreme event. A mixture of Gaussian is used to model the distribution of the QoS metrics for flexibility and an extreme event regularizer helps in improving learning in tail of the distribution. An outage is predicted if the probability of any one of the QoS metrics crossing threshold changes significantly. Our evaluation on a real-world SaaS company dataset shows that Outage-Watch significantly outperforms traditional methods with an average AUC of 0.98. Additionally, Outage-Watch detects all the outages exhibiting a change in service metrics and reduces the Mean Time To Detection (MTTD) of outages by up to 88% when deployed in an enterprise cloud-service system, demonstrating efficacy of our proposed method. Outage Forecasting, System reliability and monitoring, Distribution Learning, Mixture Density Network
## 1. Introduction Cloud service reliability is critical for business success, as outages can severely impact QoS metrics (resource availability, latency, etc.), resulting in compromised system availability and a poor user experience. Several monitoring and alerting tools (refer §3) are employed to monitor and ensure the performance of cloud services. Automating system troubleshooting has been found to improve reliability, efficiency, and agility for enterprises [25, 31, 57]. Despite these efforts, cloud systems still experience incidents and outages [12, 15, 38]. Timely detection and remediation of outages is essential for reducing system downtime. However, a reactive approach to incident detection is often used in practice, hindering effective outage management [23, 50, 83]. If outages could be predicted well in advance, the time to detect them could be reduced significantly. Consider the real-world scenario in Figure 1 showing the timeline of an outage caused by a flawed configuration change in a Storage service. In this scenario, a 3:54 am (A) failure sparked a sequence of problems, including SQL errors at 4:10 am and an increase in latency that started affecting the QoS at 4:18 am (B). Alerts were triggered at 5:08 am when latency exceeded pre-defined thresholds. It took nearly 55 minutes (from 4:18 am (B)) to realize it was a cross-service issue and declare an outage at 5:12 am (C). An experienced Site Reliability Engineer (SRE) [14, 39] was engaged to mitigate the issue, which was resolved at 6:15 am (D) with all services back to normal. Here, the flawed change impacted several SQL databases and spread to other services. The current reactive approach relying on alerts showed a significant delay in detecting the outage, as seen by the ramp-up in underlying metrics affecting QoS between 4:18 am (B) and 5:12 am (C). This example highlights the potential to predict a substantial fraction of outages in advance by utilizing the information available during the ramp-up phase. In consideration of the strict downtime constraints, with only 500 to 50 minutes of allowable downtime per year corresponding to uptime guarantees of 99.9% and 99.99% respectively, the early detection of outages, even minutes in advance, can result in significant benefits. The objective of this paper is to present a comprehensive solution aimed at reducing the mean time to detection (MTTD) through early detection of outages. Outages manifest in two major ways: (i) as degradations in QoS and other metrics, or (ii) as incidents detected only through user reports that do not manifest in observable metrics. The first type of outages, accounting for 50-70% of the incidents observed in our data (see §6), exhibit characteristics that allow for prediction. However, outage prediction in cloud systems is a complex task due to the vast number of interdependent metrics. SREs, who traditionally detect outages using rule-based alerts, often have only a limited view of the overall system, leading to difficulties in quickly and accurately identifying issues. Such approaches rely on human knowledge and are insufficient for large-scale production cloud systems, which have a vast number of complex and ever-changing rules. Our interviews with engineers from various service teams revealed that detection could take hours, particularly in cases where there are multiple concurrent alerts.
This highlights the importance of developing a more efficient method for predicting outages in cloud systems. Previous works [23, 45, 50] on failure prediction through runtime monitoring, which require a substantial amount of data from the faulty state of the system, are not applicable in this scenario: outages are rare events [3], so data from the faulty state is scarce. In addition, using alerts to detect outages takes a toll on MTTD since they fire only after a significant ramp-up in the metrics has already occurred. Failure detection literature from other domains [13, 58] is not extensible to our case since the nature and quantity of failures in an enterprise service are very different. Our scenario has very few outages, and directly extending those works fails. In this work, we propose a novel system (Outage-Watch) for predicting outages in cloud services to enhance early detection. We define outages as extreme events where deterioration in the QoS, captured by a set of metrics, goes beyond control. Outage-Watch models the variations of QoS metrics as a mixture of Gaussians to predict their distribution. We also introduce a classifier that is trained in a multi-task setting with extreme value loss to learn the distribution better at the tail, thus acting as a regularizer [67]. Outage-Watch predicts an outage if there is a significant change in the probability of the QoS metrics exceeding the threshold. Our evaluation on real-world data from an enterprise system shows significant improvement over traditional methods, with an average AUC of 0.98. Furthermore, we deployed Outage-Watch in a cloud system to predict outages, which resulted in 100% recall and reduced MTTD by up to 88% (a 20-60 minute reduction). Our major contributions can be summarized as follows:
1. We propose a novel approach, Outage-Watch, to predict outages in advance, which manifest as large deteriorations in a chosen set of metrics (QoS) reflecting customer experience degradation. Outage-Watch works even in the absence of actual outages in training data.
2. Outage-Watch generates the probability of a metric crossing any threshold, making the threshold flexible to define, unlike classification approaches. It predicts the distribution of QoS metric values in the future given the current system state, and improves learning of the tail distribution via extreme value loss to capture outages before they happen.
3. An evaluation of the approach on real service data shows an improvement of \(7-15\%\) over the baselines in terms of AUC, while its deployment in a real setting was able to predict all the outages that exhibited any change in the observable metrics, thus reducing the MTTD.
Figure 1. Illustration of the life-cycle of an outage. A refers to the point when the root cause of a fault occurred, and B represents the time when it started affecting the performance metrics. When the metrics crossed their respective thresholds, alerts fired, which led to an outage being declared at C. The time between C and D is when the engineers diagnose and resolve the issue. The plots below show the variation in the root cause metric and the QoS metric at these times.
The rest of the paper is organized as follows. We briefly discuss related work in Section 2, followed by the background and problem formulation in Section 3. In Sections 4 and 5 we outline the motivation and describe Outage-Watch. With Section 7 analyzing its performance, we conclude in Section 8. ## 2.
Related Work Service reliability has been a well-researched area in both academia and industry (Kal ### Problem Formulation We now formally define our problem statement. Metrics \(\mathcal{M}_{tot}\) are continuously monitored in the system, essentially forming a time series. The task is to understand the trends in some or all of the metrics in \(\mathcal{M}_{tot}\) and predict an impending outage, with the goal of predicting it as early as possible. It is obvious that a change in the metrics will show up only when a fault has occurred. Thus, the goal of an outage forecasting solution is to minimize the lag time between the actual occurrence of the fault and its identification as an extreme event, while also minimizing false positive cases. With \(t\) as the wall-clock time, the input to the outage forecasting module is a set of relevant metrics (see SS5.1.1) from \([t-\mathrm{w},t]\) where \(w\) is the window length. More details on the pre-processing of metrics is elucidated in SS5.1.2. With ground-truth labels generated based on the occurrence of extreme events, the supervised ML model Outage-Watch aims to forecast an outage by learning the distribution of the relevant metrics at a certain time in the future. A distribution of metrics is essentially a probability density function of the metric values at a certain time. ## 4. Solution Motivation In this section, we present the rationale behind our solution design and provide a concise overview of how it functions. ### Design Motivation As discussed in SS1, outage prediction models should aim to predict the probability of an outage in advance to ensure timely recovery during a fault. One way to achieve this is to monitor the system metrics for deviations from their regular trend, as these deviations are often indicative of an outage. However, the current system monitoring tools often fail to detect deviations until they surpass a specific threshold and activate an alert, provided that an alert has been defined for those system metrics. However, proactive monitoring of system metrics allows for earlier and more efficient identification of outages. This motivates the design of our proposed approach, Outage-Watch. We have observed from the data (Figure 2) that during an outage, multiple metrics (Ash distribution learnt, Outage-Watch takes this as an indication of an outage. A thresholding mechanism is employed on the probability value to predict an outage. ## 5. Outage-Watch In SS4.2, we have discussed the overall architecture of Outage-Watch and talked about its two main components briefly. We shall now delve into the details of each component. ### Metric Processing #### 5.1.1. Metric Selection and Quality of Service (QoS) Metrics (Fig. 3[A]) The monitoring tools collect a large set of service metrics \(\mathcal{M}_{tot}\) for a system. However, many such metrics recorded by these tools are often never used by the SREs (Bordes and Riedler, 2017). Also, storage and handling of metrics data is non-scalable and gets expensive over time. Consequently, we derived a condensed subset of metrics, denoted as \(\mathcal{M}\). In our specific scenario, we filtered down the number of metrics from \(\sim\)2000 in \(\mathcal{M}_{tot}\)(Bordes and Riedler, 2017) to 42 using a step-wise procedure. We employed established techniques (Zhu et al., 2017) for feature selection process. Firstly, features were filtered using correlation analysis and rank coefficient tests. 
Then, time series features that were constant throughout the time series or exhibited low variance were omitted due to their limited informational value. To refine our feature set further, we incorporated domain-specific knowledge: retaining only those metrics that either trigger alerts or have been emphasized in previous outage analysis reports that are generated post-identification and mitigation of outages by engineers. This process yielded a focused feature set well-suited for effective service monitoring and analysis. However, only a fraction of \(\mathcal{M}\) directly reflects the service quality as perceived by the customer, for example, latency of a service, number of service failure errors, resource availability, etc. These metrics, known as _Quality of Service (QoS)_ metrics \(\mathcal{M}_{QoS}\) or the golden metrics (Kang et al., 2016), are used by the SRE to define outages. These metrics are crucial to monitor because cloud service providers face revenue loss if QoS is not met due to violations of Service Level Agreements (SLAs). Based on the alert severity used by the SRE team and the SLA definitions, we select five golden metrics comprising of (i) Workload, (ii) CPU Utilization, (iii) Memory Utilization (iv) Latency, and (v) Errors. These metrics are often used for system monitoring in industries and have been utilized in prior works (Kang et al., 2016; Wang et al., 2017). The golden signals can often refer to different metrics based on service components. For example, the latency metric refer to disk I/O latency for storage service, web transaction time for web services, query latency for databases, etc. Outage-Watch uses the entire set of metrics \(\mathcal{M}\), to forecast the likelihood that \(\mathcal{M}_{QoS}\) metrics will surpass a threshold in the future. We do not specifically forecast the likelihood of metric values of \(\mathcal{M}\setminus\mathcal{M}_{QoS}\) crossing the threshold since these capture small issues which propagates within the system and gets manifested into the QoS metrics. Also, QoS metrics capture the user impact directly. It should be noted that our choice of \(\mathcal{M}_{QoS}\) is based on system domain knowledge which we gathered from the inputs from reliability engineers on the most important metrics that define an outage. Nonetheless, our approach will work in the same way for a different set of \(\mathcal{M}_{QoS}\) metrics. #### 5.1.2. Pre-processing (Fig. 3[B]) After the selection of metrics \(\mathcal{M}\), we handle the missing values differently for different category of metrics. For some metrics, a missing value might indicate a null value, which can be replaced with a zero. For other metrics, the rows containing missing values may be dropped. For example, if there are missing values in a metric that defines an error, these can be replaced with zeroes, as this indicates that there were no errors in the service. However, if there are missing values in utilization-based metrics, it may be necessary to drop those rows, as the missing values could be due to a fault with the monitoring system. Once the missing values are handled, each metric \(m^{i}\) is normalized using Equation 1. \[m^{i}=\frac{m^{i}-min(m^{i})}{max(m^{i})-min(m^{i})+\epsilon} \tag{1}\] Following this, we create a time series of \(\mathcal{M}\) metrics with a rolling-window of size \(w\). 
That is, for each time instant \(t\) and metric \(m^{i}\in\mathcal{M}\), we create a time series \(m^{i}_{w}=\{m^{i}_{t-w},\ldots,m^{i}_{t}\}\), where \(m^{i}_{t}\) refers to the value of the metric \(m^{i}\) at the \(t^{th}\) time instant. We thus create \(X=\{m^{1}_{w},m^{2}_{w},\ldots,m^{|\mathcal{M}|}_{w}\}\), which forms a sequence of metric values that can be used as an input to our encoder model. #### 5.1.3. Label Generation (Fig. 3[1C]) In real-world production services, outages are rare due to the robust deployment architecture and constant monitoring system in place. SREs often intervene to prevent full-scale outages, which contributes to the rarity of such events. However, these potential issues when interventions are performed can still be considered as extreme situations (see §3), which allows us to better understand the system's behaviour and predict critical issues in advance.
Figure 3. Overall Architecture of Outage-Watch, comprising (1) the metric processing phase and (2) the distribution learning phase. The distribution and label prediction generated from (2) at time step \(t\) are evaluated against the ground-truth metric value and labels from a future time step \(t+\gamma\), which we get from (1B) and (1C) respectively.
Thus, instead of taking the time periods when an outage was actually declared as the ground truth, we modify our definition of _labels_ to the time periods of extreme events. Such modifications facilitate forecasting the distribution of the relevant metrics. However, the challenge of labelling the data during these extreme events remains. To address this issue, we perform the following algorithmic steps, which incorporate domain knowledge to generate proxy labels for outages or extreme situations. 1. Take \(w^{\prime}\)-minute windows for each metric \(m_{i}\) from the set \(\mathcal{M}_{\text{QoS}}\). 2. Select those windows where the value of \(m_{i}\) crosses a percentile threshold \(\mathcal{T}\) for at least an \(\alpha\) fraction of the \(w^{\prime}\) window. 3. Filter the previously obtained time windows by keeping only those where at least \(k\) alerts were fired in the system. These chosen time windows serve as proxy labels for extreme events. Here, we take \(w^{\prime}\) as 10 minutes, \(\mathcal{T}\) as 95, \(\alpha\) as 0.5, and \(k\) as 1. These steps indirectly incorporate domain knowledge to accurately generate labels for outages and extreme situations using alerts defined by SREs. This process not only allows us to create a denser labelling of extreme events, which can aid in the prediction of potential outages or situations that could have escalated to an outage, but it also includes some less severe cases, which can aid model training recall. The proxy labels serve as positive training samples for the model. ### Distribution Forecasting Through this module, Outage-Watch aims to learn how the QoS metrics will behave at a future time to forecast the probability of an outage. We outline the component details below. #### 5.2.1. Metric-Encoder (Fig. 3[2a]) Before we can learn the distribution of QoS metrics, we must encode the past behaviour of the service metrics, which captures the system state as a latent vector representation. The Metric-Encoder extracts information via ML techniques to encode the spatial as well as temporal relations (Zhu et al., 2017) between the metrics.
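Before detailing the encoder, the pre-processing of §5.1.2 and the proxy-label generation of §5.1.3 above can be made concrete with a minimal sketch. It assumes pandas/NumPy time series sampled at one-minute resolution; the function and variable names, and the use of pandas itself, are illustrative assumptions rather than details of the authors' implementation:

```python
import numpy as np
import pandas as pd

EPS = 1e-8  # avoids division by zero in Eq. (1)

def normalize(series: pd.Series) -> pd.Series:
    # Min-max scaling of one metric m^i, as in Eq. (1).
    return (series - series.min()) / (series.max() - series.min() + EPS)

def make_windows(metrics: pd.DataFrame, w: int) -> np.ndarray:
    # metrics: rows = one-minute time instants, columns = the selected metrics in M
    # (already normalized). Returns an array of shape (num_windows, w, |M|),
    # i.e. one input X per time instant t >= w.
    values = metrics.to_numpy()
    return np.stack([values[t - w:t] for t in range(w, len(values) + 1)])

def proxy_labels(qos: pd.Series, alerts: pd.Series,
                 w_prime: int = 10, T: float = 95, alpha: float = 0.5, k: int = 1) -> pd.Series:
    # Steps 1-3 of Sec. 5.1.3 for a single QoS metric: a w'-minute window is a
    # positive ("extreme") sample if the metric exceeds its T-th percentile for at
    # least an alpha fraction of the window and at least k alerts fired in it.
    threshold = np.percentile(qos, T)
    frac_above = (qos > threshold).astype(float).rolling(w_prime).mean()
    alerts_fired = alerts.rolling(w_prime).sum()
    return ((frac_above >= alpha) & (alerts_fired >= k)).astype(int)
```

During training, each window produced this way would be paired with the QoS values and proxy labels observed \(\gamma\) minutes later, as described in the Training subsection below.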
Spatial correlation captures how each behaviour of metric \(m^{i}\in\mathcal{M}\) affects the QoS metrics \(\mathcal{M}_{\text{QoS}}\), while the temporal dependence captures the time series trend in X. Though both statistical and ML-based techniques have been studied in this regard (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019), it has been shown that ML-based models, and especially Recurrent Neural Network (RNN) models outperform conventional methods (Wang et al., 2019) in encoding a time series due to their ability to capture sequential dependencies and temporal patterns in data. Several RNN-based models like Long Short-Term Memory (LSTM) (Wang et al., 2018) or Bidirectional LSTM (BiLSTM) (Wang et al., 2019; Wang et al., 2019) models can be used for our purpose. Based on our experimental results with various RNN architectures (see SS7.1.1), we choose BiLSTM as the metric encoder model. LSTM uses gating mechanism to control information encoding, while the hidden state (\(h_{t}\)), a multi-dimensional vector, maintains the encoding of the input time series. BiLSTMs extend LSTMs by applying two LSTMs, one forward and one backward, to input data to capture information from both directions. The Metric-Encoder takes \(X\) as input and outputs a vector representation (\(h\)) capturing the temporal and spatial relationship of the metrics. #### 5.2.2. Multi-Task Learning We propose a multi-task learning (Wang et al., 2019; Wang et al., 2019) problem, where one task is to learn the distribution of each QoS metric \(y\in\mathcal{M}_{\text{QoS}}\) from the _Metric-Encoder_ output, while the other task classifies the _Metric-Encoder_ output as an outage or not. We now describe each of the task in detail. **Task 1: Distribution Learning (Fig. 3[2B]).** The first task aims to learn a parametric distribution governing the QoS metrics conditional on the encoded system state representation. More precisely, given a time series of metrics \(X\) which was encoded to form a vector \(h\), we wish to estimate the probability of a metric \(y\in\mathcal{M}_{\text{QoS}}\) given \(X\), \(p(y|X)\). Learning a distribution is essentially learning the parameters governing it. In general, the metric \(y\) is often assumed to follow a normal distribution \(\mathcal{N}(y;\mu,\sigma)\), since we observe limited data points in the tail of the distribution. However, in a real production system, a normal distribution might underfit the actual data distribution. Often, we don't necessarily have simple normal distributions. To overcome this limitation, we estimate the distribution of the QoS metric via a mixture of normal distributions with \(C\) mixture components, where the probability distribution of a metric \(y\) given \(X\) is of the form: \[p(y|X)=\sum_{c=1}^{C}\alpha_{c,y}(X)\mathcal{N}(y|\mu_{c,y}(X),\sigma_{c,y}(X)), \tag{2}\] where \(c\) denotes the index of the corresponding mixture component, and \(\alpha_{c,y}\) is the mixture proportion representing the probability that \(y\) belongs to the \(c^{th}\) mixture component \(\mathcal{N}(y|\mu_{c,y},\sigma_{c,y})\). It is well known in the literature that a mixture of Gaussian/normal distributions is capable of modelling any arbitrary probability distribution with correct choice of \(C,\alpha_{c},\mu_{c}\) and \(\sigma_{c}\)(Wang et al., 2019). 
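To make Equation 2 concrete, the sketch below (using SciPy; the parameter values are invented purely for illustration) evaluates a three-component mixture density and the probability mass it places above a QoS threshold, which is the quantity Outage-Watch later thresholds to flag an impending outage (cf. Equation 5):

```python
import numpy as np
from scipy.stats import norm

def mixture_pdf(y, alphas, mus, sigmas):
    # Eq. (2): p(y | X) = sum_c alpha_c * N(y; mu_c, sigma_c), where the mixture
    # parameters are produced by the MDN from the encoded system state X.
    return sum(a * norm.pdf(y, loc=m, scale=s) for a, m, s in zip(alphas, mus, sigmas))

def tail_probability(threshold, alphas, mus, sigmas):
    # Probability mass the mixture places above a QoS threshold; this is the
    # quantity later used to score an impending outage (cf. Eq. (5)).
    return sum(a * norm.sf(threshold, loc=m, scale=s) for a, m, s in zip(alphas, mus, sigmas))

# Illustrative 3-component mixture (C = 3, the value used by Outage-Watch).
alphas, mus, sigmas = [0.7, 0.2, 0.1], [0.3, 0.6, 0.9], [0.05, 0.10, 0.20]
print(mixture_pdf(0.5, alphas, mus, sigmas))        # density at a normalized QoS value
print(tail_probability(0.95, alphas, mus, sigmas))  # chance of exceeding the 0.95 threshold
```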
We aim to estimate a mixture distribution for each QoS metric \(y\) via a separate Mixture Density Network (MDN), that comprises a feed-forward neural network to learn the mixture parameters \(\mu_{y},\sigma_{y}\) and the mixing coefficient \(\alpha_{y}\). We have chosen \(C=3\) after experimental analysis (see Fig. 6a), and hence each MDN has 3 values of \(\alpha_{y},\mu_{y}\) and \(\sigma_{y}\). The network is learnt through minimizing the negative log-likelihood loss of obtaining the ground truth metric value of \(y\) given the mixture distribution, averaged over all metrics \(\mathcal{M}_{\text{QoS}}\). Formally, \[\operatorname*{arg\,min}_{\theta}l(\theta)=-\frac{1}{|\mathbb{R}|}\sum_{X,y \in\mathbb{R}}\log p(y|X) \tag{3}\] Here \(\mathbb{R}\) corresponds to the realm of possibilities. MDN hence learns the parameters of the distribution of QoS metrics, which can then be further used to compute the probability of \(y\) crossing a certain threshold to predict outages. **Task 2: Outage Classification (Fig. 3[2C]).** We have observed through experiments (see SS7.1.3) that the distribution learnt by MDN performs poorly at the tail, where extreme values are generally observed and can be used to forecast outages (Figure 4). To overcome this limitation, a feed-forward neural network performs outage classification in a multi-task setting, where we predict whether an outage will happen or not from the encoded output from the _Metric-Encoder_. We use the output proxy labels generated in SS5.1.3 as a ground truth. This module acts the extreme value regularizer. where the intuition is that the synthetically generated proxy labels will act as a regularizer for better learning in the tail of the distribution. Similar to distribution learning, we have separate neural networks for each QoS metric in \(\mathcal{M}_{\text{QoS}}\). To classify outages, we have used the Extreme Value Loss (EVL) (Zhou et al., 2017; Wang et al., 2018), which is a modified form of Binary Cross Entropy (BCE) Loss as the loss function. EVL reduces the number of false positives by assigning more weight to the penalty of incorrectly predicting outages. EVL works well with imbalanced data as we have observed through experiments. EVL can be formally defined as, \[\mathbb{L}_{EVL}=-\frac{1}{N}\sum_{i=1}^{N}\beta_{0}\left[1-\frac{ \hat{y_{i}}}{\delta}\right]^{\delta}y_{i}\log\hat{y_{i}}+\\ \beta_{1}\left[1-\frac{1-\hat{y_{i}}}{\delta}\right]^{\delta}(1- y_{i})\log(1-\hat{y_{i}}), \tag{4}\] where \(N\) is the size of the batch, \(y_{i}\in\{0,1\}\) is the ground-truth value and \(\hat{y_{i}}\) is the value predicted by our model Outage-Watch, \(\beta_{0}\) is the proportion of normal events in the batch and \(\beta_{1}\) is the proportion of extreme events in the batch. We use \(\delta=2\) in the loss function for the experiments. ### Training Since we want to predict the probability of an outage in advance and reduce the MTTD, the ground truth metric values and the proxy labels should also correspond to a future time \(t+\gamma\). Thus, at a time \(t\), the Metric-Encoder takes \(X\) which is a time series of all metric values from \(t-w\) to \(t\) as input. The ground truth value for Task 1 is the metric value for each QoS metric \(y\) at time \(t+\gamma\) and while for Task 2, we use the proxy label (see SS5.1.3) computed from the QoS metric values at \(t+\gamma\). We train the entire pipeline consisting of the Metric-Encoder, mixture density network and the classifier in an end-to-end fashion. 
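The end-to-end training setup just described can be sketched in TensorFlow 2 (the framework reported in §6) for a single QoS metric. The layer sizes below follow the hyperparameters listed in §6, but the code is only an illustrative reconstruction, not the authors' implementation; in particular, the fixed \(\beta_0/\beta_1\) values stand in for the per-batch class proportions of Eq. 4, and all names are invented:

```python
import numpy as np
import tensorflow as tf

C = 3           # mixture components
N_METRICS = 42  # number of input metrics |M|
W = 60          # input window length in minutes

def build_model():
    # BiLSTM Metric-Encoder feeding (i) an MDN head that outputs the mixture
    # parameters for one QoS metric and (ii) a binary outage classifier head.
    inp = tf.keras.Input(shape=(W, N_METRICS))
    h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128))(inp)
    h = tf.keras.layers.Dropout(0.2)(h)
    m = tf.keras.layers.Dense(200, activation="relu")(h)
    m = tf.keras.layers.Dense(200, activation="relu")(m)
    alpha = tf.keras.layers.Dense(C, activation="softmax")(m)    # mixing coefficients
    mu = tf.keras.layers.Dense(C)(m)                             # component means
    sigma = tf.keras.layers.Dense(C, activation="softplus")(m)   # positive std devs
    mdn_out = tf.keras.layers.Concatenate(name="mdn")([alpha, mu, sigma])
    c = tf.keras.layers.Dense(20, activation="relu")(h)
    clf_out = tf.keras.layers.Dense(1, activation="sigmoid", name="outage")(c)
    return tf.keras.Model(inp, [mdn_out, clf_out])

def mdn_nll(y_true, params):
    # Negative log-likelihood of the QoS value at t + gamma under the predicted
    # mixture (Eq. 3); y_true has shape (batch, 1), params has shape (batch, 3C).
    alpha, mu, sigma = tf.split(params, 3, axis=-1)
    comp = tf.exp(-0.5 * tf.square((y_true - mu) / sigma)) / (sigma * np.sqrt(2.0 * np.pi))
    return -tf.math.log(tf.reduce_sum(alpha * comp, axis=-1) + 1e-8)

def evl(beta0, beta1, delta=2.0):
    # Extreme Value Loss (Eq. 4): beta0 = fraction of normal samples (weights the
    # extreme term), beta1 = fraction of extreme samples (weights the normal term).
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        pos = beta0 * tf.pow(1.0 - y_pred / delta, delta) * y_true * tf.math.log(y_pred)
        neg = beta1 * tf.pow(1.0 - (1.0 - y_pred) / delta, delta) * (1.0 - y_true) * tf.math.log(1.0 - y_pred)
        return -(pos + neg)
    return loss

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss={"mdn": mdn_nll, "outage": evl(beta0=0.95, beta1=0.05)})
```

In training, each input window is paired with the QoS value and proxy label taken \(\gamma\) minutes ahead, and both heads are optimized jointly as described above.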
It should also be noted that with a large \(\gamma\), one can aim to predict an outage well in advance, but the distribution followed by the QoS metric will not be accurate. Hence, a careful selection of \(\gamma\) is necessary. By our experiments (see Fig. 5(b)), we show that \(\gamma=10\) mins works the best for our purpose. ### Inference At inference time, we predict and use only the distribution of the QoS metrics, while excluding the classifier from our inference pipeline to predict an outage. The distribution of the QoS metrics provide us with more flexibility and enables us to define outages based on custom thresholds. Moreover, the distribution captures the entire spectrum and specifically the tail metric values. However, the steps to predict an outage from the distribution of the QoS metrics can be summarized as below. #### 5.4.1. Probability Computation (Fig. 5[3A]) We first compute the probability of an outage occurring by computing the probability that the value of the QoS metric crosses a pre-defined threshold. These thresholds are generally defined by the SLAs that have been agreed with a particular customer. As an example, an agreement of achieving a service latency of at most \(\rho\) milli-seconds for 99% of the times might have been signed with the cloud service provider, and can be termed as an SLA. Hence, in this case, the threshold is 99%. Formally, the probability of a QoS metric value \(y\) crossing a threshold \(\mathcal{T}\), and hence the probability of an outage occurring can be defined as \[Prob(Outage)=\sum_{c=1}^{C}\alpha_{c}[\mathcal{N}(y|\mu_{c},\sigma_{c})> \mathcal{T}] \tag{5}\] #### 5.4.2. Outage Prediction (Fig. 5[3B]) From the probability computed above, we use a thresholding technique on \(Prob(Outage)\) to predict the outages. We compute the threshold based on Youden's J Index (Youden, 2017) on training data. It is a popular thresholding technique for imbalanced data (extreme events are very few as compared to the _usual_ metric values), which uses the Area under the ROC curve (AUC) to compute the threshold. On the training data containing the proxy labels and the corresponding probability of an outage occurring, Youden's J Index tries to compute the threshold such that it increases the precision and recall. We maintain the same threshold for all our evaluations. ## 6. Implementation Setup In this section, we outline the experimental process and the setup we followed. We have implemented Outage-Watch in _python_ and used _tensorflow2_(Cheng et al., 2017), a standard open-source library to implement the ML models. We have run Outage-Watch on a system having Intel Xeon E5-2686 v4 2.3GHz CPU with 8 cores. Footnote 2: [https://www.tensorflow.org/](https://www.tensorflow.org/) **Source of Data:** The data is sourced from a prominent SaaS enterprise offering extensive software and digital services. It leverages Amazon Web Services (AWS) and Microsoft Azure for cloud Figure 4. Using Extreme Value Loss in the classifier over BCE can aid the distribution learner to learn a better distribution at the tail. Figure 5. Tasks performed during inference time to predict potential outages from the predicted distribution provisions. The software infrastructure covers diverse domains including programming languages, databases, AWS, Azure, Docker, Kubernetes, Jenkins, and more. **Dataset:** We collect the dataset for evaluating Outage-Watch from a real-world service hosted by a large cloud-based service provider. 
The metrics data was obtained through a message queue pipeline deployed on the monitoring system of the service. We have collected a total of 3 months of metrics and outage data from the monitoring system for training and testing purposes where data from the last 3 weeks were used for testing. We collected \(\sim\)2000 metrics, which was reduced to 42 as discussed in SS5.1.1. Outages have a widespread impact within the enterprise affecting multiple services. Since there were no outages observed during the period of the training data while one outage was observed during the period of test data, we generated time periods when the extreme situations occurred (see SS5.1.3). It amounted to around 5-7% of the total training data, thus exhibiting a skewed label imbalance. **Model Hyperparameters:** The implementation details for the ML models used in Outage-Watch (SS4.2) are outlined as follows. The BiLSTM model in the Metric Encoder has 128 hidden units (\(h=128\)), followed by a dropout layer with \(p=0.2\). Regularization techniques were used while training the model to prevent overfitting. The feed-forward Mixture Density Network (MDN) which models the distribution parameters of QoS metrics has two hidden layers with 200 neurons each, with ReLU (Nakamoto et al., 2016) activation function in the hidden layers. The neuron outputting the mixing factor of components (\(\alpha\)) use a softmax function. The classifier feed-forward network has one hidden layer with 20 neurons with ReLU activation, while the output layer use the sigmoid function. We use a learning rate of 0.001 with the Adam optimizer for training. **Baselines:** The baselines for evaluation are chosen following an approach similar to the work presented in (Zhou et al., 2017). We leverage some of the fundamental classification and regression techniques for outage prediction. It includes Naive Bayes classifier, random forests and gradient boosted decision trees. Naive bayes is a probabilistic machine learning model while the other two are ensemble methods that are constructed using a multitude of individual trees. We implement these baselines to use them as a proxy for prior learning based outage prediction models. We also use a BiLSTM classifier as a baseline, which uses only classifier network on the encoded BiLSTM representation to predict outages. **Evaluation Metrics:** To evaluate the effectiveness of various approaches, we use AUC-PR and F1 score. AUC-PR calculates the area under the precision-recall (PR) curve and is commonly used for heavily imbalanced datasets (Zhou et al., 2017) where we are optimizing for the positive class (outage being detected) only. AUC-PR is computed using the probability of an outage occurring or not (from Equation 5). Also, based on the probability values, we use the procedure in SS5.4.2 on training data to compute a threshold for detecting outages, which we use to compute the F1 score in test data. **Other Hyperparameters:** For all the experiments, we choose a window3\(w=60\) mins to create a windowed time series of metric data. On the contrary, we vary the prediction look-ahead \(\gamma\) from 5 mins to 30 mins. In SS7.1.4, we experimentally show the optimal value of \(\gamma\). Unless specified, we maintain threshold \(\mathcal{T}=95\%\) (Eq. 5) for all the experiments. Footnote 3: According to our empirical study, over 60% issues are triggered within 1 hour after the impact start time. ## 7. 
Evaluation In this section, we present the experimental results and aim to address the following research questions: * **RQ1:** How do our design decisions align with the ablation studies performed? * **RQ2:** How does our approach compare to the established baselines? * **RQ3:** How does our approach perform in a real-world cloud deployment scenario? ### Design Choices (RQ1) #### 7.1.1. Metric Encoder Model In SS5.2.1, we claim to use Bidirectional LSTM (BiLSTM) as the model for metric encoder. In this subsection, we discuss the experiments conducted to determine the optimal architecture and the rationale behind using BiLSTM. We conducted experiments using four different types of RNNs: LSTM (Srivastava et al., 2015), BiLSTM (Srivastava et al., 2015), Stacked LSTM (Srivastava et al., 2015), and Stacked BiLSTM (Srivastava et al., 2015). The encoded representation was then used to forecast the distribution in a multi-task setting with EVL in the classifier network. The performance of each architecture was evaluated using AUC-PR metric. We have experimented with varying values of \(\gamma\in\{5,10,15,30\}\) min. The results are presented in Table 1. We see that BiLSTM encoder performs the best in our case for all values of \(\gamma\). BiLSTM can track a time series in the forward as well as the backward direction. Thus, it can help to encode the overall variation in performance metrics as well as retain recent trends, which makes BiLSTM an ideal choice for encoding the information in the metric time series. #### 7.1.2. Multi-Task Learning Table 2 illustrates that incorporating the proposed multi-task learning approach improves performance compared to using only a single task: classification network or MDN. We used BiLSTM as the Metric Encoder. We evaluate the different schemes using the AUC-PR metric. For the classifier network (individually as well as when evaluated in a multi-task setting), we employed the EVL loss. In a similar setting as of the above, we perform the experiments with \(\gamma\in\{5,10,15,30\}\) min. We observe that when the BiLSTM encoded representation was used to learn a distribution of the QoS metrics in a multi-task setting (learning the distribution and classifying the time periods of extreme values), it performed better than when the tasks were \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Prediction Look-Ahead (\(\gamma\))} \\ \cline{2-5} & 5 mins & 10 mins & 15 mins & 30 mins \\ \hline LSTM & 0.950 & 0.950 & 0.950 & 0.948 \\ \hline BiLSTM & **0.974** & **0.977** & **0.968** & **0.959** \\ \hline Stacked LSTM & 0.961 & 0.944 & 0.925 & 0.914 \\ \hline Stacked BiLSTM & 0.956 & 0.938 & 0.933 & 0.918 \\ \hline \hline \end{tabular} \end{table} Table 1. Design choice: Comparison of different RNN architectures over different prediction windows in terms of AUC. performed individually. This corroborates our design choice of using a multi-task learning in the distribution forecasting module. #### 7.1.3. Classifier Network Loss Additionally, we perform further experiments to show the performance enhancement of using EVL loss over Binary Cross-Entropy (BCE) loss for the classifier network in SS5.2.2. The results, as shown in Table 3, indicate that the use of EVL in conjunction with multi-task learning improves performance in predicting extreme events as compared to solely using BCE loss. The metric used to evaluate the performance of the models is the F1-score, and the results demonstrate that EVL outperforms BCE. #### 7.1.4. 
Ablations We perform further ablation studies to prove our parameter choices for Outage-Watch. We first illustrate through Figure 5(a) that predicting the distribution of QoS metrics using a mixture Gaussian distribution with 3 components performs the best for predicting the outages. We see in the figure that with more or less components, there is a drop in the overall performance. We also conducted an analysis to determine the optimal prediction look-ahead \(\gamma\). With large look-ahead \(\gamma\), we can forecast the outage well in advance. It however suffers in accuracy of the prediction probability since the inherent trend in the metric changes. Thus, there is a trade-off between \(\gamma\) and accuracy metric. Through our experiments, we found that a look-ahead of 10 minutes resulted in the most satisfactory performance, as the validation loss showed negligible increase before reaching a sudden jump beyond this point. Thus, Outage-Watch can forecast an outage and reduce the MTTD by at least 10 mins than the current approaches (SS7.3). ### Baseline Comparison (RQ2) With our design choices fixed, that is, using a BiLSTM for encoding the metrics and then forecasting the distribution of the QoS metrics in a multi-task learning setting, we compare Outage-Watch with several baselines as described in SS6. We use AUC-PR to compare the performance and tabulate the results in Table 4. Similar to the previous evaluations, we experiment with multiple values of \(\gamma\in\{5,10,15,30\}\) min. The results demonstrate that our proposed approach of forecasting the distribution outperforms all other techniques, including traditional methods, by a significant margin. It has been shown to be a highly effective approach for predicting outages through QoS metrics One key advantage of Outage-Watch is its ability to predict the probability of an outage occurring based on any threshold \(\mathcal{T}\) (see Equation 5) since we are forecasting the distribution as opposed to just learning a classifier with the ground truth proxy labels. On the contrary, traditional methods are limited to predicting outages to a specific threshold (similar to the threshold used for creating the ground-truth labels in training data). As discussed in SS5.1.3, we create proxy labels based on the threshold of 95%, i.e., if the metric value crosses the 95 percentile mark, it is considered to be a potential extreme event. Thus, the classifier network was trained using the generated proxy labels as a ground-truth. However, when we evaluate the distribution forecasted by Outage-Watch based on the probability of the metric value crossing a threshold of \(\mathcal{T}=97\%\) and \(\mathcal{T}=99\%\), we achieve high F1 scores, as seen in Table 5. \(\gamma\) was maintained at 10 minutes. This flexibility in threshold selection is a major advantage of our proposed method and sets it apart from traditional techniques as the model need not be trained again to get the results on a different threshold. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Prediction Look-Ahead (\(\gamma\))} \\ \cline{2-5} & 5 mins & 10 mins & 15 mins & 30 mins \\ \hline Naive Bayes & 0.593 & 0.592 & 0.592 & 0.582 \\ \hline Random Forest & 0.873 & 0.868 & 0.867 & 0.824 \\ \hline Gradient Boost & 0.870 & 0.854 & 0.828 & 0.822 \\ \hline BiLSTM+Classifier & 0.909 & 0.914 & 0.930 & 0.927 \\ \hline Outage-Watch & **0.981** & **0.982** & **0.977** & **0.975** \\ \hline \hline \end{tabular} \end{table} Table 4. 
Performance of different models over different prediction windows in terms of AUC score. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Prediction Look-Ahead (\(\gamma\))} \\ \cline{2-5} & 5 mins & 10 mins & 15 mins & 30 mins \\ \hline Classifier & 0.909 & 0.914 & 0.930 & 0.927 \\ \hline MDN & 0.967 & 0.960 & 0.956 & 0.951 \\ \hline MTL & **0.981** & **0.982** & **0.977** & **0.975** \\ \hline \hline \end{tabular} \end{table} Table 2. Design choice: Comparison of different model architectures over different prediction windows in terms of AUC. Here, MTL refers to the Multi-task learning proposed model. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Prediction Look-Ahead (\(\gamma\))} \\ \cline{2-5} & 5 mins & 10 mins & 15 mins & 30 mins \\ \hline Outage-Watch(with BCE) & 0.980 & 0.980 & 0.971 & 0.946 \\ \hline Outage-Watch(with EVL) & **0.987** & **0.984** & **0.974** & **0.954** \\ \hline \hline \end{tabular} \end{table} Table 3. Design choice: Comparison of BCE and EVL loss over different prediction windows in terms of F1 score. Figure 6. (a) Model performance vs number of Gaussian mixture components \(C\) to predict by MDN; (b) Loss of the MDN (Eq. 3) vs Prediction look-ahead \(\gamma\) ### Deployment Results (RQ3) We deployed Outage-Watch in an enterprise system for 2 months and predicted the probability of impending outages. The overall objective of Outage-Watch is to predict outages in timely manner, thereby assisting the engineers. From the forecasted distribution, we first predict the probability of a metric value crossing the threshold \(\mathcal{T}=99\%\). We then predict potential outage situations through the thresholding technique as described in SS5.4.2 on the metric probability that crosses the threshold first. The threshold is generated based on the 9 weeks of training data. We report the precision and recall of the prediction made by Outage-Watch. We also report the reduction in time to detect outages by the model against the current reactive approach which is used to report an outage. In this deployment, we implemented a continuous re-training strategy for Outage-Watch, updating the model after every outage detected with full data for up to two days after the outage ended. This approach was taken to ensure that the system state changes during an outage are reflected in the updated model. Our strategy balances effective model retraining with efficiency. It's worth noting that our focus here does not encompass a robust retraining strategy (Wang et al., 2018) targeting drift issues. The potential for addressing data distribution changes, arising from factors such as the implementation of new business functionalities, could influence outage detection. However, this aspect falls outside the purview of our current work. We present a case study for multiple outages that were flagged by the engineers during the deployment period and how Outage-Watch performed in forecasting them. During the deployment period, a total of 4 outages (Outage A, B, C, D) took place, out of which Outage A, B and C manifested through the QoS metrics in the cloud service. Outage-Watch was able to accurately predict all these three outages and reduced the mean time to detection as compared to the current approach followed by the engineers. However, Outage D was not evident through the QoS metrics and hence it was not predicted. 
We report the precision, recall and the reduction in MTTD for Outage A, B and C in Table 6. For each outage, we consider data from a day before and 2 days after to report the precision. Precision here refers to the number of outages correctly predicted over the number of times the probability value was above the threshold for a sustained period (15 mins). Recall refers to the number of outages predicted correctly over the total outages that could have been predicted, which is 3. Finally, MTTD reduction for each outage is reported as a percentage of (C - time of prediction)/(C-B) (see Figure 1 for the notations of B, C). We observe that Outage-Watch outperforms other baselines in terms of precision and recall. Recall is 100%, while precision is 30-40%. We also observe a large reduction in MTTD4 for Outage-Watch ('-' implies outage was not predicted). When EVL is used with Outage-Watch, precision improves. Footnote 4: MTTD improvement: There is a decrease of tens of minutes, particularly notable given the conventional MTTD is also in the range of a few tens of minutes. To provide a deeper understanding of how the system works, we present two case studies of outages (Outage A and Outage B) that were successfully predicted by Outage-Watch. #### 7.3.1. Outages Predicted An illustration of these two outages and the performance of Outage-Watch are presented in Figure 7. 1. Outage A: The outage was auto-launched at time \(t_{A}\) by the monitoring systems due to out-of-heap memory issues on several app store nodes in one of the regions. The system Outage-Watch was able to predict the out-of-heap memory \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Precision} & \multirow{2}{*}{Recall} & \multicolumn{3}{c}{Reduction in MTTD} \\ \cline{3-6} & & & Outage A & Outage B & Outage C \\ \hline Naive Bayes & \(1/14\) & \(1/3\) & \(24\%\) & – & – \\ \hline Random Forest & \(1/11\) & \(1/3\) & \(0\%\) & – & – \\ \hline Gradient Boost & \(2/12\) & \(2/3\) & \(0\%\) & \(76\%\) & – \\ \hline BiLSTM + Classifier & \(2/10\) & \(2/3\) & – & \(56\%\) & \(26\%\) \\ \hline BiLSTM + MDN & \(3/10\) & \(3/3\) & \(43\%\) & \(76\%\) & \(26\%\) \\ \hline Outage-Watch (BCE) & \(3/9\) & \(3/3\) & \(54\%\) & \(76\%\) & \(27\%\) \\ \hline Outage-Watch (EVL) & \(3/8\) & \(3/3\) & \(40\%\) & \(80\%\) & \(26\%\) \\ \hline \hline \end{tabular} \end{table} Table 6. Results for outages predicted by different models using QoS metrics Figure 7. Analysis of the Outage-Watch’s performance on two real outages that happened during the deployment period. The upper plots are the metric values which shows deviations (actual value is masked), while the lower plots compute the probability of the metric value exceeding a threshold of \(\mathcal{T}=99\%\). Outage-Watch was able to correctly predict both the outages in advance (in comparison to the current approach which is indicated by the light red rectangular shaded region). issue correctly and flagged the outage 67 minutes before the outage was actually launched. The system failed due to a fault in the event consumer queue which got stuck and was not processing since \(t_{A}-100\) minutes in one of the regions. 2. Outage B: In another outage, alerts regarding high error rates in a service \(\mathcal{S}\) were fired. The outage was auto-launched at \(t_{B}\) by monitoring the high error rate. It was found that a faulty update in one of the AWS components had caused the component to fail. 
That component was being used by \(\mathcal{S}\) and therefore after the update, \(\mathcal{S}\) was unable to process incoming requests. The issue commenced since \(t_{B}-25\) minutes. Outage-Watch correctly detected this outage and reduced the MTTD by 22 minutes from the auto-launched approach. #### 7.3.2. Outages Not Predicted In addition, we also present a case study of an outage (Outage D) which occurred after a change was implemented that inadvertently tripped an UI feature blocking protocols from making requests. However, such outages due to UI issues are not meant to be manifested in the QoS metrics. As a result, it was not detected by our model. However, this is not a false negative in our case since no changes were observed on the QoS metrics, and hence Outage-Watch could not have detected the outage. This also highlights the limitations of our proposed method, as it relies on the monitoring metrics to predict outages. ### Discussion From the above results, we see that Outage-Watch performs better than the other comparable baselines, as well as on real-world deployment setting. However, it can be observed from Table 6 that Outage-Watch reports multiple false positives, which is dependent on the quality of threshold we choose to detect an outage. One can circumvent this issue by having a higher threshold (which might reduce the reduction in MTTD according to Figure 7). However, since we are working with the constraints of data, our training data set did not have any outages and the number of extreme events were very less as compared to the test data. This resulted in Youden's index to compute a threshold lower than 0.5. However, with more data in the training set reflecting the extreme events, thresholding model becomes more proficient to distinguish true positives from the false positives, which results in a higher Youden's index (Youden, 2019). As and when data arrives, Youden's Index model should be retrained to get an improved threshold. Hence, thresholds can be re-adjusted as well with the predicted distribution, reducing the false positives. However, in real-world scenarios, the presence of false positive cases is nuanced in this context. Our analysis of model predictions, along with SRE input, identified false positive cases where predicted extreme values didn't lead to outages. Some of these issues were resolved manually or self-corrected. Consequently, the occurrences of false positives are not necessarily negative indicators, as they can capture mild issues that resemble potential disruptions. However, false positives are always undesirable in the workflow of SRE. While SREs can get insights from false positives, their presence is undesirable due to the risk of alert fatigue from frequent, possibly minor, predicted outputs. To tackle this, we provide the benefit of adaptable threshold selection as discussed in SS5.4. A detailed study on how SREs can distinguish a false positive from a true positive output in their workflow remains an open question. Overall, Outage-Watch proves to be very helpful for production outage management as it was able to predict the outages well in advance. This could help in reducing the severity and consequently helps with quick mitigation of the outages. We also highlight a limitation of the model, which relies solely on monitoring data and the QoS metrics and cannot predict those outages that are not indicative from these metrics. 
Nevertheless, we believe that by incorporating additional sources of data, such as log files and change details, we can improve the performance of our model in detecting these types of outages. Overall, our proposed method is a valuable tool for predicting outages using QoS metrics and has the potential to improve system performance and reliability. ## 8. Conclusion In this paper, we present a novel approach Outage-Watch for predicting outages by forecasting the distribution of the relevant QoS metrics. This approach takes a time series input of multiple monitoring metrics representing the current system state within a time window to encode the information present in the time series in a vector representation. It then uses the encoded information to learn the distribution of the relevant metrics using a feed-forward neural network. In addition, Outage-Watch uses extreme value loss to classify the extreme events in a multi-task manner which helps in learning the distribution of the metrics in the tail. At inference time, our model uses the distribution learnt to compute the probability of the metric to cross a certain threshold, and then predicts outages based on a thresholding technique. Our experiments on real-data show the efficacy of our method with an average AUC of 0.98. The applicability and robustness of our approach has been verified by deploying it on an enterprise system. **Future Works:** Our future works include extending the evaluation duration on production systems to provide insights into long-term performance. Few interesting modifications include automated dynamic threshold selection and providing a confidence bound for our prediction to distinguish between the true positives and the false positive predictions. Furthermore, refining the definition of extreme events could enhance predictive capabilities. Incorporating diverse data sources, such as log files and trace data, may extend the scope of outage detection. We also plan on exploring robust re-training strategies (Youden, 2019) to improve model performance in production. Lastly, implementing a feedback loop or introducing human-in-the-loop dynamics may further refine the model's predictive abilities. ## 9. Data Availability The metrics data used in this research is proprietary and cannot be shared due to confidentiality agreements with the enterprise service provider. However, the model code along with a sample data are made available at Outage-Watch5. The sample data includes the format of the input data required for the model with random metric values. Any real dataset can be pre-processed to the specified format for implementing the approach. We believe that the model code and sample data provided in the paper are sufficient for replication and working with similar datasets. Footnote 5: [https://github.com/skeprivwal44/Outage-Watch](https://github.com/skeprivwal44/Outage-Watch)
2308.16377
Science Communications for Explainable Artificial Intelligence
Artificial Intelligence (AI) has a communication problem. XAI methods have been used to make AI more understandable and helped resolve some of the transparency issues that inhibit AI's broader usability. However, user evaluation studies reveal that the often numerical explanations provided by XAI methods have not always been effective for many types of users of AI systems. This article aims to adapt the major communications models from Science Communications into a framework for practitioners to understand, influence, and integrate the context of audiences both for their communications supporting AI literacy in the public and in designing XAI systems that are more adaptive to different users.
Simon Hudson, Matija Franklin
2023-08-31T00:39:33Z
http://arxiv.org/abs/2308.16377v1
# Science Communications for Explainable Artificial Intelligence ###### Abstract Artificial Intelligence (AI) has a communication problem. XAI methods have been used to make AI more understandable and helped resolve some of the transparency issues that inhibit AI's broader usability. However, user evaluation studies reveal that the often numerical explanations provided by XAI methods have not always been effective for many types of users of AI systems. This article aims to adapt the major communications models from Science Communications into a framework for practitioners to understand, influence, and integrate the context of audiences both for their communications supporting AI literacy in the public and in designing XAI systems that are more adaptive to different users. ## 1 Intro Artificial Intelligence (AI) is increasingly implemented in software products employed by users with a wide variety of backgrounds and competencies. As a new technology, its effective use depends on a great amount of effort to educate users in a way that enables them to adapt the tool to their needs. Making use of AI is made more difficult by a lack of transparency [17], and hoping black boxes will align with our human goals is not a good alternative [1]. To ensure that humans have effective oversight of AI agents, Explainable Artificial Intelligence (XAI) techniques have been developed to help users make sense of an AI's decisions and predictions. However, XAI models provide primarily numerical explanations favored by those with numerical backgrounds [1]. For example, SHapley Additive Explanations (SHAP) - a method for explaining individual AI model decisions by computing the contribution of each variable to the decision - can lead to information overload for lay users [14]. The same research found that simply written messages that provide a list of factors considered by the AI model increased trust and understanding. There is a dilemma: the SHAP model objectively provided more information, but a written message provided as an alternative was perceived as more understandable and trustworthy. It appears that AI has a communication problem. There is scarce evidence for what explanation should be given to a certain individual for a specific task [15], and therefore it is not clear how to optimize the distribution of XAI models [15]. We argue that XAI, and the AI field more broadly, can benefit from the Science Communications (SciComms) field to better integrate missing user context. SciComms is the practice of communicating scientific information to the public in an accessible and engaging way. It involves researching, writing, and editing scientific content, creating visuals, and using various digital and traditional communication tools. The strong public discourse around AI's impacts sparked by Chat-GPT calls for greater use of practices developed in the SciComms field to help build AI literacy; generative text models also present an opportunity to integrate SciComms approaches directly into AI interfaces to improve their adaptability, and usefulness, to different audiences. ## 2 AI's Communication Challenge Enabling users to effectively make use of AI has largely been treated as an AI literacy problem: users having sufficient understanding of the technology's functioning in order to "critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace [18]."
Treatments have followed a communication model in SciComms referred to as the "information-deficit model". This model assumes what users are missing are facts about a given field of science or technology, and addresses that gap through top-down dissemination of information. The information deficit model is typically a top-down, one-way communication approach that simply relies on simply disseminating the knowledge of experts. Even if the ideas are simplified, they tend to ignore the individual contexts and local knowledge of users and instead favor a one-size-fits-all approach. These efforts tend to fail in impacting the overall literacy of a field due to not adapting to the audience's various perspectives or being made relevant in a way they can find useful in their daily lives [16]. SciComms encompasses the "skills, media, activities, and dialogue to produce" robust engagement with science and technology in the public [11]. For a technology like AI, SciComms is a critical practice in preparing the public for its effective use. The SciComms field has been evolving out of the "information deficit model" that pins mistrust in science on science illiteracy [14]. Meta-analysis of surveys on the public understanding of science taken around the world since 1989 has shown that trust in science varies even as average science literacy stays the same [1]. Other contexts [14], in particular, historical experiences with science institutions [15], are much stronger predictors. Understanding the context of past influential experiences, as well as the context of the present narratives and issues around technology, helps determine the frames most effective at supporting productive discussion in the public [14]. These contexts are a moving target as AI makes more of an impact on mainstream culture due to the release of widely publicized products like Chat-GPT. Other models in SciComms have been growing in favor, including the _contextual model_, _lay expertise model_, and the _participatory model_[1]. The contextual model emphasizes adapting communications to different audiences' diverse contexts that can include a wide variety of factors: such as attitudes, beliefs, the task being executed, the domain it is performed in, and the current circumstance of a user. The lay expertise model gives additional credence to local knowledge, such as cultural or domain expertise that the science and technology experts do not have access to; it balances the authority of the science and technology expert with that of the end-user and knowledge of their own needs. The participatory model emphasizes the key role of holders of local knowledge in forming goals for technology design and even the policy regulating it. The deficit model has not been entirely discounted, rather the SciComms field has pursued balancing it with the strengths of these other models. Resolving AI's communication challenge thus should include the goal of better understanding different stakeholders by exploring the contexts most relevant to them so that they can give input they believe is valuable. Further, AI outputs themselves are a kind of scientific communication of the data science the models employ to make a prediction, and XAI would also benefit from applying these methods directly in an AI interface's presentation of outputs. 
Recent advancements in generative AI, as they apply to XAI, provide a valuable opportunity to integrate these models for SciComms into the design of _adaptive explainable AI_ to address the moving target of individual user contexts. Taking this view, AI's communication challenge can be divided into two categories: creator and user. The user challenge is in preparing users' expectations of these interfaces via AI literacy efforts - communicating the dynamics and implications of algorithms so that people can raise their ability to think critically about the data generated and outputs provided by algorithms. The first creator challenge is in designing systems that are well adapted for their intended audience, or, if for general use, can be adaptable to a wide variety of users. The second creator challenge is in developing systems that are capable of learning and adapting to an individual user's context by taking into consideration an individual's background, the task they are performing, the domain, the ordering of tasks, stated preferences etc. For example, explaining an AI's decision to a junior educator may be more detailed and authoritative versus a senior one who may receive only more nuanced explanations. This is of course complicated by other contexts, like their particular AI literacy, experience with the machine, or cultural factors. We propose a framework for addressing AI's communication challenge through three stages of work: Understanding Context, Influencing Context, and Integrating Context. ## 3 Understanding Context Understanding Context involves gathering and interpreting knowledge of the target audience, their needs, values, and interests. It means understanding the user's background, their level of familiarity with AI concepts, their cognitive style, and their information needs. A nationally representative survey of 2,000 American adults in 2019 found that most people use AI, but claim that they don't and won't in the future [15]. Presenting AI as a consumer product (e.g., Netflix recommender system) versus as an intelligent agent influences people's receptiveness to it. On the other hand, giving AI agentic properties can obscure the human role in AI systems [1]. It is therefore important to research how different contextual factors can set reasonable expectations of the technology and the public's own sense of agency in influencing its development [1]. Methods used in SciComms for understanding context involve audience research (i.e., understanding knowledge, attitudes, and interests), psychographics (i.e., understanding psychological attributes), demographics, narrative (i.e., understanding context through users' stories), and user testing (i.e., empirical research on what types of explanations work best for different types of users) [23]. In addition, a growing set of computational approaches for measuring audience reception of communications will be helpful here [1]. An interesting case study on how context can be understood, and how it influences the perception of emerging technologies comes from research on self-driving vehicles (SDVs). One notable factor is the lack of information provided by SDVs [1]. A factor that may also reduce trust in SDVs is the widespread data collection that is necessary for a connected network of SDVs to work efficiently [1], however recent evidence suggests that many appear willing to share data if it improves travel experience [22]. 
Further, individuals prefer SDVs whose algorithms are set so that they protect their passengers at all costs, yet they would prefer if others bought vehicles that are programmed to sacrifice passengers for the benefit of the majority (e.g., saving a higher number of pedestrians; [1]). More broadly, people seem to be concerned about hacking, misuse, legal issues, and safety [15]. Although it is evident that a car's ethical behaviour is outside the scope of SciComms, there is a clear opportunity to provide more information, and to be more transparent about what type of data collection is necessary for a safer and improved travel experience. Further, it can help inform which context to prioritize when determining a communication strategy. SDVs also give insights into how sociodemographics are relevant to public attitudes. Studies have identified that younger travelers [14] and those living in large cities [3] are more willing to adopt SDVs. Further, people in China and India had more positive opinions about SDVs than people in the US, UK, and Australia [3]. Identifying the sociodemographic divides in attitude is indicative of the forthcoming potential divides in the challenges SciComms about AI will face. Finally, [14] found that attitudes towards SDVs reflect the theoretical curve assumed in Diffusion of Innovations Theory [14]. SDVs, like other emerging technologies, will first be used by early adopters, successively followed by second-wave adopters, mainstream users, late adopters, and avoiders. The patterns exist within and between every age breakdown; however, older participants were more likely to resist SDVs. Younger generations, considered digital natives, are more trusting of intelligent systems and algorithmic decision-making; thus, as a group, they skew towards early adoption. SciComms initiatives, thus, will have different challenges for each group of adopters. The movement of users between these categories as the technology spreads will also add value to strategies that can be adaptive to shifting contexts. We are considering a broad definition of context, involving any variety of factors that can influence how information is received, interpreted, and integrated or discarded. Put simply, it means considering the audience's diverse needs in order to reach a communication goal, rather than a communicator considering only the information they want the audience to know. To that end, considering different frameworks for understanding relevant context can be useful in identifying the priority factors to consider in the basic design of an XAI system. One approach to understanding the context in relation to XAI comes from [12], who argue that people's reaction to explanations can be evaluated by measuring their _mental models_, _probability estimates_, _trust_, _knowledge_, _and performance_. Here, _mental models_ are representations of how people understand systems. _Probability estimates_ are the probability of future events occurring. _Trust_ is a multidimensional concept predictive of whether people choose to use AI. _Knowledge_ can be procedural - relating to changes in abilities - or semantic - relating to changes in factual knowledge. Finally, people's _performance_ on the task an AI is assisting them with will change due to the provided XAI explanations. The approach can identify what factors are the main barriers to communication. Some users may not trust the AI in the first place, while others may not have the procedural knowledge to use the tool. 
## 4 Influencing Context Influencing Context involves shaping communication about AI (the user challenge) as well as the baseline design of interface features to address the specific needs and concerns of a target group (the first creator challenge). This means not only delivering explanations that the audience can understand but also considering what issues and consequences might be most relevant to them. "Framing" is a key practice used in SciComms to influence public reception of scientific and technological topics. A frame is a storyline selected to help convey complex topics simply. Different frames prioritize different issues and consequences, guiding attention to what the communicator wants an audience to focus on out of the many topics within a complex set of information [15]. However, it is important that SciComms practitioners avoid _emphasis framing_ - the phenomenon in which the bias of presented information also biases judgments of that information [1]. Such framing can be used to influence a person's perceptions of an issue, either positively or negatively. For example, trust in the authenticity of AI work can be increased by emphasizing human involvement in its creation or training processes [11]. The goal of framing should be to draw the audience's attention to the most relevant aspects of their lives so that they can productively engage with the ideas, without biasing the judgments they go on to make. Picking which aspects of technology to focus on can influence whether there is any effect on public perception and understanding at all. Studies have found that for emerging technologies, such as carbon nanotubes and genetically modified foods, factual information about the technologies has a lower impact on people's reception of such technologies than background factors such as people's values [1]. The same study found that once people form their opinions, they tend to view factual information about a technology's features through _motivated reasoning_ - where they use this information to reaffirm their opinions. Even if someone wanted to emphasize features that they themselves see as positive, a receiver with a negative opinion may not be influenced at all. Framing that first addresses the storylines behind the beliefs may be a more effective approach. Another approach to influencing context is message design. Message design involves designing the message to be engaging, understandable, and memorable [15]. This can involve visuals, analogies, or narratives to make explanations more engaging, or repetition and reinforcement to make messages more memorable [15]. Audience research approaches, such as focus groups, can help inform and test narratives needed to reach specific groups [17]. Gaining such an understanding will also inform the development of AI technologies as they will emphasize user needs [18]. For example, when individuals are given a certain amount of control over intelligent systems, they are more likely to use them [1]. While the SciComms field has moved beyond the deficit model, it has not completely thrown it out. It is still important to provide groups with details about science and technology. Public stakeholders respond strongly to being provided information on the technology's functioning as well as broader implications in society [11]. Integrating with the other SciComms models is a matter of incorporating feedback and adjusting the message. This involves monitoring the audience's reaction to the communication and adjusting it as needed. 
For example, if the audience does not understand the explanation, it is important to determine whether this stems from a knowledge gap or a communications gap. Further, SciComms can utilize issue prioritization - identifying the most important issues and consequences for each group and focusing the communication on these issues. SciComms can also influence AI literacy through a participatory approach. Participatory approaches include dialogues and even giving stakeholders access to decision-making discussions on design and research priorities. This not only helps ensure a deep integration of context in the research and development of the technology, but the process also better equips stakeholder groups to engage with and critique the efforts of the field [1] and better present perspectives that can actually be taken up in development [13]. It is important that these "bottom-up" strategies happen early and often, not only so that the public's views, local knowledge, and lay expertise can be incorporated, but so that the public feels a part of the process, and consensus and reasonable expectations can be built before deployment [1]. This can also help supersede unhelpful or bad-faith framing. This practice has begun to a degree [14], but we would emphasize that collective dialogues are as much a part of influencing context as they are about understanding context [15]. Indeed, recent approaches have looked at how explanations provided by XAI can either nudge behaviour directly or boost people's capability, which in turn promotes better AI use [12]. From this perspective, local feature importance explanations and concept-based explanations align with disclosure nudges that give context-dependent information, while global feature importance explanations and counterfactual explanations are suited to boosting capability. Further inquiry would allow for a better selection and distribution of XAI methods and corresponding SciComms techniques. Translational work drawing on current understanding from AI, Behavioural Science, and Human-Computer Interaction would be advantageous. SciComms can help in the development of better human-in-the-loop (HITL) architectures, where ML systems communicate to humans via XAI explanations, and humans give feedback to update the model's performance [1]. HITL ML architectures are systems in which humans and machines collaborate to create solutions [17], using feedback loops where humans and machines interact and learn from each other [18]. This type of architecture is particularly useful in cases where the information and data available are not enough to provide an accurate solution. By combining human insight with computer algorithms and datasets, a more accurate and comprehensive solution can be created. ## 5 Integrating Context Integrating Context focuses on creating an interactive and adaptive communication interface that allows for a two-way dialogue between AI and people. The goal here is to establish a feedback loop where the AI learns from user interactions and modifies its explanations accordingly, employing SciComms models, and where the user learns from the AI and adjusts their expectations and understanding of AI behaviour. Such communication is necessary to maintain human agency, and integrating techniques for context discovery and adaptation in AI interfaces is an important topic of research. In other words, this is about extending the first two stages of Understanding and Influencing Context into an adaptive interface. 
To this end, computational linguistics is beginning to use science communications principles in conversational interface design that considers sensitive vocabulary when explaining a scientific concept, so as not to trigger a user's existing bias, or to create a harmful one [15]. Generative AI offers a large opportunity to perform this adaptation across the wide variety of context factors that need to be considered in a globally distributed technology [1]. The SciComms field has made strides to update itself to modern needs and media paradigms and has incorporated more computational methods that would better enable application to the fast-moving and widely distributed field of AI [13]. SciComms calls two-way dialogue _dialogic communication_. This means the AI would not just provide explanations, but also solicit feedback, ask clarifying questions, and adapt its responses based on user inputs. A case study for the benefits of two-way communication comes from preference inference and elicitation. [16] has argued that one way forward is to align AI to human preferences and that these can best be inferred from human behaviour. This perspective, however, ignores the fact that preference and behaviour are often influenced by the same factors [12], and that preference and behaviour have a bidirectional causal relationship [1]. As a result, we might want to design systems that attempt to elicit preferences or meta-preferences more directly by prompting users for information [1]. Another SciComms approach that may be relevant is _adaptive communication_ - AI dynamically adjusts its explanations based on the user's feedback. For example, if the user indicates they didn't understand an explanation, the AI could provide a simpler explanation or use a different analogy. The approach can be paired with an iterative design for XAI models - continuously refining the AI-user interface based on user feedback and testing results. This would result in a cyclical process of design, testing, feedback, and redesign, aimed at continuously improving the communication between the AI and the user. One risk here is that a battery of questions probing for user info may result in disengagement of the user. Providing explanatory features that prompt more questioning from the user may be beneficial here. In a generative interface, for example, showing the hidden prompts that guide its answer, showing sources, or indicating the effect of increasing the temperature on the likelihood of inaccurate hallucinations can help the user to discern how an output was reached and ask follow-up questions that imply their context. Coupled with a simple suggestion to question the output and provide more context, this could prompt the user to offer more useful variables for the AI agent to adapt to. Larger context windows in AI models would also increase the memory of these systems, allowing them to learn and adapt to their users over time and remain tuned to their needs. Other risks include an AI model getting stuck in a particular tuning to a user and struggling to update to a user making significant changes in attitude or experiencing a large change in context. The user may also become overconfident in the AI system and stop questioning the outputs. For both user and machine it is ideal to maintain a healthy level of communicated uncertainty, and defining parameters for measuring the success of these systems will be a critical area of research. ## 6 Conclusion AI has a communications challenge. To create human-centric AI, it is important that XAI is able to adapt to different users. 
This has often been confused with trying to increase users' trust in AI without understanding why they might not trust it in the first place. SciComms is a field with rich insights into how to understand users so as to improve public engagement and expectations of AI systems, and its approaches could be employed to help AI systems better adapt to their particular users. Such integration can result in more adaptive XAI systems that mimic the way a skillful human communicator would adapt their explanations to different audiences in different contexts. Building systems with this ability to adapt explanations is now possible, where an explanation can be completely rephrased while retaining its inherent meaning. We argue that XAI needs to move towards _adaptive explainable artificial intelligence_ - a recommender system that predicts the preferred explanations of a specific user in a particular context. There is a wide area of research to pursue, and we propose a study of research gaps to identify useful areas of overlap between SciComms and XAI. To build adaptive XAI, data would be needed on how certain people react to different explanations while performing distinct tasks in particular contexts. There have been recent proposals for XAI evaluation paradigms [14]. Using the data collected from such paradigms in an adaptive system would involve building a simple, interpretable model capable of recommending the right XAI method to the right person for a given task. That data would contain well-understood variables, so a simple model can also produce salient explanations of its own process by default -- a kind of 'meta-interpretability' that stops new black boxes from hindering human-AI interaction.
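As a purely hypothetical sketch of the kind of simple, interpretable recommender described above, the snippet below fits a shallow decision tree that maps coarse user-context features to a preferred explanation style. The feature names, labels, and toy data are invented for illustration only and do not come from any existing study or dataset; a real system would be trained on empirical XAI-evaluation data of the kind discussed above.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical illustration only: toy context features and explanation labels.
# features: [AI literacy (0-2), domain expertise (0-2), task criticality (0-2)]
X = [[0, 0, 0], [0, 2, 1], [1, 1, 2], [2, 0, 1],
     [2, 2, 2], [1, 0, 0], [0, 1, 2], [2, 1, 0]]
y = ["analogy", "example-based", "feature-importance", "counterfactual",
     "feature-importance", "analogy", "example-based", "counterfactual"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(model.predict([[1, 2, 2]]))  # recommended explanation style for a new user context
# The fitted rules are themselves human-readable -- the 'meta-interpretability'
# property mentioned above.
print(export_text(model, feature_names=["literacy", "expertise", "criticality"]))
```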
2303.18028
Theory of rheology and aging of protein condensates
Biological condensates are assemblies of proteins and nucleic acids that form membraneless compartments in cells and play essential roles in cellular functions. In many cases they exhibit the physical properties of liquid droplets that coexist in a surrounding fluid. Recently, quantitative studies on the material properties of biological condensates have become available, revealing complex material properties. In vitro experiments have shown that protein condensates exhibit time dependent material properties, similar to aging in glasses. To understand this phenomenon from a theoretical perspective, we develop a rheological model based on the physical picture of protein diffusion and stochastic binding inside condensates. The complex nature of protein interactions is captured by a distribution of binding energies, incorporated in a trap model originally developed to study glass transitions. Our model can describe diffusion of constituent particles, as well as the material response to time-dependent forces, and it recapitulates the age dependent relaxation time of a Maxwell glass observed experimentally both in active and passive rheology. We derive generalized fluctuation-response relations for our model, in which the relaxation function does not obey time translation invariance. Our study sheds light on the complex material properties of biological condensates and provides a theoretical framework for understanding their aging behavior.
Ryota Takaki, Louise Jawerth, Marko Popović, Frank Jülicher
2023-03-31T13:07:13Z
http://arxiv.org/abs/2303.18028v2
# Theory of rheology and aging of protein condensates ###### Abstract Biological condensates are assemblies of proteins and nucleic acids that form membraneless compartments in cells and play essential roles in cellular functions. In many cases they exhibit the physical properties of liquid droplets that coexist in a surrounding fluid. Recently, quantitative studies on the material properties of biological condensates have become available, revealing complex material properties [1; 2]. In vitro experiments have shown that protein condensates exhibit time dependent material properties, similar to aging in glasses. To understand this phenomenon from a theoretical perspective, we develop a rheological model based on the physical picture of protein diffusion and stochastic binding inside condensates. The complex nature of protein interactions is captured by a distribution of binding energies, incorporated in a trap model originally developed to study glass transitions [3]. Our model can describe diffusion of constituent particles, as well as the material response to time-dependent forces, and it recapitulates the age dependent relaxation time of a Maxwell glass observed experimentally both in active and passive rheology. We derive generalized fluctuation-response relations for our model, in which the relaxation function does not obey time translation invariance. Our study sheds light on the complex material properties of biological condensates and provides a theoretical framework for understanding their aging behavior. + Footnote †: Corresponding author: [email protected] ## I Introduction The formation of biological condensates by phase separation of proteins and nucleic acids in the cell has become a new paradigm in molecular biology over the last decade [4; 5; 6]. Such condensates provide membraneless biochemical compartments with liquid-like properties. They typically exhibit a spherical shape to minimize the surface tension and have properties of droplets in a fluid environment. Recent studies suggest that rheological properties of biomolecular condensates can be considerably richer than those of simple liquids [1; 7; 8], which may have biological consequences [8; 9; 10; 11]. Recently, the rheological properties of RNA-associated condensates of PGL-3 and of FUS protein condensates were studied in vitro using active and passive microrheology [1]. The study revealed time-dependent material properties of these protein condensates, summarized as follows: (1) The rheological properties of the condensates depend on the waiting time (\(t_{w}\)) between droplet formation and experiment; they are well fit by a Maxwell fluid model with elastic behavior on short time scales up to the relaxation time (\(\tau_{c}\)) and liquid behavior at longer time scales. (2) The relaxation time, \(\tau_{c}\), of the Maxwell fluid increases for longer waiting time \(t_{w}\). The increase of \(\tau_{c}\) is associated with an increase of viscosity, not a change of elasticity. (3) Various quantities reflecting the material property, such as complex modulus and mean squared displacement, collapse on a master curve upon rescaling of frequency and modulus for different \(t_{w}\). These time-dependent rheological properties suggest that the protein condensates behave as an aging Maxwell fluid, termed a Maxwell glass, in reference to aging phenomena in glassy materials [12; 13]. 
Viscoelastic properties of condensates have been reported in multiple experimental studies. Alshareedah _et al._[2] found that condensate viscoelasticity can be modulated by varying the amino-acid sequence of condensate-forming proteins. Ghosh _et al._[7] investigated the relationship between condensate rheology and fusion dynamics, showing that shorter relaxation times lead to faster fusion. Theory on viscoelastic condensates has addressed the shape dynamics of condensate droplets [14], as well as salt dependence of viscoelastic material properties [15]. A two fluid model describing the transition from a liquid to an elastic droplet was proposed to discuss the observed solid-like condensate behaviours [16]. Shen _et al._[17] reported the spatially heterogeneous condensate organisation during the transition from a liquid to a solid state in an aging condensate. The aging and complex rheology of non-biological materials have a long history of research [18] due to their abundance and close connection to daily life [19]. A comprehensive experimental study of aging materials by Struik dates back to the 1970s [20]. More recently, aging colloidal glasses have been studied using microrheology [21]. The soft glassy rheology model has been developed to describe the aging and rheology of soft materials [22; 23; 24], based on seminal works by Bouchaud and coworkers [3; 25]. Recently, Lin [26] proposed a related mean-field model for condensate aging, based on the assumption of strongly correlated transitions between trap energies, in contrast to the soft glassy rheology model. Calculating the linear response function in this model yields a linear aging of condensate relaxation time-scale. In this work, we develop a mean-field model of aging biological condensates that can describe their time-dependent material properties, observed in experiments. We clarify how the aging of the protein condensates is reflected in active and passive microrheology. Active and passive rheology methods are illustrated in Fig.1a. The structure of the paper is as follows. In section II, we propose a mean-field model to describe the binding and unbinding of diffusive elements inside the protein condensates. Using the unbound probability of elements in condensates, we write the constitutive equation of the aging Maxwell fluid, leading to the relaxation function for the Maxwell glass (section III.1). In section III.3, we examine the time-dependent rheology of the model using active rheology and propose the time-dependent complex modulus. Finally, in section IV, we derive fluctuation-response relations between response functions and mean squared displacement of the diffusive elements, which can be employed in passive rheology experiments. We conclude with a discussion of our results. For readers unfamiliar with the subject, we have included an introduction to the rheology of aging materials in Appendix A, which summarizes the essential concepts employed throughout the paper. ## II Trap model of condensate aging We introduce a mean-field model of an aging protein condensate composed of cross-linked elements forming an elastic network. These elements occasionally unbind and freely diffuse before attaching at a new location, see Fig.1b. The dynamics of unbinding is determined by the binding energy \(E\) of individual cross-links. To describe cross-linking of large proteins in a complex environment we draw binding energies from a distribution \(\rho(E)\). 
The state of the system at time \(t\) is described by probabilities \(p_{b}(E,t)\) and \(P_{u}(t)\) to find the system bound with energy \(E\) or unbound, respectively. The dynamical equations for these probabilities are \[\frac{1}{\Gamma_{0}}\frac{\partial p_{b}(E,t)}{\partial t}=-p_{b}(E,t)e^{-\beta E}+P_{u}(t)\rho(E), \tag{1a}\] \[\frac{1}{\Gamma_{0}}\frac{\partial P_{u}(t)}{\partial t}=-P_{u}(t)+\int_{0}^{\infty}dEp_{b}(E,t)e^{-\beta E}, \tag{1b}\] where \(\beta\equiv 1/k_{B}T\), with temperature \(T\) and Boltzmann constant \(k_{B}\). \(T\) is the temperature of the heat bath to which the condensates are coupled. Eq.(1) is an extension of the trap model by Bouchaud [3; 25]. The first term of the right-hand side in Eq.(1a) describes the transition from a bound state with energy \(-E\) to the unbound state at \(E=0\), which occurs at a rate \(\Gamma_{0}e^{-\beta E}\), where \(\Gamma_{0}\) is a rate parameter and the binding energy \(E\) is positive. The second term describes transitions from the unbound state to a bound state, which occur at rate \(\Gamma_{0}\) with the new binding energy drawn from the density \(\rho(E)\). Here, we choose an exponential distribution of binding energies, \(\rho(E)=\beta_{0}e^{-\beta_{0}E}\), which can describe both equilibrium and aging regimes of the model [3]. The parameter \(\alpha\equiv\beta_{0}/\beta\) controls qualitatively different solutions of Eq.(1). For \(\alpha>1\), Eq.(1) relaxes to an equilibrium steady state with \(p_{b}^{\rm eq}(E)\sim\rho(E)\exp(\beta E)\), and \[P_{u}^{\rm eq}=\frac{\alpha-1}{2\alpha-1}\quad, \tag{2}\] see Appendix C. As shown in [3], for \(0<\alpha<1\), \(p_{b}^{\rm eq}(E)\) is no longer normalizable and the equilibrium state of Eq.(1) does not exist. The probability \(P_{u}(t)\) vanishes asymptotically as \[P_{u}(t)\simeq\kappa\big{(}\Gamma_{0}t\big{)}^{\alpha-1};\quad\kappa=\frac{1}{\alpha}\frac{\sin\big{(}\alpha\pi\big{)}}{\pi\Gamma[\alpha]}, \tag{3}\] as derived in Appendix C. Here \(\Gamma[\alpha]\) denotes the Gamma function. Fig.2 shows \(P_{u}(t)\) for the initial condition \(P_{u}(t=0)=1\), evaluated for different values of \(\alpha\), showing the equilibrium and aging dynamics; the equilibrium state of Eq.(1) is attained only for \(\alpha>1\). Figure 1: Schematics of the model and methods of microrheology. **(a) Left**: Schematics of active rheology. The external force (\(F\)) is applied to the protein condensates (green spheres) having complex modulus \(G(\omega)\) using optical tweezers (yellow). The relation between strain and stress gives the material property of condensates. **Right**: Schematics of passive rheology. The motion of the tracer element (red) embedded into the condensate is tracked. The element’s mean square displacement encodes the condensate’s material property, which manifests in the diffusion coefficient \(D(t)\). **(b)** Schematics of the model. The diffusing element takes two states. One is the bound state, where chemical cross-links are densely connected at the reaction sites (green circles) so that the diffusion of the elements is hindered. The other is the unbound state, where the diffusing elements can freely undergo diffusive motion. We denote the probability density of the bound state as \(p_{b}(E,t)\) and the probability of the unbound state as \(P_{u}(t)\) (see the main text for the detail). To complete the model of an aging protein condensate we introduce a constitutive equation that describes the effects of binding and unbinding of cross-links in an elastic network. Cross-link lifetimes are accounted for by the trap model in Eqs.(1). 
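Before turning to the constitutive equation, the following minimal numerical sketch (not part of the original work; grid size, energy cutoff, tolerances and parameter values are illustrative choices) integrates Eqs.(1) on a truncated energy grid and compares the resulting unbound probability \(P_{u}(t)\) with the aging asymptotics of Eq.(3).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma as gamma_fn

# Illustrative sketch of Eqs. (1): discretize the binding energy E, integrate the
# coupled equations for p_b(E,t) and P_u(t), and compare with Eq. (3).
Gamma0, beta0, beta = 1.0, 1.0, 2.0          # alpha = beta0/beta = 0.5 < 1: aging regime
alpha = beta0 / beta
E, dE = np.linspace(0.0, 40.0, 801, retstep=True)
rho = beta0 * np.exp(-beta0 * E)             # exponential density of binding energies
escape = np.exp(-beta * E)                   # unbinding rate e^{-beta E} (in units of Gamma0)

def rhs(t, y):
    p_b, P_u = y[:-1], y[-1]
    dp_b = Gamma0 * (-p_b * escape + P_u * rho)
    dP_u = Gamma0 * (-P_u + np.sum(p_b * escape) * dE)   # rectangle-rule integral over E
    return np.append(dp_b, dP_u)

y0 = np.zeros(E.size + 1)
y0[-1] = 1.0                                 # initial condition p_b(E,0) = 0, P_u(0) = 1
t_eval = np.logspace(-2, 3, 60)
sol = solve_ivp(rhs, (0.0, t_eval[-1]), y0, t_eval=t_eval, method="LSODA", rtol=1e-6, atol=1e-10)
P_u = sol.y[-1]

kappa = np.sin(np.pi * alpha) / (alpha * np.pi * gamma_fn(alpha))
print("numerical  P_u(t = 1000):", P_u[-1])
print("Eq. (3)    P_u(t = 1000):", kappa * (Gamma0 * t_eval[-1]) ** (alpha - 1.0))
```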
The network has an elastic modulus \(G_{0}\). As cross-links detach and attach in a new configuration the network is remodelled into a new reference state. The shear strain rate due to this remodelling is \(\dot{\epsilon}_{u}=P_{u}(t)\sigma(t)/\eta_{0}\), where \(\sigma(t)\) is the shear stress and \(\eta_{0}\) is the viscosity of detached network components. The overall shear strain rate \(\dot{\epsilon}\) is a combination of the viscous and elastic responses of the material \[\dot{\epsilon}(t)=\frac{\sigma(t)}{\eta_{0}}P_{u}(t)+\frac{\dot{\sigma}(t)}{G _{0}}. \tag{4}\] This is an equation of a viscoelastic Maxwell material with a viscosity \(\eta_{0}/P_{u}(t)\) that can exhibit aging dynamics described in Eq.(3). ## III Active rheology of aging condensates ### Relaxation function of a Maxwell glass We now derive and discuss the linear response of a viscoelastic material described by Eqs.(1) and (4). In order to compare our model with rheology experiments, we solve Eq.(4) for the shear stress \[\sigma(t)=\int_{0}^{t}dt^{\prime}K(t,t^{\prime})\dot{\epsilon}(t^{\prime}), \tag{5}\] where \[K(t,t^{\prime})=G_{0}e^{-\frac{G_{0}}{\eta_{0}}\int_{t^{\prime}}^{t}dt^{ \prime\prime}P_{u}(t^{\prime\prime})} \tag{6}\] is the relaxation function and \(t=0\) corresponds to the sample preparation time at which \(\sigma(0)=0\). For \(\alpha>1\), the equilibrium steady state \(P_{u}^{\rm eq}(t)\) exists and the relaxation function becomes \(K(t-t^{\prime})=G_{0}\exp\big{(}-P_{u}^{\rm eq}G_{0}/\eta_{0}\cdot(t-t^{ \prime})\big{)}\). This is the exponential relaxation with the rate \(P_{u}^{\rm eq}G_{0}/\eta_{0}\), which corresponds to a Maxwell fluid. For \(0<\alpha<1\), no steady state exists, and the relaxation function exhibits glassy behavior. In the asymptotic regime, \(P_{u}(t)\) follows Eq.(3), from which we obtain: \[K(t,t^{\prime})\simeq G_{0}\exp\Big{[}-\frac{\kappa G_{0}}{\alpha\Gamma_{0} \eta_{0}}\big{(}(\Gamma_{0}t)^{\alpha}-(\Gamma_{0}t^{\prime})^{\alpha}\big{)} \Big{]}. \tag{7}\] Therefore in the aging regime, the relaxation function takes the form of stretched exponential that often appears in the relaxation of glass forming materials [27; 28]. Note that the time translational invariance is broken in Eq.(7), a signature of the aging regime. We refer to the relaxation function in Eq.(7) as the relaxation function of an aging Maxwell fluid, i.e., Maxwell glass. ### Age dependent relaxation time We consider an experimental protocol where the system is prepared at \(t=0\) and system is strained starting at the waiting time \(t_{w}\). The resulting stress is written as \[\sigma(t)\simeq\int_{t_{w}}^{t}dt^{\prime}K(t,t^{\prime})\dot{\epsilon}(t^{ \prime}), \tag{8}\] where \(\epsilon(t_{w})=0\). We consider the relaxation function in terms of the observation time \(\tau=t-t_{w}\). In the limit of a short observation time compared to the waiting time \(\tau\ll t_{w}\), the relaxation function \(K(t_{w}+\tau,t_{w}+\tau^{\prime})\) can be approximated by a time translation invariant function \[K_{t_{w}}(\tau-\tau^{\prime})\equiv G_{0}e^{-\frac{\kappa G_{0}}{\eta_{0}}( \Gamma_{0}t_{w})^{\alpha-1}(\tau-\tau^{\prime})}. \tag{9}\] Figure 2: Dynamics of the unbound probability \(P_{u}(t)\). Solid lines are numerically obtained from Eq.(1) and dashed lines are analytical solutions from Eq.(2) or Eq.(3). The initial condition \(p_{b}(E,0)=0\) (\(P_{u}(t=0)=1\)). 
We set \(\Gamma_{0}=1\), which characterizes the time scale of the initial relaxation (\(t\approx 1/\Gamma_{0}\)), and measure the time (\(t\)) in the unit of \(1/\Gamma_{0}\). We fix \(\beta_{0}\) to \(1\) and vary \(\beta\). **(a)**\(P_{u}(t)\) for \(\alpha\geq 1\). The dashed lines in cyan are the analytical solutions from Eq.(2). The equilibrium solutions exist for \(\alpha>1\). **(b)**\(P_{u}(t)\) for \(\alpha<1\). \(P_{u}(t)\) shows aging dynamics (slow relaxation) for long time regime. The dashed lines in cyan are the analytical solutions from Eq.(3). This relaxation function shows that a Maxwell glass behaves as a Maxwell fluid when observed on short times \(\tau\ll t_{w}\), but with age-dependent relaxation time \[\tau_{c}(t_{w})=\frac{\eta_{0}}{\kappa G_{0}}(\Gamma_{0}t_{w})^{1-\alpha}. \tag{10}\] This result provides a connection between aging in an underlying cross-linker network and the experimentally observed age-dependent relaxation time reported in the protein condensates rheology experiments [1]. ### Instantaneous complex modulus The relaxation time \(\tau_{c}\) in a Maxwell fluid is related to the complex modulus as \(G(\omega)=i\omega\tau_{c}G_{0}/(1+i\omega\tau_{c})\). The complex modulus \(G(\omega)=G^{\prime}(\omega)+iG^{\prime\prime}(\omega)\), where \(G^{\prime}(\omega)\) and \(G^{\prime\prime}(\omega)\) represent the storage and loss moduli, respectively, characterizes the linear response of a time-translation-invariant material as a function of the angular frequency \(\omega\). However, for an aging material, \(G(\omega)\) is not a well-defined observable. Nevertheless, a frequency-dependent linear response can still be employed if the observation time window \(\tau\) is short enough such that the material properties do not undergo significant changes during the observation (Appendix A). To remove the restriction of a short observation time window, which limits the applicability of active rheology for aging material, we now introduce an analytic signal method that allows us to define the instantaneous complex modulus of an aging material \(G(\omega,t,t_{w})\) at time \(t\) and at frequency \(\omega\), similar to the time-varying viscoelastic spectrum [24], see Appendix D. The analytic signal of a function \(f(t)\) is defined as \(f_{a}(t)\equiv f(t)+i\mathcal{H}[f(t)](t)\), where \(\mathcal{H}\) is the Hilbert transform, see Appendix D. The analytic signal \(f_{a}(t)\) is a complex function and can be written in the polar form, \(f_{a}(t)=|f_{a}(t)|\exp(i\varphi(t))\), where \(|f_{a}(t)|\) is the instantaneous amplitude, also called envelope, and \(\varphi(t)=\arg[f_{a}(t)]\) is the instantaneous phase of the signal \(f(t)\). Using this definition of the analytic signal, we define the instantaneous complex modulus as \[\begin{split} G(\omega,t,t_{w})\equiv&\frac{\sigma _{a}(\omega,t,t_{w})}{\epsilon_{a}(\omega,t)}\\ =&\frac{|\sigma_{a}(\omega,t,t_{w})|}{|\epsilon_{a} (\omega,t)|}\exp\big{(}i\delta\varphi(\omega,t,t_{w})\big{)},\end{split} \tag{11}\] where \(\delta\varphi(\omega,t,t_{w})\) is the instantaneous phase difference between shear strain and stress. Here \(\sigma_{a}(\omega,t,t_{w})\) is the analytic signal of measured shear stress \(\sigma(\omega,t,t_{w})\) in response to an imposed sinusoidal shear strain \(\bar{\epsilon}(\omega,t,t_{w})=\Theta(t-t_{w})\epsilon(\omega,t)\) with frequency \(\omega\) starting at \(t=t_{w}\), where \(\epsilon(\omega,t)=\epsilon_{0}\cos\left(\omega t+\varphi_{0}\right)\) and \(\Theta\) is the Heaviside step function. 
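The remaining symbols are defined in the next paragraph. As a minimal illustration of this construction (a sketch with illustrative parameters, not the numerical procedure used in the paper), one can drive the constitutive equation (4) with a sinusoidal strain, take the asymptotic \(P_{u}(t)\) of Eq.(3), and read off the instantaneous complex modulus of Eq.(11) from the analytic signals of strain and stress:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.special import gamma as gamma_fn

# Illustrative sketch: aging Maxwell model of Eq. (4) under an oscillatory strain,
# with the instantaneous complex modulus extracted via Eq. (11).
G0, eta0, Gamma0, alpha = 1.0, 0.5, 1.0, 0.5
kappa = np.sin(np.pi * alpha) / (alpha * np.pi * gamma_fn(alpha))
omega, eps0 = np.pi / 100, 1e-2

t = np.arange(1.0, 2000.0, 0.01)              # start at t = 1 so that Eq. (3) applies
P_u = kappa * (Gamma0 * t) ** (alpha - 1.0)   # aging unbound probability
strain = eps0 * np.cos(omega * t)
strain_rate = -eps0 * omega * np.sin(omega * t)

dt = t[1] - t[0]
stress = np.zeros_like(t)                     # Eq. (4): d(sigma)/dt = G0*(strain_rate - P_u*sigma/eta0)
for i in range(len(t) - 1):
    stress[i + 1] = stress[i] + dt * G0 * (strain_rate[i] - P_u[i] * stress[i] / eta0)

G_inst = hilbert(stress) / hilbert(strain)    # Eq. (11): ratio of analytic signals
for frac in (0.25, 0.75):                     # report two ages, away from window edges
    i = int(frac * len(t))
    print(f"t = {t[i]:7.1f}   G' = {G_inst[i].real:.3f}   G'' = {G_inst[i].imag:.3f}")
```

With these illustrative parameters the response at fixed \(\omega\) becomes progressively more elastic as the sample ages, consistent with the shift of \(G(\omega,t)\) to lower frequencies discussed below.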
\(\epsilon_{0}\) and \(\varphi_{0}\) are the amplitude and initial phase of the shear strain, respectively. The analytical signal of the strain is \(\epsilon_{a}(\omega,t)=\epsilon_{0}e^{i(\omega t+\varphi_{0})}\). The instantaneous complex modulus \(G(\omega,t,t_{w})\) is a generalization of the conventional complex modulus \(G(\omega)\) to the time dependent signals and they become equal for a time translation invariant system, see Appendix D. It reduces to the time-varying viscoelastic spectrum defined in Ref.[24] for slow aging limit as discussed in Appendix D. We use the instantaneous complex modulus to analyze the rheology of our model. For simplicity we choose a waiting time \(t_{w}=0\), which does not affect aging process in our model. We therefore omit the \(t_{w}\) dependence in the following. We solve Eq.(4) with Eq.(1) numerically for the sinusoidal shear strain as input \(\bar{\epsilon}(\omega,t)\) and obtain the shear stress \(\sigma(\omega,t)\) as output. Fig.3a shows the shear strain and stress for \(\omega=\pi/10\) and \(\omega=\pi/100\) for \(\alpha=10\) and \(\alpha=0.5\), respectively. For \(\alpha=10\), the strain is stationary, reflecting the equilibrium viscosity in Eq.(4). In contrast, for \(\alpha=0.5\) the amplitude of shear stress increases in time due to aging, reflected in changing viscosity \(\eta_{0}/P_{u}(t)\). In Fig.3b, we calculate the real and imaginary part of the instantaneous complex modulus, \(G^{\prime}(\omega,t)\) and \(G^{\prime\prime}(\omega,t)\), respectively, for a range of input frequencies. For \(\alpha=10\), \(G(\omega,t)\) does not depend on the time. On the contrary, we observe a striking difference for \(\alpha=0.5\): the instantaneous complex modulus shifts to lower frequencies over time, showing that the characteristic relaxation time of the material increases, as shown in Fig.3b, right panel. Such aging behavior was observed experimentally in the protein condensates [1]. Moreover, Jawerth _et al._[1] demonstrated that experimentally measured complex moduli in the Maxwell glass collapse when rescaled by \(G_{c}\) and frequencies by \(\omega_{c}\), where \(G_{c}\) and \(\omega_{c}\) are defined by \(G^{\prime}(\omega_{c},t)=G^{\prime\prime}(\omega_{c},t)=G_{c}\). We show in Fig.3c that our numerically evaluated complex moduli indeed collapse on a single master curve of the Maxwell fluid when rescaled moduli and frequency by \(G_{c}\) and \(\omega_{c}\), respectively. ## IV Generalized fluctuation-response relation in aging condensates In an equilibrium system, the relaxation of spontaneous fluctuations and the linear response to an external perturbation are closely related by the fluctuation-dissipation theorem [29]. Using the generalized Stokes-Einstein relation derived from the fluctuation-dissipation theorem, rheological properties of the material can be determined from equilibirum fluctuations [1; 30]. Although the equilibrium fluctuation-response relations do not apply in the aging materials, we derive specific fluctuation-response relations that characterise the aging Maxwell fluid. To this end, we consider a spatially resolved version of Eq.(1) that takes into account diffusion of unbound elements \[\frac{1}{\Gamma_{0}}\frac{\partial p_{b}(x,E,t)}{\partial t}= -p_{b}(x,E,t)e^{-\beta E}+p_{u}(x,t)\rho(E), \tag{12a}\] \[\frac{1}{\Gamma_{0}}\frac{\partial p_{u}(x,t)}{\partial t}= \frac{D_{0}}{\Gamma_{0}}\frac{\partial^{2}p_{u}(x,t)}{\partial x^ {2}}-p_{u}(x,t)\] \[+\int_{0}^{\infty}dEp_{b}(x,E,t)e^{-\beta E}. 
\tag{12b}\] In Eq.(12), \(p_{b}(x,E,t)\) is the probability density of elements bound at position \(x\) with energy \(E\) at time \(t\) and \(p_{u}(x,t)\) is the density of diffusing elements at position \(x\) at time \(t\). The mean square displacement of fluctuating elements is \[\langle\Delta x^{2}\rangle(t)=\Delta_{u}(t)+\int_{0}^{\infty}dE\Delta_{b}(E,t), \tag{13}\] where we have defined the positional variance of diffusing and bound states, respectively, as \[\Delta_{u}(t)\equiv \int_{-\infty}^{\infty}dxx^{2}p_{u}(x,t); \tag{14}\] \[\Delta_{b}(E,t)\equiv \int_{-\infty}^{\infty}dxx^{2}p_{b}(x,E,t).\] Using Eqs.(12) and (14), we obtain the time evolution of the mean squared displacement, \[\frac{1}{\Gamma_{0}}\frac{\partial\Delta_{b}(E,t)}{\partial t} =-\Delta_{b}(E,t)e^{-\beta E}+\Delta_{u}(t)\rho(E), \tag{15a}\] \[\frac{1}{\Gamma_{0}}\frac{\partial\Delta_{u}(t)}{\partial t} =2\frac{D_{0}}{\Gamma_{0}}P_{u}(t)-\Delta_{u}(t)+\int_{0}^{\infty}dE\Delta_{b}(E,t)e^{-\beta E}, \tag{15b}\] with the definition, \[P_{u}(t)\equiv\int_{-\infty}^{\infty}dxp_{u}(x,t). \tag{16}\] The expression for the effective diffusion coefficient, \(D(t)\), can be obtained by taking the time derivative of Eq.(13) and using Eq.(15), \[\frac{d}{dt}\langle\Delta x^{2}\rangle(t)=2D_{0}P_{u}(t), \tag{17}\] leading to \[D(t)\equiv D_{0}P_{u}(t). \tag{18}\] Eq.(17) states that the effective diffusion coefficient is proportional to the probability that the element is in the diffusive state. We now obtain a relation between the aging relaxation function and the mean squared displacement at different times using Eq.(6) and Eq.(17) \[K(t,t^{\prime})=G_{0}\exp\Big{(}-\frac{G_{0}}{2D_{0}\eta_{0}}\big{(}\langle\Delta x^{2}\rangle(t)-\langle\Delta x^{2}\rangle(t^{\prime})\big{)}\Big{)}. \tag{19}\] This exact relation connects the time dependent rheology \(K(t,t^{\prime})\) of the Maxwell glass to the passive rheology characterised by the mean squared displacement \(\langle\Delta x^{2}\rangle(t)\). Figure 3: Active rheology for the Maxwell fluid and glass. In the case of \(\alpha=10.0\) the system has a stationary equilibrium state and thus behaves as a conventional Maxwell fluid. For \(\alpha=0.5\), the system shows aging, thus behaving as the Maxwell glass. The unit time is \(1/\Gamma_{0}\) in Eq.(1). **(a)** The input shear strain \(\bar{\epsilon}(\omega,t)\) (cyan solid line) and the output shear stress \(\sigma(\omega,t)\) (orange dashed lines). \(\omega=\pi/10\) for \(\alpha=10.0\) and \(\omega=\pi/100\) for \(\alpha=0.5\). **(b)** The instantaneous complex modulus \(G(\omega,t)\) in the equilibrium and aging regimes. The real and imaginary parts of \(G(\omega,t)\) are \(G^{\prime}(\omega,t)\) and \(G^{\prime\prime}(\omega,t)\), respectively. **(c)** The collapse of the \(G(\omega,t)\) at different times onto the single master curve of the Maxwell fluid (dashed line in cyan). The bare viscosity is set to \(\eta_{0}=0.5\). We fix \(\beta_{0}\) to \(1\) and vary \(\beta\). Detailed numerical procedures are in Appendix F. Alternatively, we can write a second relation between mean squared displacement and linear response function. Using the strain-stress response function \(\chi(t,t^{\prime})\) defined as \[\epsilon(t)=\int_{0}^{t}\chi(t,t^{\prime})\sigma(t^{\prime})dt^{\prime}, \tag{20}\] we obtain (see Appendix E) \[\Theta(t-t^{\prime})\frac{d}{dt^{\prime}}\langle\Delta x^{2}(t^{\prime})\rangle=2k_{B}T\chi(t,t^{\prime}). 
\tag{21}\] Eq.(21) stems from the fact that both the time dependence of the diffusion coefficient \(D(t)\) and of the active response given in Eq.(4) are governed by \(P_{u}(t)\). We have used \(D_{0}=k_{B}T/\eta_{0}\) implying that the diffusion coefficient of the unbound elements satisfies the Einstein relation. Note that Eq.(21) is similar to but different from the time translation invariant fluctuation dissipation theorem in equilibrium. It applies to the aging Maxwell model and has both \(t\) and \(t^{\prime}\) dependence, signifying the glassy behavior. ## V Discussion We have presented a mean-field model of aging biological condensates, based on a minimal trap model that exhibits glassy behaviour [3]. Our model recapitulates aging rheology recently observed in biological condensates termed Maxwell glass [1]. The relaxation function in our model exhibits a stretched exponential decay at low temperatures, characteristic of glassy systems. Consequently, the Maxwell relaxation time is age dependent and increases with waiting time \(t_{w}\) as a power law \(\tau_{c}\sim t_{w}^{1-\alpha}\). For such an aging material for which time translation invariance is not obeyed, defining the frequency dependent linear response function poses a challenge. To overcome this challenge, we introduce the time-dependent instantaneous complex modulus as a generalization of the conventional complex modulus at steady state. The instantaneous complex modulus is based on analytic signal construction and remains well-defined even in non-stationary systems where approximative measures of the conventional complex modulus would fail. A power-law dependence of the relaxation time on the waiting time has been observed in different systems. The aging exponent \(\mu\), which describes the growth of relaxation time with waiting time as \(\tau_{c}\sim t_{w}^{\mu}\), has been introduced in the seminal work [20]. In many polymeric materials, the relaxation time grows sublinearly, \(\mu\simeq 0.5-1\)[12]. In our model \(\mu=1-\alpha\) [see Eq.(10)] and in the aging regime with \(0<\alpha<1\), we find a sublinear dependence of \(\tau_{c}\) on \(t_{w}\) for a Maxwell glass, consistent with the sublinear behavior seen in many experiments on non-biological materials. Interestingly, recent experiments suggest that \(\mu\) could be larger than 1 in protein condensates. For example, for the PGL-3 protein, \(\mu\simeq 6.4\) and \(\mu\simeq 2.1\) were estimated for different salt conditions (150 mM KCl and 100mM KCl, respectively) [1]. Our current model does not account for such high values of \(\mu\), as they would require negative values of \(\alpha\) and we currently do not have an explanation of this discrepancy. There are only very few other systems where \(\mu>1\) was measured. An example is polycarbonate (see for instance Fig.15 in [20]). Further research will be required to find out whether \(\mu>1\) is a robust feature of biological protein condensates, and if so, what is the origin of such a different behavior in comparison to aging of non-biological polymers. Finally, we have obtained an exact relation between the relaxation function and the mean squared displacement of particles in the aging regime (Eq. (19)). This relation is similar to the fluctuation-dissipation theorem that holds for equilibrium systems but it applies to the out-of-equilibrium Maxwell glass. 
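As a minimal numerical sketch of this connection (illustrative parameter values; not the analysis pipeline of Ref. [1]), the snippet below builds the mean squared displacement from Eq.(17) in the aging regime and converts it into the relaxation function via Eq.(19):

```python
import numpy as np
from scipy.special import gamma as gamma_fn

# Illustrative sketch: passive-rheology route from the MSD to the relaxation function.
G0, eta0, D0, Gamma0, alpha = 1.0, 0.5, 2.0, 1.0, 0.5    # D0 = k_B T / eta0 with k_B T = 1
kappa = np.sin(np.pi * alpha) / (alpha * np.pi * gamma_fn(alpha))

t = np.arange(1.0, 1000.0, 0.005)
P_u = kappa * (Gamma0 * t) ** (alpha - 1.0)              # Eq. (3), aging regime
msd = 2.0 * D0 * np.cumsum(P_u) * (t[1] - t[0])          # Eq. (17): d<dx^2>/dt = 2 D0 P_u

# subdiffusive growth: the effective MSD exponent approaches alpha at long times
slope = np.polyfit(np.log(t[len(t) // 2:]), np.log(msd[len(t) // 2:]), 1)[0]
print("MSD exponent ~", round(slope, 3), "(alpha =", alpha, ")")

# relaxation function reconstructed from the MSD via Eq. (19), for t' = 400 and t = 420,
# a lag comparable to the age-dependent relaxation time at that waiting time
i1, i2 = np.searchsorted(t, 400.0), np.searchsorted(t, 420.0)
K = G0 * np.exp(-G0 / (2.0 * D0 * eta0) * (msd[i2] - msd[i1]))
print("K(t = 420, t' = 400) =", round(K, 4))
```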
In out-of-equilibrium aging systems, the generalized fluctuation-dissipation theorem has been hypothesized and verified for various models, resulting in the definition of an effective temperature [31; 32; 33]. The fluctuation-response relation, given by Eq.(21), does not require an effective temperature. Instead, it directly connects the response function to the fluctuations observed in Maxwell glass. ## Appendix A Rheology of glassy materials Soft materials, including protein condensates, behave as viscoelastic fluids. We consider a material that was prepared at \(t=0\) and start measuring the material properties after a waiting time, \(t=t_{w}\). Linear viscoelasticity is characterized by the linear constitutive relation between stress (\(\sigma\)) and strain (\(\epsilon\)). We consider the stress and strain relative to \(t=0\), which subsumes the effect of stress and strain at \(t=0\) into \(\sigma(t)\) and \(\epsilon(t)\), respectively. The linear constitutive relation reads \[\sigma(t)=\int_{0}^{t}G(t,t^{\prime})\epsilon(t^{\prime})dt^{\prime}\quad, \tag{22}\] where we consider a general case without time translation symmetry [24]. Here, \(G(t,t^{\prime})\) is the dynamic modulus determining the linear relation between the shear strain and stress. We can alternatively write the relation between stress and strain-rate, \[\sigma(t)=\int_{0}^{t}K(t,t^{\prime})\dot{\epsilon}(t^{\prime})dt^{\prime}, \tag{23}\] where \(\dot{\epsilon}\) is the rate of deformation. \(K(t,t^{\prime})\) is called the relaxation function. We obtain the relation between \(G(t,t^{\prime})\) and \(K(t,t^{\prime})\) by applying partial integration to Eq.(23), \[G(t,t^{\prime})=-\frac{dK(t,t^{\prime})}{dt^{\prime}}+2\delta(t-t^{\prime})K(t,t^{\prime}). \tag{24}\] The factor 2 in the above relation is to account for the delta function integrated at the boundary. We used the fact that \(\epsilon(0)=0\). We can also write the linear relationship between stress and strain using the response function, \(\chi(t,t^{\prime})\), \[\epsilon(t)=\int_{0}^{t}\chi(t,t^{\prime})\sigma(t^{\prime})dt^{\prime}\quad. \tag{25}\] When the probing material is in thermodynamic equilibrium and independent of initial conditions, the above response functions depend only on the time interval \(t-t^{\prime}\): \(G(t-t^{\prime})\), \(K(t-t^{\prime})\), and \(\chi(t-t^{\prime})\), corresponding to the time translational invariance. Time translational invariance allows us to apply the convolution theorem for the Laplace transform to Eq.(18)-(19), leading to the simple expressions: \[\sigma(s)=G(s)\epsilon(s); \tag{20}\] \[\sigma(s)=sK(s)\epsilon(s); \tag{21}\] and \[\epsilon(s)=\chi(s)\sigma(s). \tag{22}\] We specified the quantities in the Laplace space by the argument \(s\). We use the same convention to denote the quantities in Laplace space (\(s\)) and in Fourier space (\(\omega\)). Therefore the response functions obey the relation \(G(s)=sK(s)=1/\chi(s)\) when time translational invariance is satisfied. For causal functions, such as \(G(t,t^{\prime})\), \(K(t,t^{\prime})\), and \(\chi(t,t^{\prime})\), the Fourier transform is readily obtained from the Laplace transform, by analytic continuation: \(s\to i\omega\). Thus, the analytic continuation may give the equivalent relation in the Fourier space, \(G(\omega)=i\omega K(\omega)=1/\chi(\omega)\). 
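For a concrete check of these Laplace-domain relations (a sketch for the simplest time-translation-invariant case only), one can verify \(G(s)=sK(s)=1/\chi(s)\) for a Maxwell fluid with relaxation function \(K(t)=G_{0}e^{-t/\tau}\) and creep response \(\chi(t)=\delta(t)/G_{0}+1/\eta\), where \(\tau=\eta/G_{0}\):

```python
import sympy as sp

# Symbolic check of G(s) = s K(s) = 1/chi(s) for a Maxwell fluid (illustration only).
s, t, G0, eta = sp.symbols("s t G_0 eta", positive=True)
tau = eta / G0

K_s = sp.integrate(G0 * sp.exp(-t / tau) * sp.exp(-s * t), (t, 0, sp.oo))  # Laplace of K(t)
chi_s = 1 / G0 + 1 / (eta * s)                                             # Laplace of chi(t)

print(sp.simplify(s * K_s))          # equals G_0*s/(s + G_0/eta), i.e. the Maxwell G(s)
print(sp.simplify(s * K_s * chi_s))  # equals 1, confirming G(s) = s*K(s) = 1/chi(s)
```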
The dynamic modulus in Fourier space, \(G(\omega)\), is often referred to as the complex modulus [18]: \[G(\omega)=G^{\prime}(\omega)+iG^{\prime\prime}(\omega), \tag{23}\] where the real part \(G^{\prime}(\omega)\) is the storage modulus, and the imaginary part \(G^{\prime\prime}(\omega)\) is the loss modulus. The storage modulus and the loss modulus reflect the elastic and viscous components of the material response, respectively. The moduli \(G^{\prime}(\omega)\) and \(G^{\prime\prime}(\omega)\) may be obtained using active rheology. Depending on the experimental setup, we can choose either strain or stress as input and output signal. Here, we choose strain as the input and stress as the output. Using a sinusoidal input strain with frequency \(\omega\), and amplitude \(\epsilon(\omega)\), one can determine the moduli by measuring the steady-state output stress, \(\sigma(\omega)\), from the amplitude change and the phase shift: \[G^{\prime}(\omega) =\frac{\sigma(\omega)}{\epsilon(\omega)}\cos(\delta\varphi(\omega)); \tag{24a}\] \[G^{\prime\prime}(\omega) =\frac{\sigma(\omega)}{\epsilon(\omega)}\sin(\delta\varphi(\omega)), \tag{24b}\] where \(\delta\varphi\) is the phase difference between input and output sinusoidal signals. In contrast to a material at thermodynamic equilibrium, a glassy material violates time translational invariance due to its slow relaxation, which implies that memory about the initial state is not lost. The consequence is the explicit dependence on the two time scales in the complex modulus and the relaxation function, \(G(t,t^{\prime})\) and \(K(t,t^{\prime})\). We introduce the waiting time (\(t_{w}\)), the time between the preparation of the material (\(t=0\)) and the start of the measurement, and the observation time \(\tau\) during the measurement, such that the time is \(t=t_{w}+\tau\). With the strain imposed starting at \(t=t_{w}\), Eq.(18) becomes \[\sigma(t)=\int_{t_{w}}^{t}G(t,t^{\prime})\epsilon(t^{\prime})dt^{\prime}. \tag{25}\] Using the change of variables, \(\tau=t-t_{w}\) and \(\tau^{\prime}=t^{\prime}-t_{w}\), \[\sigma(t_{w}+\tau)=\int_{0}^{\tau}G(t_{w}+\tau,t_{w}+\tau^{\prime})\epsilon(t_{w}+\tau^{\prime})d\tau^{\prime}. \tag{26}\] One approach to circumvent the complexity of the two time scales is to use observation times \(\tau\) much smaller than the time scale associated with the change in rheological properties. For such a measurement time, \(G(t_{w}+\tau,t_{w}+\tau^{\prime})\simeq G(t_{w},t_{w}+\tau^{\prime}-\tau)\) obeys time translational invariance for \(\tau\). We denote the resulting dynamic modulus as \(G_{t_{w}}(\tau-\tau^{\prime})\equiv G(t_{w},t_{w}+\tau^{\prime}-\tau)\). Then Eq.(26) is approximated as, \[\sigma_{t_{w}}(\tau)\simeq\int_{0}^{\tau}G_{t_{w}}(\tau-\tau^{\prime})\epsilon_{t_{w}}(\tau^{\prime})d\tau^{\prime}, \tag{27}\] where \(\sigma_{t_{w}}(\tau)\equiv\sigma(t_{w}+\tau)\) and \(\epsilon_{t_{w}}(\tau)\equiv\epsilon(t_{w}+\tau)\). Once we approximate the modulus to have time translational invariance for \(\tau\), one can obtain the storage and loss modulus for waiting time \(t=t_{w}\) using the same procedure as for the equilibrium case. Repeating this procedure for different \(t_{w}\), we obtain the \(t_{w}\)-dependent material properties. We remark that the assumption that the observation time \(\tau\) is appreciably smaller than the dynamics of the glassy material is not a priori justified and must be checked a posteriori. 
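The following sketch (illustrative parameters, simple explicit integration) applies this procedure to a stationary Maxwell fluid: a sinusoidal strain is imposed, the steady-state stress is projected onto its in-phase and out-of-phase components (equivalent to reading off the amplitude ratio and phase shift above), and the recovered \(G^{\prime}\) and \(G^{\prime\prime}\) are compared with the analytic Maxwell result \(G(\omega)=i\omega\tau G_{0}/(1+i\omega\tau)\).

```python
import numpy as np

# Illustrative sketch: recover G' and G'' of an equilibrium Maxwell fluid from the
# steady-state stress response to a sinusoidal strain.
G0, tau, eps0, omega = 1.0, 10.0, 1e-2, 0.3
period = 2 * np.pi / omega

dt = period / 2000
t = np.arange(0.0, 40 * period, dt)
strain_rate = -eps0 * omega * np.sin(omega * t)

stress = np.zeros_like(t)                 # Maxwell fluid: d(sigma)/dt = G0*strain_rate - sigma/tau
for i in range(len(t) - 1):
    stress[i + 1] = stress[i] + dt * (G0 * strain_rate[i] - stress[i] / tau)

ss = t > 20 * period                      # discard the transient, keep ~20 periods
Gp = 2 * np.mean(stress[ss] * np.cos(omega * t[ss])) / eps0    # in-phase (storage) part
Gpp = -2 * np.mean(stress[ss] * np.sin(omega * t[ss])) / eps0  # out-of-phase (loss) part

G_exact = 1j * omega * tau * G0 / (1 + 1j * omega * tau)
print(f"measured:  G' = {Gp:.4f}   G'' = {Gpp:.4f}")
print(f"analytic:  G' = {G_exact.real:.4f}   G'' = {G_exact.imag:.4f}")
```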
An alternative way to obtain the time-dependent material properties during aging, which does not require repeated analysis for different waiting times \(t_{w}\), is to generalize the complex modulus \(G(\omega)\) to time-dependent spectra [24] (Appendix D). The viscoelastic spectra explicitly represent the time-varying material properties, but their computation from experiments is not straightforward. We introduce, in section III.3, the instantaneous complex modulus to characterize the rheology of aging materials. We show in Appendix D that the instantaneous complex modulus and the viscoelastic spectra are closely related. The instantaneous complex modulus does not require the assumption for the observation time-scale and thus captures the full spectrum of the aging material. ## Appendix B Decomposition in dynamic modes. We study the relaxation dynamics of Eq.(1) to the asymptotic solutions for equilibrium and aging regime by defining eigenmodes and eigenvalues. First, we make the transformation \(q_{b}(E,t)=p_{b}(E,t)e^{-\beta E/2}/\sqrt{\rho(E)}\), to transform the operator Hermitian, and rewrite Eq.(1) as \[\frac{1}{\Gamma_{0}}\frac{\partial q_{b}(E,t)}{\partial t}=-q_{b}(E,t )e^{-\beta E}+P_{u}(t)\sqrt{\rho(E)}e^{-\beta E/2}, \tag{30a}\] \[\frac{1}{\Gamma_{0}}\frac{\partial P_{u}(t)}{\partial t}=-P_{u}(t )+\int_{0}^{\infty}dEq_{b}(E,t)\sqrt{\rho(E)}e^{-\beta E/2}. \tag{30b}\] We introduce eigenfunctions \(q_{\lambda}^{b}(E)\) and \(P_{\lambda}^{u}\) of the linear operator defined in Eq.(30). These eigenfunctions obey \[-\frac{1}{\Gamma_{0}}\lambda q_{\lambda}^{b}(E)=-q_{\lambda}^{b} (E)e^{-\beta E}+\sqrt{\rho(E)}e^{-\beta E/2}P_{\lambda}^{u}, \tag{31a}\] \[-\frac{1}{\Gamma_{0}}\lambda P_{\lambda}^{u}=-P_{\lambda}^{u}+ \int_{0}^{\infty}dE\sqrt{\rho(E)}q_{\lambda}^{b}(E)e^{-\beta E/2}. \tag{31b}\] where \(\lambda\) denotes the corresponding eigenvalue. We can eliminate \(q_{\lambda}^{b}\) from Eq. (31) which leads to the condition \[\Big{(}1-\frac{1}{1-\lambda/\Gamma_{0}}\int_{0}^{\infty}dE\frac{ \rho(E)e^{-\beta E}}{e^{-\beta E}-\lambda/\Gamma_{0}}\Big{)}P_{\lambda}^{u}=0. \tag{32}\] In order to find the eigenfunctions, we distinguish two cases. Case (I): \(P_{\lambda}^{u}=0\). In this case Eq.(31) reduces to \[-\frac{1}{\Gamma_{0}}\lambda q_{\lambda}^{b}(E)=-q_{\lambda}^{b} (E)e^{-\beta E}, \tag{33a}\] \[0=\int_{0}^{\infty}dE\sqrt{\rho(E)}q_{\lambda}^{b}(E)e^{-\beta E /2}. \tag{33b}\] This can be solved by the ansatz, \(q_{\lambda}^{b}(E)=a\delta(E-E_{\lambda})+\delta^{\prime}(E-E_{\lambda})\), where \(a\) is a constant. From Eq.(33b) we obtain, \[a=\frac{\beta-\beta_{0}}{2}, \tag{34}\] leading to \[q_{\lambda}^{b}(E)=\frac{\beta-\beta_{0}}{2}\delta(E-E_{\lambda})+\delta^{ \prime}(E-E_{\lambda}), \tag{35}\] with the eigenvalues, \(\lambda=\Gamma_{0}e^{-\beta E}\). Case (II): \(P_{\lambda}^{u}\neq 0\) and \(\int_{0}^{\infty}dE\frac{\rho(E)e^{-\beta E}}{e^{-\beta E}-\lambda/\Gamma_{0} }=1-\lambda/\Gamma_{0}\). Using the variable transform \(x=e^{-\beta E}\), we find \[\int_{0}^{\infty}dE\frac{\rho(E)e^{-\beta E}}{e^{-\beta E}-\lambda/ \Gamma_{0}}=\frac{\beta_{0}}{\beta}\int_{0}^{1}dx\frac{x^{\frac{\beta_{0}}{ \beta}}}{x-\lambda/\Gamma_{0}} \tag{36}\] \[=-\frac{\beta_{0}}{\beta}\frac{\Gamma_{0}}{(1+\beta_{0}/\beta) \lambda}{}^{2}\text{F}_{1}\Big{(}1,\frac{\beta_{0}}{\beta}+1,\frac{\beta_{0}} {\beta}+2,\frac{\Gamma_{0}}{\lambda}\Big{)}\quad,\] where \({}_{2}\text{F}_{1}\) is the Hypergeometric function [34]. 
Therefore the corresponding eigenvalue obeys the equation: \[\frac{\alpha}{1+\alpha}{}^{2}\text{F}_{1}\Big{(}1,\alpha+1,\alpha+2,\frac{ \Gamma_{0}}{\lambda}\Big{)}=\frac{\lambda}{\Gamma_{0}}\Big{(}\frac{\lambda}{ \Gamma_{0}}-1\Big{)}, \tag{37}\] where \(\alpha=\beta_{0}/\beta\). Because \(P_{\lambda}^{u}=0\) for case (I), the relaxation dynamics of \(P_{u}(t)\) is fully determined by the eigenvalue satisfying Eq.(37), which depends on \(\alpha\). Fig.4 shows the eigenvalue \(\lambda\) as a function of \(\alpha\). ## Appendix C Solutions of dynamic equations using Laplace transforms. In this Appendix, we solve Eq.(1) using the Laplace transform and obtain asymptotic solutions for long time. Because of the conservation of probabilities, \(P_{u}(t)+\int_{0}^{\infty}dE^{\prime}p_{b}(E^{\prime},t)=1\), Eq.(1) can be written in one equation, \[\frac{1}{\Gamma_{0}}\frac{d}{dt}p_{b}(E,t)= -e^{-\beta E}p_{b}(E,t) \tag{38}\] \[-\rho(E)\int_{0}^{\infty}p_{b}(E^{\prime},t)dE^{\prime}+\rho(E).\] We take the Laplace transform of Eq.(38) with respect to \(t\) and solve for \(p_{b}(E,s)\), \[p_{b}(E,s)= -\frac{\rho(E)C(s)}{s/\Gamma_{0}+e^{-\beta E}}+\frac{p_{b}(E,0)/ \Gamma_{0}}{s/\Gamma_{0}+e^{-\beta E}} \tag{39}\] \[+\frac{\rho(E)}{(s/\Gamma_{0}+e^{-\beta E})s},\] where \[C(s)=\frac{\int_{0}^{\infty}dE^{\prime}\frac{p_{b}(E^{\prime},0)/\Gamma_{0}}{ s/\Gamma_{0}+e^{-\beta E^{\prime}}}+\int_{0}^{\infty}dE^{\prime}\frac{\rho(E^{ \prime})}{(s/\Gamma_{0}+e^{-\beta E^{\prime}})s}}{1+\int_{0}^{\infty}dE^{\prime} \frac{\rho(E^{\prime})}{s/\Gamma_{0}+e^{-\beta E^{\prime}}}}. \tag{40}\] Figure 4: Eigenvalue \(\lambda\) as a function of \(\alpha\) obtained by numerically solving Eq.(37). \(\Gamma_{0}\) is set to unity. \(\lambda\) determines the relaxation rate to the asymptotic solutions in equilibrium and aging regime (see Fig.2). Eq.(14-15) with \(P_{u}(s)=1/s-\int_{0}^{\infty}dEp_{b}(E,s)\) give the complete solution of Eq.(1) in Laplace space. We first derive the expression of \(P_{u}(s)\) for \(s\to 0\). Integrating Eq.(14) for \(E\) to obtain, \[P_{b}(s)=-C(s)Q_{\rho}(s)+Q_{0}(s)+\frac{1}{s}Q_{\rho}(s), \tag{16}\] where \[Q_{\rho}(s)\equiv\int_{0}^{\infty}dE\frac{\rho(E)}{s/\Gamma_{0}+e^{-\beta E}}; \tag{17}\] \[Q_{0}(s)\equiv\int_{0}^{\infty}dE\frac{p_{b}(E,0)/\Gamma_{0}}{s/\Gamma_{0}+e^{ -\beta E}}; \tag{18}\] and \[C(s)=\frac{Q_{\rho}(s)}{s(1+Q_{\rho}(s))}+\frac{Q_{0}(s)}{1+Q_{\rho}(s)}. \tag{19}\] \(P_{b}(s)\) simplifies to \[P_{b}(s)=\frac{Q_{\rho}(s)}{s(1+Q_{\rho}(s))}+\frac{Q_{0}(s)}{1+Q_{\rho}(s)}, \tag{20}\] and \[\begin{split} P_{u}(s)&=\frac{1}{s}-P_{b}(s)\\ &=\frac{1}{s}\frac{1}{1+Q_{\rho}(s)}-\frac{Q_{0}(s)}{1+Q_{\rho}( s)}.\end{split} \tag{21}\] The term containing \(Q_{0}(s)\) in the second line of Eq.(21) is the contribution from the initial distribution giving subordinate contribution for long time. Here it is set to \(0\) because \(p_{b}(E,0)=0\), leading to \[P_{u}(s)=\frac{1}{s}\frac{1}{1+Q_{\rho}(s)}. \tag{22}\] One can explicitly evaluate \(Q_{\rho}(s)\) for \(s\to 0\) as follows for equilibrium case (I) and aging case (II). Equilibrium case (I). 
For the equilibrium case one can expand \(Q_{\rho}(s)\) as follows for \(s\to 0\), \[\begin{split} Q_{\rho}(s)&=\int_{0}^{\infty}dE \frac{\beta_{0}e^{-\beta_{0}E}}{s/\Gamma_{0}+e^{-\beta E}}\\ &\simeq\frac{\beta_{0}}{\beta_{0}-\beta}-\frac{s}{\Gamma_{0}} \frac{\beta_{0}}{\beta_{0}-2\beta}+O(s^{2}).\end{split} \tag{23}\] We substitute the first term of the expansion into Eq.(22) to obtain, \[P_{u}(s)\simeq\frac{\beta_{0}/\beta-1}{s(2\beta_{0}/\beta-1)}. \tag{24}\] Inverting to the real space, we have, \[P_{u}^{eq}=\frac{\alpha-1}{2\alpha-1}, \tag{25}\] where \(\alpha=\beta_{0}/\beta>1\). Aging case (II). For aging case, we first make variable transforms to extract the power law form of \(s\): \[\begin{split} Q_{\rho}(s)&=\int_{0}^{\infty}dE\frac {\beta_{0}e^{-\beta_{0}E}}{s/\Gamma_{0}+e^{-\beta E}}\\ &=\frac{\beta_{0}}{\beta}\int_{0}^{1}dx\frac{x\frac{\beta_{0}}{ \beta}-1}{s/\Gamma_{0}+x}\\ &=\frac{\beta_{0}}{\beta}\Big{(}\frac{s}{\Gamma_{0}}\Big{)}^{ \frac{\beta_{0}}{\beta}-1}\int_{0}^{\Gamma_{0}/s}dy\frac{y^{\frac{\beta_{0}}{ \beta}-1}}{1+y}.\end{split} \tag{26}\] In the second line, we used the change of variables \(x=e^{-\beta E}\) and the third line, \(y=x\Gamma_{0}/s\). In the limit of \(s\to 0\), we can extend the upper bound of the integral in the third line to \(\infty\): \[\int_{0}^{\infty}dy\frac{y^{\frac{\beta_{0}}{\beta}-1}}{1+y}=\pi\csc\Big{(} \frac{\beta_{0}}{\beta}\pi\Big{)}. \tag{27}\] Thus, in the limit of \(s\to 0\), \[Q_{\rho}(s)\simeq\frac{\beta_{0}}{\beta}\Big{(}\frac{s}{\Gamma_{0}}\Big{)}^{ \frac{\beta_{0}}{\beta}-1}\pi\csc\Big{(}\frac{\beta_{0}}{\beta}\pi\Big{)}. \tag{28}\] Noting that \(\beta_{0}/\beta-1<0\) in the aging regime, \(1+Q_{\rho}(s)\simeq Q_{\rho}(s)\) for \(s\to 0\). From Eq.(22), \[P_{u}(s)\simeq\frac{1}{sQ_{\rho}(s)}=\frac{\beta\Gamma_{0}\sin\big{(}\frac{ \beta_{0}}{\beta}\pi\big{)}}{\beta_{0}(s/\Gamma_{0})}. \tag{29}\] By taking the inverse Laplace transform we obtain the result for long time, \[P_{u}(t)=\frac{\sin\big{(}\alpha\pi\big{)}}{\alpha\pi\Gamma\big{[}\alpha\big{]} }\big{(}\Gamma_{0}t\big{)}^{\alpha}, \tag{30}\] where \(\alpha=\beta_{0}/\beta<1\). One can find complete solutions for special cases, infinite temperature (\(\beta=0\)) and zero temperature (\(\beta=\infty\)). For the infinite temperature case, solving Eq.(14-15) and taking the inverse Laplace transform, we obtain, \[\begin{split} P_{b}(t)&=\frac{1}{2}\big{(}1+e^{-2 \Gamma_{0}t}(-1+2P_{b}(0))\big{)};\\ P_{u}(t)&=\frac{1}{2}\big{(}1-e^{-2\Gamma_{0}t}(-1+ 2P_{b}(0))\big{)}.\end{split} \tag{31}\] For the zero temperature case, solving Eq.(14-15) and taking inverse Laplace transform, we obtain, \[\begin{split} P_{b}(t)&=1-P_{u}(0)e^{-\Gamma_{0}t}; \\ P_{u}(t)&=P_{u}(0)e^{-\Gamma_{0}t}.\end{split} \tag{32}\] This suggests that the dynamics is completely frozen for zero temperature. ## Appendix D Hilbert transform, analytic signal, and rheology. We refer Ref. [35; 36] for the theory and various applications with a comprehensive table of Hilbert transform. We discuss here the basic definition of Hilbert transform and analytic signal, and the connection to rheology. The Hilbert transform of a function, \(f(t)\), is defined as \[\mathcal{H}[f](t)=\frac{1}{\pi}p.v.\int_{-\infty}^{\infty}\frac{f(t^{\prime})} {t-t^{\prime}}dt^{\prime}, \tag{10}\] where \(p.v.\) denotes Cauchy principle value. 
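As a quick numerical illustration of this definition, the discrete Hilbert transform provided by `scipy.signal.hilbert` (which returns the analytic signal \(f+i\mathcal{H}[f]\)) can be checked against the textbook pair \(\mathcal{H}[\cos(\omega t)]=\sin(\omega t)\). The sketch below uses an arbitrarily chosen frequency and sampling grid for illustration and is not part of the original analysis.

```python
import numpy as np
from scipy.signal import hilbert

# Minimal check of the Hilbert-transform definition: H[cos(w t)] = sin(w t).
w = 2.0 * np.pi                      # illustrative frequency
t = np.linspace(0.0, 20.0, 4000, endpoint=False)
f = np.cos(w * t)

analytic = hilbert(f)                # scipy returns f(t) + i H[f](t)
Hf = np.imag(analytic)

# Agreement is excellent away from the window edges (finite-signal artifacts).
interior = slice(200, -200)
print(np.max(np.abs(Hf[interior] - np.sin(w * t)[interior])))
```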
Fourier transform (\(\mathcal{F}\)) of Hilbert transformed signal is the \(\pm 90\) degrees phase shift, depending on the sign of the frequency \(\omega\), of the original signal, namely, \[\mathcal{F}\big{[}\mathcal{H}[f]\big{]}(\omega)=-i\text{sgn}(\omega)\mathcal{ F}[f](\omega), \tag{11}\] where sgn is signum function. Using the Hilbert transform, analytic representation of \(f(t)\) is \[f_{a}(t)=f(t)+i\mathcal{H}[f](t). \tag{12}\] In the context of the active rheology of aging material, the following theorem is useful. _Bedrosian's theorem_[37]: Suppose a low-pass signal, \(l(t)\), and high-pass signal, \(h(t)\), have Fourier transforms \(L(\omega)\) and \(H(\omega)\), respectively, where \(L(\omega)=0\) for \(|\omega|>\omega_{0}\) and \(H(\omega)=0\) for \(|\omega|<\omega_{0}\). Then, \[\mathcal{H}[l(t)h(t)]=l(t)\mathcal{H}[h(t)]. \tag{13}\] Namely, the product of a low-pass and a high-pass signal with non-overlapping spectra is obtained by the product of the low-pass signal and the Hilbert transform of the high-pass signal. In the context of rheology, Bedrosian's theorem requires the spectra of the aging to have a maximum spectrum smaller than the frequency of input sinusoidal shear strain. _Time-varying viscoelastic spectrum._ We illustrate the connection of the analytic signal to the time-dependent rheology of aging materials. Let us consider the relation between the stress and strain-rate of a material with a relaxation function \(K(t,t^{\prime})\): \[\sigma(t)=\int_{0}^{t}K(t,t^{\prime})\dot{\bar{\epsilon}}(t^{\prime})dt^{ \prime}. \tag{14}\] We apply the sinusoidal strain having frequency \(\omega\) starting at \(t=t_{w}\): \(\bar{\epsilon}(\omega,t,t_{w})=\Theta(t-t_{w})\epsilon(t)\) where \(\epsilon(t)=\Re[\epsilon_{0}e^{i(\omega t+\varphi_{0})}]\) and \(\Theta(t)\) is Heaviside step function. Substituting \(\epsilon(\omega,t,t_{w})\) to Eq.(14) leads to \[\sigma(\omega,t,t_{w})=\Re[\epsilon_{0}e^{i(\varphi_{0}+\omega t)}G^{*}( \omega,t,t_{w})], \tag{15}\] where \[\begin{split} G^{*}(\omega,t,t_{w})\equiv& i\omega\int_{t_{w}}^{t}e^{-i\omega(t-t^{\prime})}K(t,t^{\prime})dt^{ \prime}\\ &+e^{-i\omega(t-t_{w})}K(t,t_{w}).\end{split} \tag{16}\] \(G^{*}(\omega,t,t_{w})\) is the time-varying viscoelastic spectrum [24]. We show that the time-varying viscoelastic spectrum may be obtained from the method of analytic signal. The analytic signal of the input strain, \(\Re[\epsilon_{0}e^{i(\omega t+\varphi_{0})}]\), is \(\epsilon_{a}(t)=\epsilon_{0}e^{i(\omega t+\varphi_{0})}\). Taking the Hilbert transform of Eq.(15), \[\mathcal{H}[\sigma(\omega,t,t_{w})]=\Re\Big{[}\mathcal{H}\big{[}\epsilon_{a}( \omega,t)G^{*}(\omega,t,t_{w})\big{]}\Big{]}. \tag{17}\] Assuming the spectra of \(G^{*}(\omega,t,t_{w})\) for \(t\) and spectra of the input shear strain, \(\omega\), satisfy the Bedrosian's theorem, \[\begin{split}\mathcal{H}[\sigma(\omega,t,t_{w})]&= \Re\Big{[}\mathcal{H}\big{[}\epsilon_{a}(\omega,t)\big{]}G^{*}(\omega,t,t_{w} )\Big{]}\\ &=\Re\big{[}-i\epsilon_{a}(\omega,t)G^{*}(\omega,t,t_{w})\big{]} \\ &=\Im\big{[}\epsilon_{a}(\omega,t)G^{*}(\omega,t,t_{w})\big{]}. \end{split} \tag{18}\] Thus, from the definition of analytic signal [Eq.(12)] with Eq.(15) and Eq.(18), the analytic signal of \(\sigma(\omega,t,t_{w})\) is written as \[\sigma_{a}(\omega,t,t_{w})=\epsilon_{a}(\omega,t)G^{*}(\omega,t,t_{w}). 
\tag{19}\] Therefore the definition of the instantaneous complex modulus, Eq.(11), gives: \[G(\omega,t,t_{w})\equiv\frac{\sigma_{a}(\omega,t,t_{w})}{\epsilon_{a}(\omega,t )}=G^{*}(\omega,t,t_{w}). \tag{20}\] This shows that, under the Bedrosian's theorem, the instantaneous complex modulus and the viscoelastic spectra are identical. It may be instructive to consider the simple Maxwell fluid. Because the Hilbert transform is a linear transform, we can write the constitutive equation of simple Maxwell fluid using analytic signal, \[\dot{\epsilon}_{a}(t)=\frac{\sigma_{a}(t)}{\eta_{0}}+\frac{\dot{\sigma}_{a}(t)} {G_{0}}. \tag{21}\] Let us consider the input stress \(\sigma(\omega,t)=\sigma_{0}\cos(\omega t)\). The analytic signal of \(\sigma(\omega,t)\) is \(\sigma_{a}(\omega,t)=\sigma_{0}e^{i\omega t}\). The explicit integration of right-hand side, setting integration constant \(0\), to obtain \(\epsilon_{a}(\omega,t)\) leads to \(\epsilon_{a}(\omega,t)=\sigma_{0}e^{i\omega t}(1/G_{0}-i/\big{(}\eta_{0} \omega\big{)})\). Therefore \(G(\omega,t)=\sigma_{a}(\omega,t)/\epsilon_{a}(\omega,t)=1/\big{(}1/G_{0}-i/ (\eta_{0}\omega)\big{)}\), which is the complex modulus of Maxwell fluid which does no have time dependence. Therefore, Eq.(11) recovers the definition of conventional complex modulus. ## Appendix E Generalized fluctuation-dissipation theorem for Maxwell glass We first obtain the strain-stress response function, \(\chi(t,t^{\prime})\), for the constitutive equation, Eq.(4). \[\begin{split}\epsilon(t)&=\int_{0}^{t}\Theta(t-t^{ \prime})\Big{(}\frac{P_{u}(t^{\prime})}{\eta_{0}}+\frac{1}{G_{0}}\frac{d}{dt^{ \prime}}\Big{)}\sigma(t^{\prime})dt^{\prime}\\ &=\int_{0}^{t}\Big{(}\Theta(t-t^{\prime})\frac{P_{u}(t^{\prime})}{ \eta_{0}}+\frac{2\delta(t-t^{\prime})}{G_{0}}\Big{)}\sigma(t^{\prime})dt^{ \prime},\end{split} \tag{22}\] where \(\Theta(t)\) is the Heaviside step function. The factor 2 in front of the delta function is to account for the boundary. Therefore the response function is given, as \[\chi(t,t^{\prime})=\Theta(t-t^{\prime})\frac{P_{u}(t^{\prime})}{\eta_{0}}+\frac{ 2\delta(t-t^{\prime})}{G_{0}}. \tag{10}\] On the other hand, using Eq.(17) and the constant \(4k_{B}T/G_{0}\), we compute \[\begin{split}&\Theta(t-t^{\prime})\frac{d}{dt^{\prime}}\langle \Delta x^{2}(t^{\prime})\rangle\\ &=\Theta(t-t^{\prime})\Big{(}2D_{0}P_{u}(t^{\prime})+\frac{d}{dt ^{\prime}}\frac{4k_{B}T}{G_{0}}\Big{)}\\ &=2k_{B}T\Big{(}\Theta(t-t^{\prime})\frac{P_{u}(t^{\prime})}{\eta _{0}}+\frac{2\delta(t-t^{\prime})}{G_{0}}\Big{)},\end{split} \tag{11}\] where we used integration by parts from the second line to the third line and the Einstein relation \(D_{0}\eta_{0}=k_{B}T\)[38]. Therefore we obtain the fluctuation-response relation, Eq.(21). The response function \(\chi(t,t^{\prime})\) is related to the dynamic modulus \(G(t,t^{\prime})\) by inverse, thus uniquely determined. 
To see this we notice that the shear strain \(\epsilon(t)\) is written using Eq.(10) and Eq.(11) as \[\begin{split}\epsilon(t)&=\int_{0}^{t}dt^{\prime} \chi(t,t^{\prime})\int_{0}^{t^{\prime}}dt^{\prime\prime}G(t^{\prime},t^{ \prime\prime})\epsilon(t^{\prime\prime})\\ &=\int_{0}^{t}dt^{\prime\prime}\epsilon(t^{\prime\prime})\int_{t ^{\prime\prime}}^{t}dt^{\prime}\chi(t,t^{\prime})G(t^{\prime},t^{\prime\prime}).\end{split} \tag{12}\] By direct calculation using Eq.(10) and Eq.(12) and using that the general form of \(K(t,t^{\prime})\) in our model [Eq.(6)] has exponential form, we obtain \[\int_{t^{\prime\prime}}^{t}dt^{\prime}\chi(t,t^{\prime})G(t^{\prime},t^{\prime \prime})=2\delta(t-t^{\prime\prime}), \tag{13}\] leading to the consistent expression for Eq.(12). Note that the factor 2 accounts for the integration of the delta function at the boundary. Eq.(13) shows that \(\chi(t,t^{\prime})\) and \(G(t,t^{\prime})\) are related by inverse and uniquely determined. ## Appendix F Numerical procedures to compute instantaneous complex modulus. To compute the instantaneous complex modulus \(G(\omega,t,t_{w})\), we employed the analytic signal approach to obtain the instantaneous amplitude and phase of the input and output signals. Specifically, we utilized the Python package "scipy.signal.hilbert" [39] to extract the instantaneous amplitude and phase for the input shear strain and output shear stress, as illustrated in Fig.5. This method allowed us to accurately capture the time-varying behavior of the signals and determine the complex modulus at any given time and frequency. The computation of the instantaneous complex modulus \(G(\omega,t,t_{w})\), as defined in Eq.(11), requires the input shear strain \(\epsilon(\omega,t)\) to span from \(t=-\infty\) to \(t=\infty\). Practically, when implementing the numerical computation of instantaneous complex modulus, we extrapolate the input shear strain used in the rheology experiment. Here we extended the imposed sinusoidal shear strain starting \(t=t_{w}\) and ending \(t=t_{f}\): \(\epsilon(\omega,t)\Theta(t-t_{w})\Theta(t_{f}-t)\), to the signal from \(t=t_{w}-\tau\) to \(t=t_{f}+\tau\), where \(\tau=t_{f}-t_{w}\) is the duration of the shear strain. For the output shear stress, we inserted 0 from \(t=t_{w}-\tau\) to \(t=t_{w}\) and from \(t=t_{f}\) to \(t=t_{f}+\tau\), to adjust the length of the input and output signals. After the extension of the input shear strain and output shear stress we computed Hilbert transform and then extracted back the original, experimentally relevant, part of the signal defined from \(t=t_{w}\) to \(t=t_{f}\). We computed the instantaneous complex modulus using the obtained analytic signal for the input shear strain and output shear stress. The Hilbert transform, computed using Fourier transform as shown in Eq.(10), may produce unwanted oscillations, known as the Gibbs phenomenon, due to the finite discontinuous signal (as illustrated in Fig.5). To obtain accurate results, we truncated the two edges of the complex modulus, i.e., the initial and final times where the artifact is most prominent. Additionally, we convolved the resulting \(G(\omega,t,t_{w})\) with a box-kernel whose length was identical to the wavelength of the input shear strain to mitigate the oscillations caused by the Gibbs phenomenon. This step helped to improve the accuracy of our results, shown in Fig.3.
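A condensed sketch of the numerical procedure described above is given below. It assumes uniformly sampled strain and stress arrays on \([t_{w},t_{f}]\) and a purely sinusoidal imposed strain, whose complex amplitude is estimated by projection so that the strain can be continued beyond the measurement window; the padding and smoothing choices are illustrative, so this should be read as an outline of the steps rather than the exact implementation used to produce Fig. 3.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_modulus(t, strain, stress, omega):
    """Sketch of the analytic-signal recipe above: G(omega, t, t_w) = sigma_a / eps_a.

    t      : uniformly spaced sample times on [t_w, t_f]
    strain : imposed sinusoidal shear strain on that window
    stress : measured shear stress on that window
    omega  : angular frequency of the imposed strain
    """
    n, dt = len(t), t[1] - t[0]

    # Extend the window by one duration tau = t_f - t_w on each side:
    # the sinusoidal strain is continued analytically, the stress is zero-padded.
    t_ext = np.concatenate([t[0] + dt * np.arange(-n, 0), t,
                            t[-1] + dt * np.arange(1, n + 1)])
    c = 2.0 * np.mean(strain * np.exp(-1j * omega * t))   # complex amplitude eps0 * e^{i phi0}
    strain_ext = np.real(c * np.exp(1j * omega * t_ext))
    stress_ext = np.concatenate([np.zeros(n), stress, np.zeros(n)])

    # Analytic signals via the discrete Hilbert transform, cropped back to [t_w, t_f].
    eps_a = hilbert(strain_ext)[n:2 * n]
    sig_a = hilbert(stress_ext)[n:2 * n]
    G = sig_a / eps_a

    # Box-kernel smoothing over one strain period to damp Gibbs oscillations;
    # in practice the first and last samples would also be truncated.
    width = max(1, int(round(2.0 * np.pi / (omega * dt))))
    G = np.convolve(G, np.ones(width) / width, mode="same")
    return G
```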
2309.03335
SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction
3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis. While deep learning-based approaches have achieved impressive performance in this area, existing deep networks often fail to effectively utilize the shape structures of objects presented in images. As a result, the topology of reconstructed objects may not be well preserved, leading to the presence of artifacts such as discontinuities, holes, or mismatched connections between different parts. In this paper, we propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues. In contrast to previous methods that primarily rely on spatial correlations of image intensities for 3D reconstruction, our model leverages shape priors learned from the training data to guide the reconstruction process. To achieve this, we develop a joint learning network that simultaneously learns a mean shape under deformation models. Each reconstructed image is then considered as a deformed variant of the mean shape. We validate our model, SADIR, on both brain and cardiac magnetic resonance images (MRIs). Experimental results show that our method outperforms the baselines with lower reconstruction error and better preservation of the shape structure of objects within the images.
Nivetha Jayakumar, Tonmoy Hossain, Miaomiao Zhang
2023-09-06T19:30:22Z
http://arxiv.org/abs/2309.03335v2
# SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction ###### Abstract 3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis. While deep learning-based approaches have achieved impressive performance in this area, existing deep networks often fail to effectively utilize the shape structures of objects presented in images. As a result, the topology of reconstructed objects may not be well preserved, leading to the presence of artifacts such as discontinuities, holes, or mismatched connections between different parts. In this paper, we propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues. In contrast to previous methods that primarily rely on spatial correlations of image intensities for 3D reconstruction, our model leverages shape priors learned from the training data to guide the reconstruction process. To achieve this, we develop a joint learning network that simultaneously learns a mean shape under deformation models. Each reconstructed image is then considered as a deformed variant of the mean shape. We validate our model, SADIR, on both brain and cardiac magnetic resonance images (MRIs). Experimental results show that our method outperforms the baselines with lower reconstruction error and better preservation of the shape structure of objects within the images. ## 1 Introduction The reconstruction of 3D images from a limited number of 2D images is fundamental to various applications, including object recognition and tracking [12], robot navigation [44], and statistical shape analysis for disease detection [4, 36]. However, inferring the complete 3D geometry and structure of objects from one or multiple 2D images has been a long-standing ill-posed problem [25]. A bountiful literature has been investigated to recover the data from a missing dimension [9, 32, 34, 37]. Initial approaches to address this challenge focused on solving an inverse problem of projecting 3D information onto 2D images from geometric aspects [8]. These solutions typically require images captured from different viewing angles using precisely calibrated cameras or medical imaging machines [7, 28]. In spite of producing a good quality of 3D reconstructions, such methods are often impractical or infeasible in many real-world scenarios. Recent advancements have leveraged deep learning (DL) techniques to overcome the limitations posed in previous methods [5, 15, 27]. Extensive research has explored various network architectures for 3D image reconstruction, including UNets [30], transformers [14, 22], and state-of-the-art generative diffusion models [37]. These works have significantly improved the reconstruction efficiency by learning intricate mappings between stacks of 2D images and their corresponding 3D volumes. While the DL-based approaches have achieved impressive results in reconstructing detailed 3D images, they often lack explicit consideration of shape information during the learning process. Consequently, important geometric structures of objects depicted in the images may not be well preserved. This may lead to the occurrence of artifacts, such as discontinuities, holes, or mismatched connections between different parts, that break the topology of the reconstructed objects. 
Motivated by recent studies highlighting the significance of shape in enhancing image analysis tasks using deep networks [6, 20, 26, 39, 43], we introduce a novel shape-aware 3D image reconstruction network called SADIR. Our methodology builds upon the foundation of diffusion models while incorporating shape learning as a key component. In contrast to previous methods that mainly rely on spatial correlations of image intensities for 3D reconstruction, our SADIR explicitly incorporates the geometric shape information aiming to preserve the topology of reconstructed images. To achieve this goal, we develop a joint deep network that simultaneously learns a shape prior (also known as a mean shape) from a given set of full 3D volumes. In particular, an atlas building network based on deformation models [39] is employed to learn a mean shape representing the average information of training images. With the assumption that each reconstructed object is a deformed variant of the estimated mean shape, we then utilize the mean shape as a prior knowledge to guide the diffusion process of reconstructing a complete 3D image from a stack of sparse 2D slices. To evaluate the effectiveness of our proposed approach, we conduct experiments on both real brain and cardiac magnetic resonance images (MRIs). The experimental results show the superiority of SADIR over the baseline approaches, as evidenced by substantially reduced reconstruction errors. Moreover, our method successfully preserves the topology of the images during the shape-aware 3D image reconstruction process. ## 2 Background: Frechet Mean via Atlas Building In this section, we briefly review an unbiased atlas building algorithm [21], a widely used technique to estimate the Frechet mean of group-wise images. With the underlying assumption that objects in many generic classes can be described as deformed versions of an ideal template, descriptors in this class arise naturally by matching the mean (also referred as atlas) to an input image [21, 38, 45, 42, 46]. The resulting transformation is then considered as a shape that reflects geometric changes. Given a number of \(N\) images \(\{\mathcal{Y}_{1},\cdots,\mathcal{Y}_{N}\}\), the problem of atlas building is to find a mean or template image \(\mathcal{S}\) and deformation fields \(\phi_{1},\cdots\phi_{N}\) with derived initial velocity fields \(v_{1},\cdots v_{t}\) that minimize the energy function \[E(\mathcal{S},\phi_{n})=\sum_{n=1}^{N}\frac{1}{\sigma^{2}}\text{Dist}[ \mathcal{S}\circ\phi_{n}(v_{t}),\mathcal{Y}_{n}]\,+\,\text{Reg}[\phi_{n}(v_{t} )], \tag{1}\] where \(\sigma^{2}\) is a noise variance and \(\circ\) denotes an interpolation operator that deforms image \(\mathcal{Y}_{n}\) with an estimated transformation \(\phi_{n}\). The \(\text{Dist}[\cdot,\cdot]\) is a distance function that measures the dissimilarity between images, i.e., sum-of-squared differences [3], normalized cross correlation [2], and mutual information [40]. The \(\mathrm{Reg}[\cdot]\) is a regularizer that guarantees the smoothness of transformations. Given an open and bounded \(d\)-dimensional domain \(\Omega\subset\mathbb{R}^{d}\), we use \(\mathrm{Diff}(\Omega)\) to denote a space of diffeomorphisms (i.e., a one-to-one smooth and invertible smooth transformation) and its tangent space \(V=T\mathrm{Diff}(\Omega)\). 
A well-developed algorithm, large deformation diffeomorphic metric mapping (LDDMM) [3], provides a regularization that guarantees the smoothness of deformation fields and preserves the topological structures of objects for the atlas building framework (Eq. (1)). Such a regularization is formulated as an integral of the Sobolev norm of the time-dependent velocity field \(v_{n}(t)\in V(t\in[0,1])\) in the tangent space, i.e., \[\mathrm{Reg}[\phi_{n}(v_{t})]=\int_{0}^{1}(Lv_{t},v_{t})\,dt,\quad\text{with} \quad\frac{d\phi_{n}(t)}{dt}=-D\phi_{n}(t)\cdot v_{n}(t), \tag{2}\] where \(L:V\to V^{*}\) is a symmetric, positive-definite differential operator that maps a tangent vector \(v_{t}\in V\) into its dual space as a momentum vector \(m_{t}\in V^{*}\). We write \(m_{t}=Lv_{t}\), or \(v_{t}=Km_{t}\), with \(K\) being an inverse operator of \(L\). The operator \(D\) denotes a Jacobian matrix and \(\cdot\) represents element-wise matrix multiplication. In this paper, we use a metric of the form \(L=(-\alpha\Delta+\gamma\mathbf{I})^{3}\), in which \(\Delta\) is the discrete Laplacian operator, \(\alpha\) is a positive regularity parameter that controls the smoothness of transformation fields, \(\gamma\) is a weighting parameter, and \(\mathbf{I}\) denotes an identity matrix. The minimum of Eq. (2) is uniquely determined by solving an Euler-Poincare differential equation (EPDiff) [1, 29] with a given initial condition of velocity fields, noted as \(v_{0}\). This is known as the _geodesic shooting_ algorithm [35], which nicely proves that the deformation-based shape descriptor \(\phi_{n}\) can be fully characterized by an initial velocity field \(v_{n}(0)\). The mathmatical formulation of the EPDiff equation is \[\frac{\partial v_{n}(t)}{\partial t}=-K\left[(Dv_{n}(t))^{T}\cdot m_{n}(t)+Dm _{n}(t)\cdot v_{n}(t)+m_{n}(t)\cdot\mathrm{div}\,v_{n}(t)\right], \tag{3}\] where the operator \(D\) denotes a Jacobian matrix, \(\mathrm{div}\) is the divergence, and \(\cdot\) represents element-wise matrix multiplication. We are now able to equivalently minimize the atlas building energy function in Eq. (1) as \[E(\mathcal{S},\phi_{n})=\sum_{n=1}^{N}\frac{1}{\sigma^{2}}\text{Dist}[ \mathcal{S}\!\circ\!\phi_{n}(v_{n}(t)),\mathcal{Y}_{n}]+(Lv_{n}(0),v_{n}(0)), \text{ s.t. Eq. (\ref{eq:D_eq_1}) \& (\ref{eq:D_eq_2}). } \tag{4}\] For notation simplicity, we will drop the time index in the following sections. ## 3 Our Method: SADIR In this section, we present SADIR, a novel reconstruction network that incorporates shape information in predicting 3D volumes from a limited number of input 2D images. We introduce a sub-module of the atlas building framework, which enables us to learn shape priors from a given set of full 3D images. It is worth mentioning that while the backbone of our proposed SADIR is a diffusion model [16], the methodology can be generalized to a variety of network architectures such as UNet [33], UNet++ [47], and Transformer [11]. ### Shape-Aware Diffusion Models Based on Atlas Building Network Given a number of \(N\) training data \(\{I_{n},\mathcal{Y}_{n}\}_{n=1}^{N}\), where \(I_{n}\) is a stack of sparse 2D images with its associated full 3D volume \(\mathcal{Y}_{n}\). Our model SADIR consists of two submodules: 1. An atlas building network, parameterized by \(\theta^{a}\), that provides a mean image \(\mathcal{S}\) of \(\{\mathcal{Y}_{n}\}\). In this paper, we employ the network architecture of Geo-SIC [39]; 2. 
A reconstruction network, parameterized by \(\theta^{r}\), that considers each reconstructed image \(\hat{\mathcal{Y}}_{n}\) as a deformed variant of the obtained atlas, i.e., \(\hat{\mathcal{Y}}_{n}\stackrel{{\Delta}}{{=}}\mathcal{S}\circ \phi_{n}(v_{n}(\theta^{r}))\). In contrast to current approaches learning the reconstruction process based on image intensities, our model is developed to learn the geometric shape variations represented by the predicted velocity field \(v_{n}\). Next, we introduce the details of our shape-aware diffusion models for reconstruction, which is a key component of SADIR. Similar to existing diffusion models [16, 37], we develop a forward diffusion and a reverse diffusion process to predict the velocity fields associated with the pair of input training images and an atlas image. For the purpose of simplified math notations, we omit the index \(n\) for each subject in the following sections. **Forward diffusion process.** Let \(y^{0}\) denote the original 3D image with full volumes and \(\tau\) denote the time point of the diffusion process. We assume the data distribution of \(y^{\tau}\) is a normal distribution with mean \(\mu\) and variance \(\beta\), i.e., \(y^{\tau}\sim\mathcal{N}(\mu,\beta)\). The forward diffusion of \(y^{\tau-1}\) to \(y^{\tau}\) is then recursively given by \[p(y^{\tau}\,|\,y^{\tau-1})=\mathcal{N}(y^{\tau};\sqrt{1-\beta^{\tau}}y^{\tau-1},\beta^{\tau}\mathbf{I}), \tag{5}\] where \(\mathbf{I}\) denotes an identity matrix, and \(\beta^{\tau}\in[0,1]\) denotes a known variance increased along the time steps with \(\beta^{1}<\beta^{2}<\cdots<\beta^{\tau}\). The forward diffusion process is repeated for a fixed, predefined number of time steps. It is shown in [16] that repeated application of Eq. (5) to the original image \(y^{0}\) and setting \(\alpha^{\tau}=1-\beta^{\tau}\) and \(\bar{\alpha}^{\tau}=\prod_{i=1}^{\tau}\alpha^{i}\) yields \[p(y^{\tau}\,|\,y^{0})=\mathcal{N}(y^{\tau};\sqrt{\bar{\alpha}^{\tau}}y^{0},(1- \bar{\alpha}^{\tau})\mathbf{I}).\] Therefore, we can write \(y^{\tau}\) in terms of \(y^{0}\) as \[y^{\tau}=\sqrt{\bar{\alpha}^{\tau}}y^{0}+\sqrt{1-\bar{\alpha}}^{\tau}\epsilon \quad\mathrm{with}\quad\epsilon\sim\mathcal{N}(0,\mathbf{I}).\] **Reverse diffusion process.** Given a concatenation of a sparse stack of 2D images \(I\), an atlas image \(\mathcal{S}\), and \(y^{\tau}\) from the forward process, our diffusion model is designed to remove the added noise in the reverse process. Following the work of [41], we will now predict \(y^{\tau-1}\) from the input \(y^{\tau}\). The joint probability distribution \(p(y^{\tau-1}\,|\,y^{\tau})\) is predicted by a trained neural network (e.g., UNet) in each reverse time step for all \(\tau\in\{1,\cdots,T\}\), where \(T\) is the maximal time step. With the network model parameters denoted by \(\theta^{r}\), we can write the reverse process as \[p_{\theta^{r}}(y^{\tau-1}\,|\,y^{\tau})=\mathcal{N}(y^{\tau-1};\mu_{\theta^{r} }(y^{\tau},\tau),\Sigma_{\theta^{r}}(y^{\tau},\tau)).\] Similarly, we can write \(y^{\tau-1}\) backward in terms of \(y^{\tau}\) as \[y^{\tau-1}=\frac{1}{\sqrt{\alpha^{\tau}}}(y^{\tau}\frac{1-\alpha^{\tau}}{ \sqrt{1-\tilde{\alpha}^{\tau}}}\epsilon_{\theta^{r}}(y^{\tau},\tau))+\sigma^{t }\mathbf{z},\] where \(\sigma^{\tau}\) is the variance scheme the model can learn, the component \(\mathbf{z}\) is a stochastic sampling process. 
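A minimal NumPy sketch of the forward noising and of a single reverse update described above is given below. The bounds of the linear \(\beta\) schedule, the choice \(\sigma^{\tau}=\sqrt{\beta^{\tau}}\), and the `eps_model` callable (a stand-in for the trained conditional denoising network applied to the concatenation of \(y^{\tau}\), the atlas \(\mathcal{S}\), and the sparse stack \(I\)) are illustrative assumptions rather than the exact training configuration.

```python
import numpy as np

def linear_schedule(T, beta_min=1e-4, beta_max=2e-2):
    """Increasing variances beta^1 < ... < beta^T (bounds are illustrative)."""
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def forward_sample(y0, tau, alpha_bars, rng):
    """y^tau = sqrt(abar^tau) * y^0 + sqrt(1 - abar^tau) * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(y0.shape)
    ab = alpha_bars[tau]
    return np.sqrt(ab) * y0 + np.sqrt(1.0 - ab) * eps, eps

def reverse_step(y_tau, tau, betas, alphas, alpha_bars, eps_model, cond, rng):
    """One reverse update y^tau -> y^{tau-1}: the predicted noise is subtracted
    from y^tau (as described in the text) and fresh noise sigma^tau * z is added."""
    eps_hat = eps_model(np.concatenate([y_tau, cond], axis=0), tau)  # hypothetical network call
    coef = (1.0 - alphas[tau]) / np.sqrt(1.0 - alpha_bars[tau])
    mean = (y_tau - coef * eps_hat) / np.sqrt(alphas[tau])
    z = rng.standard_normal(y_tau.shape) if tau > 0 else np.zeros_like(y_tau)
    return mean + np.sqrt(betas[tau]) * z   # sigma^tau = sqrt(beta^tau), one common choice
```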
The model is trained with input \(y^{\tau}\) to subtract the noise scheme \(\epsilon_{\theta^{r}}(y^{\tau},\tau)\) from \(y^{\tau}\) to produce \(y^{\tau-1}\). The output of this reverse process is a predicted velocity field \(v(\theta^{r})\), which is then used to generate its associated transformation \(\phi(v(\theta^{r}))\) to deform the atlas \(S\). Such a deformed atlas is the reconstructed image \(\hat{\mathcal{Y}}=\mathcal{S}\circ\phi(v(\theta^{r}))\). An overview of the proposed SADIR network architecture is shown in Fig. 1. Figure 1: An overview of our proposed 3D reconstruction model SADIR. ### Network Loss and Optimization The network loss function of our model, SADIR, is a joint loss of the atlas building network and the diffusion reconstruction network. We first define the atlas building loss as \[\mathcal{L}(\theta^{a})=\sum_{n=1}^{N}\frac{1}{\sigma^{2}}\|\mathcal{S}(\theta^{ a})\circ(\phi_{n}(v_{n}))-\mathcal{Y}_{n}\|_{2}^{2}+(Lv_{n},v_{n})+\mathrm{reg}( \theta^{a}), \tag{6}\] where \(\mathrm{reg}(\cdot)\) denotes a regularization on the network paramters. We then define the loss function of the diffusion reconstruction network as a combination of sum-of-squared differences and Sorensen\(-\)Dice coefficient [10] loss (for distinct anatomical structure, e.g., brain ventricles or myocardium) between the predicted reconstruction and ground-truth in following \[\mathcal{L}(\theta^{r})=\sum_{n=1}^{N}\|\mathcal{S}\circ\phi_{n}(v_{n}(\theta^ {r}))-\mathcal{Y}_{n}\|_{2}^{2}+\eta\left[1-\text{Dice}(\mathcal{S}\circ\phi_ {n}(v_{n}(\theta^{r})),\mathcal{Y}_{n})\right]+\mathrm{reg}(\theta^{r}), \tag{7}\] where \(\eta\) is the weighting parameter, and \(\text{Dice}(\hat{\mathcal{Y}},\mathcal{Y}_{n})=2(|\hat{\mathcal{Y}}|\cap| \mathcal{Y}_{n}|)/(|\hat{\mathcal{Y}}|+|\mathcal{Y}_{n}|)\), considering \(\hat{\mathcal{Y}}_{n}\overset{\Delta}{=}\mathcal{S}\circ\phi_{n}(v_{n}(\theta ^{r}))\). Defining \(\lambda\) as a weighting parameter, we are now ready to write the joint loss of SADIR as \[\mathcal{L}=\mathcal{L}(\theta^{a})+\lambda\mathcal{L}(\theta^{r}).\] **Joint network learning with an alternative optimization.** We use an alternative optimization scheme [31] to minimize the total loss \(\mathcal{L}\) in Eq. (3.2). More specifically, we jointly optimize all network parameters by alternating between the training of the atlas building and diffusion reconstruction network, making it end-to-end learning. A summary of our joint training of SADIR is presented in Alg. 1. ``` Input : A group of \(N\) input images with full 3D volumes \(\{\mathcal{Y}_{n}\}\) and a stack of sparse 2D images \(\{I_{n}\}\). Output : Generate mean shape or atlas \(\mathcal{S}\), initial velocity fields \(v_{n}\), and reconstructed images \(\hat{\mathcal{Y}}_{n}\) 1for i = 1 to \(p\)do /* Train geometric shape learning network */ 2 Minimize the atlas building loss in Eq. (6) 3 Output the atlas \(\mathcal{S}\) /* Train diffusion network */ 4 Minimize the diffusion reconstruction loss in Eq. (7) 5 Output the initial velocity fields \(\{v_{n}\}\) and the reconstructed images \(\hat{\mathcal{Y}}_{n}\) 6 7 end for Until convergence ``` **Algorithm 1**Joint Training of SADIR. ## 4 Experimental Evaluation We demonstrate the effectiveness of our proposed model, SADIR, for 3D image reconstruction from 2D slices on both brain and cardiac MRI scans. 
**3D Brain MRIs:** For 3D real brain MRI scans, we include \(214\) public T1-weighted longitudinal brain scans from the latest released Open Access Series of Imaging Studies (OASIS-III) [23]. All subjects include both healthy and disease individuals, aged from \(42\) to \(95\). All MRIs were pre-processed as \(256\times 256\times 256\), \(1.25mm^{3}\) isotropic voxels, and underwent skull-stripped, intensity normalized, bias field corrected and pre-aligned with affine transformation. To further validate the performance of our proposed model on specific anatomical shapes, we select left and right brain ventricles available in the OASIS-III dataset [23]. **3D Cardiac MRIs:** For 3D real cardiac MRI, we include \(215\) publicly available 3D myocardium mesh data from MedShapeNet dataset [24]. We convert the mesh data to binary label maps using 3D slicer [13]. All the images were pre-processed as \(222\times 222\times 222\) and pre-aligned with affine transformation. ### Experimental Settings We first validate our proposed model, SADIR, on reconstructing 3D brain ventricles, as well as brain MRIs from a sparse stack of eight 2D slices. We compare our model's performance with three state-of-the-art deep learning-based reconstruction models: 3D-UNet [9]; DDPM, a probabilistic diffusion model [16]; and DISPR, a diffusion model based shape reconstruction model with geometric topology considered [37]. Three evaluation metrics, including the Sorensen-Dice coefficient (DSC) [10], Jaccard Similarity [19], and RHD95 score [18], are used to validate the prediction accuracy of brain ventricles for all methods. For brain MR images, we show the error maps of reconstructed images for all the experiments. To further validate the performance of SADIR on different datasets, we run tests on a relatively small dataset of cardiac MRIs to reconstruct 3D myocardium. **Parameter setting:** We set the mean and standard deviation of the forward diffusion process to be \(0\) and \(0.1\), respectively. The scheduling is \(\mathrm{linear}\) for the noising process and is scaled to reach an isotropic Gaussian distribution irrespective of the value of \(T\). For the atlas building network, we set the depth of the UNet architecture as \(4\). We set the number of time steps for Euler integration in EPDiff (Eq. (3)) as \(10\), and the noise variance \(\sigma=0.02\). For the shooting, we use a kernel map valued \([0.5,0,1.0]\). Besides, we set the parameter \(\alpha=3\) for the operator \(L\). Similar to [37], we set the batch size as \(1\) for all experiments. We utilize the cosine annealing learning rate scheduler that starts with a learning rate of \(\eta=1\mathrm{e}^{-3}\) for network training. We run all models on training and validation images using the Adam optimizer and save the networks with the best validation performance. In the reverse process of the diffusion network, we set the depth of the 3D attention-UNet backbone as 6. We introduce the attention mechanism via spatial excitation chan nels [17], with ReLU (Rectified Linear Unit) activation. The UNet backbone has ELU activation (Exponential Linear Unit) in the hidden convolution layers and GeLU (Gaussian error Linear Unit) activation with tanh approximation. For each training experiment, we utilize Rivanna (high-performance computing servers of the University of Virginia) with NVIDIA A100 and V100 GPUs for \(\sim 18\) hours (till convergence). 
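For concreteness, the Dice and Jaccard overlap metrics described above (the former also enters the reconstruction loss of Eq. (7)) can be computed on binarized volumes as in the following sketch; thresholding the reconstructed volume at \(0.5\) is an assumption made here purely for illustration.

```python
import numpy as np

def dice_coefficient(pred, gt, threshold=0.5):
    """Soerensen-Dice coefficient 2|A intersect B| / (|A| + |B|) between 3D volumes."""
    a = pred >= threshold
    b = gt >= threshold
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard_index(pred, gt, threshold=0.5):
    """Jaccard similarity |A intersect B| / |A union B| between 3D volumes."""
    a = pred >= threshold
    b = gt >= threshold
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice_loss_term(pred, gt, eta=1.0):
    """The eta * (1 - Dice) contribution used in the reconstruction loss of Eq. (7)."""
    return eta * (1.0 - dice_coefficient(pred, gt))
```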
For all the experimental datasets, we split all the training datasets into \(70\%\) training, \(15\%\) validation, and \(15\%\) testing. For both training and testing, we downsample all the image resolutions to \(64\times 64\times 64\). ### Experimental Results Fig. 2 visualizes examples of ground truth and reconstructed 3D volumes of brain ventricles from all methods. It shows that SADIR outperforms all baselines in well preserving the structural information of the brain ventricles. In particular, models without considering the shape information of the images (i.e., 3D-UNet and DDPM) generate unrealistic shapes such as those with joint ventricles, holes in the volume, and deformed ventricle tails. While the other algorithm, DISPR, shows improved performance of enforcing topological consistency on the object surface, its predicted results of 3D volumes are inferior to SADIR. Tab. 1 reports the average scores along with the standard deviation of the Dice similarity coefficient (DSC), Jaccard similarity, and Hausdorff distance computed between the brain ventricles reconstructed by all the models and the ground truth. Compared to all the baselines, SADIR achieves the best performance with a \(1.6\%-5.6\%\) increase in the average DSC with the lowest standard deviations across all metrics. Figure 2: Top to bottom: examples of reconstructed 3D brain ventricles from sparse 2D slices; Left to right: a comparison of brain ventricles of all reconstruction models with ground truth. Fig. 3 visualizes the ground truth and reconstructed 3D brain MRIs as a result of evaluating DDMP and our method SADIR on the test data, along with their corresponding error maps. The error map is computed as absolute values of an element-wise subtraction between the ground truth and the reconstructed image. The images reconstructed by SADIR outperform the DDPM with a low absolute reconstruction error. Our method also preserves crucial anatomical features such as the shape of the ventricles, corpus callosum and gyri, which cannot be seen in the images reconstructed by the DDPM. This can be attributed to the lack of incorporating the shape information to guide the 3D MRI reconstruction. Moreover, our model has little to no noise in the background as compared to the DDPM. Tab. 2 reports the average scores of DSC, Jaccard similarity, and Hausdorff distance evaluated between the reconstructed myocardium from all algorithms and the ground truth. Our method proves to be competent in reconstructing 3D volumes without discontinuities, artifacts, jagged edges or amplified structures, as can be seen in results from the other models. Compared to the baselines, SADIR achieves the best performance in terms of DSC, Jaccard similarity, and RHD95 with the lowest standard deviations across all metrics. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & DSC \(\uparrow\) & Jaccard similarity \(\uparrow\) & RHD95 \(\downarrow\) \\ \hline 3D-Unet & 0.878 \(\pm\) 0.0128 & 0.804 \(\pm\) 0.0204 & 4.366 \(\pm\) 1.908 \\ DDPM & 0.731 \(\pm\) 0.0292 & 0.652 \(\pm\) 0.0365 & 8.827 \(\pm\) 9.212 \\ DISPR & 0.918 \(\pm\) 0.0097 & 0.861 \(\pm\) 0.0158 & **1.041 \(\pm\) 0.130** \\ **SADIR** & **0.934 \(\pm\) 0.013** & **0.900 \(\pm\) 0.021** & 1.414 \(\pm\) 0.190 \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of 3D brain ventricle reconstruction for all methods. Figure 3: Left to right: a comparison of ground truth, DDPM, and SADIR along with the error map. Fig. 
4 visualizes a comparison of the reconstructed 3D myocardium between the ground truth and all models. It shows that our method consistently produces reconstructed volumes that preserve the original shape of the organ with fewer artifacts. Fig. 5 shows examples of the superior, left, anterior and left-anterior views of the 3D ground truth and SADIR-reconstructed volumes of the myocardium for different subjects. We observe that the results predicted by SADIR have little to no difference from the ground truth, thereby efficiently preserving the anatomical structure of the myocardium. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & DSC \(\uparrow\) & Jaccard similarity \(\uparrow\) & RHD95 \(\downarrow\) \\ \hline 3D-Unet & 0.870 \(\pm\) 0.0158 & 0.771 \(\pm\) 0.024 & 0.840 \(\pm\) 0.202 \\ DDPM & 0.823 \(\pm\) 0.014 & 0.668 \(\pm\) 0.019 & 1.027 \(\pm\) 0.093 \\ DISPR & 0.950 \(\pm\) 0.017 & 0.906 \(\pm\) 0.031 & 0.347 \(\pm\) 0.032 \\ **SADIR** & **0.978 \(\pm\) 0.016** & **0.957 \(\pm\) 0.031** & **0.341 \(\pm\) 0.023** \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison of 3D myocardium reconstruction for all methods. Figure 4: A comparison of reconstructed 3D myocardium between ground truth, 3D-UNet, DDPM, DISPR, and SADIR over four different views. ## 5 Conclusion This paper introduces a novel shape-aware image reconstruction framework based on diffusion models, named SADIR. In contrast to previous approaches that mainly rely on the information of image intensities, our model SADIR incorporates shape features in the deformation spaces to preserve the geometric structures of objects in the reconstruction process. To achieve this, we develop a joint deep network that simultaneously learns the underlying shape representations from the training images and utilizes them as prior knowledge to guide the reconstruction network. To the best of our knowledge, we are the first to incorporate deformable shape features into a diffusion model for the task of image reconstruction. Experimental results on both 3D brain and cardiac MRI show that our model efficiently produces 3D volumes from a limited number of 2D slices with substantially low reconstruction errors while better preserving the topological structures and shapes of the objects. #### Acknowledgement. This work was supported by NSF CAREER Grant 2239977 and NIH 1R21EB032597.
2309.13126
Stretched-exponential relaxation in weakly-confined Brownian systems through large deviation theory
Stretched-exponential relaxation is a widely observed phenomenon found in ordered ferromagnets as well as glassy systems. One modeling approach connects this behavior to a droplet dynamics described by an effective Langevin equation for the droplet radius with a $r^{2/3}$ potential. Here, we study a Brownian particle under the influence of a general confining, albeit weak, potential field that grows with distance as a sub-linear power law. We find that for this memoryless model, observables display stretched-exponential relaxation. The probability density function of the system is studied using a rate function ansatz. We obtain analytically the stretched-exponential exponent along with an anomalous power-law scaling of length with time. The rate function exhibits a point of nonanalyticity, indicating a dynamical phase transition. In particular, the rate function is double-valued both to the left and right of this point, leading to four different rate functions, depending on the choice of initial conditions and symmetry.
Lucianno Defaveri, Eli Barkai, David A. Kessler
2023-09-22T18:22:56Z
http://arxiv.org/abs/2309.13126v2
# Stretched-exponential relaxation in weakly-confined Brownian systems through large deviation theory ###### Abstract Stretched-exponential relaxation is a widely observed phenomenon found in glassy systems. It was previously modeled with non-Markovian dynamics reflecting a memory effect. Here, we study a Brownian particle under the influence of a confining, albeit weak, potential field that grows with distance as a sub-linear power law. We find that for this memoryless model, observables display stretched-exponential relaxation. The probability density function of the system is studied using a rate function ansatz. We obtain analytically the stretched-exponential exponent along with an anomalous power-law scaling of length with time. The rate function exhibits a point of nonanalyticity, indicating a dynamical phase transition. In particular, the rate function is double-valued both to the left and right of this point, leading to four different rate functions, depending on the choice of initial conditions and symmetry. _Introduction_.- Anomalous relaxation, characterized by non-exponential decay, is observed in a wide range of physical systems [1]. One class of such behavior is stretched-exponential relaxation [2; 3; 4; 5; 6]. This has been seen, for example, in disordered or heterogeneous systems, where the complex interplay of interactions leads to a broad distribution of relaxation times [7]. In the context of diffusion in disordered media, the heterogeneous nature of the medium gives rise to a stretched-exponential pattern in particle movement, as observed in disordered dielectrics [8; 9; 10]. Moreover, the relaxation of systems with glassy dynamics often exhibits a stretched-exponential behavior [11; 12]. However, the possibility of simple Brownian models exhibiting such relaxation properties remains largely unexplored. In this Letter, we present a solution to the Fokker-Planck equation with an external potential that grows with distance as a sub-linear power law. Remarkably, we observe that the relaxation of the various moments of the position to their equilibrium values follows an anomalous stretched exponential. Formally, the Fokker-Planck equation (see Eq. (2)) for \(P(x,t)\), the probability density function (PDF), can be solved via an eigenfunction expansion, and yields stretched-exponential relaxation. However, it turns out that there is a simpler, more direct, approach, which also yields additional physical insight. We find that in the long-time limit, \(P(x,t)\) takes the form of the exponential of a rate function which possesses a scaling form, as usually results from a large-deviation formalism [13], which looks at the far tails of the distribution of an observable. Large deviation theory is a subject of much active interest in statistical physics, [14; 15; 16; 17; 18; 19; 20; 21; 22; 23], in particular, due to the recent discovery of dynamical phase transitions in the large-deviation behavior of some model systems [24; 25; 26; 27; 28; 29; 30; 31]. These are nonanalytic points of the rate function, and are so-called due to the analogy of the rate function to an equilibrium free energy [17; 26]. We show the relationship between the stretched-exponential relaxation and the presence of a dynamical phase transition. 
The rate function is defined as the logarithm of the PDF \(P(x,t)\), divided by a power of the time [17; 18; 26] \[\mathcal{I}(z)\equiv\lim_{\begin{subarray}{c}t,z\rightarrow\infty\\ z=x/t\end{subarray}}\frac{-\ln P(x,t)}{t^{\nu}}\,, \tag{1}\] with the anomalous time-exponent \(\nu\neq 1\) and where \(\mathcal{I}\) is a function of the scaling variable \(z\equiv x/t^{\gamma}\). The stretched-exponential relaxation of observables will be governed by the same anomalous exponent \(\nu\). It should be noted that in the previously identified cases with anomalous temporal scaling, the observable in question was nonlocal in time, whereas here it is the PDF of \(x\) itself that exhibits the anomalous scaling. The appearance of a rate function in our problem implies the surprising result that the anomalous stretched-exponential relaxation we observe is a result of large deviations, i.e. the dynamics at large \(x\). We find that the rate function is multivalued, and possesses a critical-point \(z_{c}\). The rate function has two possible branches for \(z<z_{c}\) and two for \(z>z_{c}\), all meeting at \(z_{c}\). All four possible combinations of branches below and above \(z_{c}\) have different interpretations, corresponding to different classes of initial conditions and parity. Two of the combinations have a jump discontinuity in \(\mathcal{I}^{\prime\prime}(z)\), marking a dynamical phase transition. Such a multi-valued rate function appears not to have previously been encountered. Using the appropriate branches of the rate function, we obtain the time scales of the stretched-exponential relaxation of the even and odd moments of \(x\). _Model_.- We study non-interacting Brownian particles, in contact with a heat bath at temperature \(T\), that are also subject to a binding potential field \(V(x)\). The spatial spreading of the cloud of particles can be described via the PDF \(P(x,t)\), obtained from the Fokker-Planck equation \[\frac{\partial}{\partial t}P(x,t)=D\left[\frac{\partial^{2}}{\partial x^{2}}+ \frac{\partial}{\partial x}\left(\frac{V^{\prime}(x)}{k_{B}T}\right)\right]P (x,t)\,, \tag{2}\] where \(D\) is the diffusion coefficient. We consider herein even potentials, satisfying \(V(x)=V(-x)\), which for large \(x\) grow as a sublinear power-law, \(V(x)\propto x^{\alpha}\), where \(0<\alpha<1\). This means that the force, \(F(x)=-V^{\prime}(x)\), will be negligible for large \(x\), as \(F(x)\propto x^{\alpha-1}\). At long times, the particles will reach the stationary equilibrium Boltzmann-Gibbs state \(P_{\rm BG}(x)=\exp[-V(x)/k_{B}T]/Z\), with \(Z\) being the normalizing partition function. For the more commonly considered superlinear growth of the potential, \(\alpha>1\), \(F(x)\) grows with distance and everything is standard. The system exponentially relaxes to the Boltzmann-Gibbs state, at a rate given by the first nonzero eigenvalue of the Fokker-Planck operator, which has a discrete spectrum starting at \(0\). This discreteness follows from the fact that under a similarity transformation, the Fokker-Planck equation becomes a Schrodinger equation [32] with an effective potential \(V_{S}(x)=F(x)^{2}/4k_{B}T+F^{\prime}(x)/2k_{B}T\), which grows without bound as \(x\to\infty\). However, for \(\alpha<1\), \(V_{S}(x)\) decays at large \(x\) as \(x^{-2(1-\alpha)}\) and so the spectrum goes continuously down to \(0\). 
Potentials of this form have already been studied in the context of resetting processes [33] and active processes [34], in particular, the \(\alpha=1\) limiting case [35; 36; 37]. For \(\alpha\to 0\) (and assuming \(V(x)\sim x^{\alpha}/\alpha\)), we have that for large \(x\), \(V(x)\sim\ln x\). In that limit, the relaxation has been shown to be governed by a power law [38; 39; 40; 41]. The question is then what happens for \(0<\alpha<1\)? For our numerical examples, we use the family of potentials \[V(x)=V_{0}\left(1+\frac{x^{2}}{\ell^{2}}\right)^{\alpha/2}\,. \tag{3}\] For \(x\gg\ell\), with \(\ell\) being the lengthscale of the center region, the potential exhibits the desired power law growth. Throughout this Letter, we shall scale the position variable by \(\ell\) and correspondingly the time by \(\ell^{2}/D\). As mentioned above, the \(P(x,t)\) can be decomposed in a sum of eigenfunctions, each decaying as \(e^{-\lambda t}\) with a continuous eigenvalue spectrum \(\lambda\) starting at \(0^{+}\), together with the Boltzmann-Gibbs bound state at \(\lambda=0\). It turns out that the dominant continuum contribution to \(P(x,t)\) for large times comes from the vicinity of a particular finite eigenvalue \(\lambda_{*}\). This dominant eigenvalue scales as a negative power of the time, \(\lambda_{*}(t)\sim t^{\nu-1}\), \(0\leq\nu<1\), so that \(e^{-\lambda_{*}(t)t}\ \sim e^{-t^{\nu}}\), a stretched-exponential relaxation to equilibrium. This stretched-exponential relaxation is easily demonstrated numerically (see below). The eigenvalue calculation will be sketched in the Supplemental Material (SM). We turn now instead to the rate-function calculation. Most strikingly, we shall see that the power of \(t\) in the stretched exponential can be obtained immediately from our scaling ansatz for the rate function. _Large deviation formalism.-_ From Eq. (1) we can write the PDF, up to pre-exponential factors, as \(P(x,t)\sim e^{-t^{\nu}\mathcal{I}(z)}\). Inserting this ansatz into the Fokker-Planck equation (2), we find that in the long-time limit \[\frac{z\gamma\mathcal{I}^{\prime}(z)-\nu\mathcal{I}(z)}{t^{1-\nu}}=\frac{ \mathcal{I}^{\prime}(z)^{2}}{t^{2\gamma-2\nu}}-\frac{V_{0}}{k_{B}T}\frac{ \alpha}{z^{1-\alpha}}\frac{\mathcal{I}^{\prime}(z)}{t^{(2-\alpha)\gamma-\nu}}. \tag{4}\] We impose that the time dependence of all terms is the same, namely, \(1-\nu=2\gamma-2\nu=(2-\alpha)\gamma-\nu\), and find that the scaling exponents are \[\nu=\frac{\alpha}{2-\alpha}\text{ and }\gamma=\frac{1}{2-\alpha}\,. \tag{5}\] With these exponents, both the exponent of the Boltzmann-Gibbs solution, \(V(x)/k_{B}T=V_{0}/k_{B}Tx^{\alpha}=t^{\nu}V_{0}/k_{B}Tz^{\alpha}\), and of free-diffusion, \(x^{2}/4Dt=t^{\nu}z^{2}/4D\), are compatible with our scaling. The rate function can be found by solving the nonlinear differential equation (4), which now reads \[\mathcal{I}^{\prime}(z)^{2}-\left(\frac{V_{0}}{k_{B}T}\frac{\alpha}{z^{1- \alpha}}+\frac{z}{2-\alpha}\right)\mathcal{I}^{\prime}(z)+\frac{\alpha \mathcal{I}(z)}{2-\alpha}=0. \tag{6}\] When we consider the large-\(z\) limit, the rate function solution will assume the form \(\mathcal{I}(z)\approx\xi_{0}z^{\mu_{0}}\), where \(\mu_{0}\) and \(\xi_{0}\) must be determined. Plugging \(\mathcal{I}(z)\) into Eq. (6) in the limit of large \(z\), we have \[\xi_{0}\mu_{0}^{2}z^{\mu_{0}-2}-\frac{V_{0}}{k_{B}T}\alpha\mu_{0}z^{\alpha-2} \sim\left(\frac{\mu_{0}-\alpha}{2-\alpha}\right)\,. \tag{7}\] There are two possible solutions for this equation. 
First, we have \(\mu_{0}=\alpha\), leading to \(\mathcal{I}(z)=V_{0}z^{\alpha}/k_{B}T\), i.e. the mentioned Boltzmann-Gibbs state, which is a solution of Eq. (6) for all \(z\). Second, we have \(\mu_{0}=2\) and \(\xi_{0}=1/4\), which is equivalent to large \(z\) diffusive behavior. We emphasize that these are the only possible asymptotic solutions of Eq. (6). Both these behaviors are necessary. The Figure 1: All possible rate functions, defined by Eq. (1), which are solutions of the differential equation in Eq. (6), versus the scaled position \(z\equiv x/t^{\gamma}\) (the scaled exponents defined in Eq. (5)) for \(\alpha=1/2\). The rate function has two low-\(z\) and two high-\(z\) branches, leading to four different possible forms. The region where the discriminant of Eq. (8) is negative is highlighted in gray. Boltzmann-Gibbs state is a possible time-independent solution of the problem, when the initial state is the equilibrium state. However, under any initial conditions that decay with \(x\) faster than the Boltzmann-Gibbs state and for any finite time \(t\), we cannot expect the Boltzmann-Gibbs state to describe the whole PDF. The particles cannot spread faster than what is permitted by diffusion. Since \(F(x)\to 0\) as \(x\to\infty\), we expect diffusive behavior at large \(|x|\), consistent with the second asymptotic behavior. The next step is to solve Eq. (6) globally. Since Eq. (6) is a second-order polynomial in \(\mathcal{I}^{\prime}(z)\), we have in fact two different options for the ODE: \[\mathcal{I}^{\prime}(z) =\frac{1}{2}\left(\frac{V_{0}}{k_{B}T}\frac{\alpha}{z^{1-\alpha} }+\frac{z}{2-\alpha}\right)\] \[\pm\frac{1}{2}\sqrt{\left(\frac{V_{0}}{k_{B}T}\frac{\alpha}{z^{1 -\alpha}}+\frac{z}{2-\alpha}\right)^{2}-\frac{4\alpha\mathcal{I}(z)}{2-\alpha }}\,. \tag{8}\] These two ODEs give rise to two smooth solutions that cross at a critical point \(z_{c}\), at which the square-root vanishes. One solution has the positive sign of the square root for \(z<z_{c}\) and the negative sign for \(z>z_{c}\) and the other with the opposite choices. Interestingly, two other solutions are also possible, where the sign does not switch, and which have a jump in \(\mathcal{I}^{\prime\prime}\), leaving us with four possible descriptions, two for \(z<z_{c}\) and two for \(z>z_{c}\), as shown in Fig. 1. The next task is to uncover the physical content of these rate-function branches. _Boltzmann-Gibbs thermal initial condition._- As noted above, if we were to start at time \(t=0\) with the thermal state in all of space, the state would remain unchanged for all time. In Fig. 1, this is shown as the curve \(ACE\). The square-root in Eq. (8) vanishes at a critical point \(z_{c}\), \[z_{c}=\left(\frac{\alpha(2-\alpha)V_{0}}{k_{B}T}\right)^{1/(2-\alpha)}\,. \tag{9}\] The pure thermal state is obtained when we switch from positive (\(z<z_{c}\)) to negative (\(z>z_{c}\)) in Eq. (8). _Localized initial condition._- Our goal is to associate the solutions shown in Fig. 1 with classes of initial conditions and with parity. For an initially localized packet of particles, we expect the large \(z\) behavior to be diffusive rather than Boltzmann-Gibbs. However, for small \(z\), we expect the behavior to match Boltzmann-Gibbs, so that the Boltzmann-Gibbs regime in \(x\) expands as \(t^{\gamma}\). Keeping the positive sign of the square-root for \(z>z_{c}\) leads to the desired diffusive-like large \(z\) behavior. The resulting singularity in \(\mathcal{I}^{\prime\prime}\) is precisely the dynamical phase transition. 
The resulting curve, \(ACD\) in Fig. 1, describes the localized initial condition. We can write odd/even PDFs using distinct rate functions as \[P_{\text{even}}\sim e^{-t^{\gamma}\mathcal{I}_{\text{even}}(z)}\text{ and }P_{\text{odd}}\sim e^{-t^{\gamma}\mathcal{I}_{\text{odd}}(z)}\,. \tag{10}\] If the particles start at the origin (\(x_{0}=0\)), by symmetry, \(P_{\text{odd}}(x,t)=0\), since \(V(x)\) is even and so all odd moments vanish, as they do in equilibrium. On the other hand, for \(x_{0}\neq 0\), the odd part is present, \(P_{\text{odd}}=(P(x,t)-P(-x,t))/2\), and is described by the curve \(BCD\). We then write that \(\mathcal{I}_{\text{odd}}(z)\equiv\mathcal{I}_{BCD}(z)\), which tends to a constant value \(\mathcal{I}_{\text{odd}}(0)\) as \(z\) approaches zero, see Fig. 1. Therefore, we have that \(P_{\text{odd}}(x,t)\) has an upper bound that decays as \(e^{-\mathcal{I}_{\text{odd}}(0)t^{\gamma}}\), indicating that the odd contributions will decay as a stretched exponential. Curve \(BCE\) describes the contribution from the continuum modes. This contribution is obtained by Figure 2: Comparison between theoretical and numerical (at finite times) rate functions, Eq. (1), as a function of the scaled position \(z\equiv x/t^{\gamma}\). The solid lines represent numerical results obtained integrating Eq. (2) at different times, indicated in the legend (panel (a)) for all panels. In panel (a) we show the clear convergence to the rate function \(\mathcal{I}_{ACD}(z)\) and highlight (white-filled circle) the critical transition point \(z_{c}\). In panel (b) we show the derivative \(\mathcal{I}^{\prime}(z)\), and the non-analytical behavior becomes clear. The solution is equivalent to Boltzmann-Gibbs up until the critical point, where the system changes to the free-particle solution for larger values of \(z\). In panel (c) we have the odd rate functions, where there is no dynamical phase transition. The numerical solutions demonstrate clear convergence towards the expected rate functions. We have used \(\alpha=3/4\), \(\ell=1\), \(D=1\) and \(V_{0}/k_{B}T=1\). subtracting from \(P(x,t)\) the Boltzmann-Gibbs solution, \(P^{*}\equiv P-P_{\rm BG}\). At large \(x\), this results in the negative of the Boltzmann-Gibbs solution, as the full solution is small. The stretched-exponential relaxation of the even moments of \(x\) to their equilibrium values is controlled by \(P^{*}\). The rate function, \(BCE\) in Fig. 1, also displays a dynamical phase transition at the critical point. Finally, the even part, \(P_{\rm even}=(P(x,t)+P(-x,t))/2\), is described by the \(ACD\) rate function, \(\mathcal{I}_{\rm even}(z)\equiv\mathcal{I}_{ACD}(z)\). This solution also captures the leading behavior of the density \(P(x,t)\) for localized initial conditions. We presented the four different scenarios for the rate function represented by branches in Fig. 1. Which scenario is the relevant one is based on the choice of the initial condition and the parity. We now use finite time numerical integration of the FPE to show that, using Eq. (1) (adapted for finite times), the results converge to the expected rate functions. In Fig. 2(a), for localized initial conditions, we show the numerical convergence to the rate function \(ACD\), while in Fig. 2(b) we show the derivative of the curve \(ACD\). The derivative clearly shows the non-analytical behavior at the point \(z_{c}\). In Fig. 
In Fig. 2(c), we compare our long-time prediction for \(\mathcal{I}_{\rm odd}(z)\) (\(\mathcal{I}_{BCD}(z)\)) with numerical results (\(t^{-\nu}\log P_{\rm odd}\), with \(P_{\rm odd}\) obtained numerically), showing clear convergence. In summary, we find that two of the rate functions are completely analytical. Those are the Boltzmann-Gibbs (curve \(ACE\)) and the odd (curve \(BCD\)) rate functions. The other two possibilities, the localized initial condition rate function (curve \(ACD\)) and the continuum modes rate function (curve \(BCE\)), have a non-analytical behavior, and therefore a dynamical phase transition, at the critical point \(z_{c}\). _Remark on the notation and prefactors.-_ We highlight that the rate functions defined in Eq. (10) satisfy \(\mathcal{I}_{\rm even/odd}(z)=\mathcal{I}_{\rm even/odd}(-z)\), as these functions are even. Further, they do not depend explicitly on the initial conditions. The parity of \(P_{\rm even/odd}\) is determined by the pre-exponential factors. The exponential prefactors of \(P_{\rm odd}\) will depend on the initial condition, since for symmetric initial conditions the whole odd part must vanish. The prefactor is obtained using the WKB method in the SM. _Anomalous relaxation.-_ Here we study the relaxation properties of the system, focusing on the first moment of the position \(x\), starting with an asymmetric initial condition (\(x_{0}\neq 0\)). Because this observable, \(x\), is odd, the only non-zero contribution comes from the asymmetric part of the PDF. The odd part of the PDF can be written using \(\mathcal{I}_{\rm odd}\), up to pre-exponential factors, as shown in Eq. (10). We obtain, up to pre-exponential factors, the stretched-exponential behavior of the mean as \[\langle x\rangle=\int_{-\infty}^{\infty}x\,P_{\rm odd}(x,t)dx\sim e^{- \mathcal{I}_{\rm odd}(0)t^{\nu}}\,. \tag{11}\] The value of \(\mathcal{I}_{\rm odd}(0)\) governs the anomalous time-scale \(\tau\) of the relaxation. It is possible to obtain this value numerically by integrating Eq. (8) in the correct branch (positive sign). We have obtained an analytical expression through our eigenfunction calculations (see SM) \[\frac{1}{\tau^{\nu}}=\mathcal{I}_{\rm odd}(0)=\frac{\left(\sqrt{\pi}\left( \frac{\alpha V_{0}}{2k_{B}T}\right)^{\frac{1}{1-\alpha}}\frac{\Gamma\left( \frac{\alpha}{2-2\alpha}\right)}{\Gamma\left(\frac{\alpha}{2-2\alpha}\right)} \right)^{1-\nu}}{\nu^{\nu}(1-\nu)^{1-\nu}}. \tag{12}\] It is remarkable that a rate function controls the relaxation, in the sense that the relaxation cannot be considered a rare event, nor is it particularly hard to measure. In Fig. 3, the stretched-exponential behavior is shown numerically in the long-time limit. The observed time scale matches our prediction in Eq. (12) and is independent of the odd moment under consideration. The same analysis extends to even moments \(x^{2n}\) (see SM). In order to obtain a complete expression for the mean in Eq. (11), the rate function is not enough. We must account for the pre-exponential factor, \(\mathcal{A}(t)\), that is, \(P_{\rm odd}\approx\mathcal{A}(t)\,x^{1-\alpha}e^{-\mathcal{I}_{\rm odd}(z)t^ {\nu}}\). In the long-time limit, the main contribution to the integral in Eq. (11) arises from the region where \(x\) is much smaller than the critical \(x_{c}(t)\), corresponding to the small \(z\) region [42].
For small \(z\), we have \(\mathcal{I}_{\rm odd}(z)\approx\mathcal{I}_{\rm odd}(0)(1+k_{B}Tz^{2-\alpha} /V_{0}(2-\alpha)^{2})\), and the pre-exponential factor is (see SM) \[\mathcal{A}(t)=x_{0}\frac{k_{B}T}{(2-\alpha)V_{0}}\frac{\mathcal{I}_{\rm odd} (0)e^{\frac{V(0)}{k_{B}T}}}{t^{3/2-\nu}}\sqrt{\frac{\gamma-\nu}{4\pi}}\,. \tag{13}\] Note that for the transition value \(\alpha=1\), \(\gamma=\nu=1\), and \(\mathcal{A}(t)\) vanishes.

Figure 3: The log of the numerical ensemble average of the position for a system initialized with the localized condition \(P(x,t=0)=\delta(x-x_{0})\) for different values of \(\alpha\) (shown in the legend). The stretched-exponential behavior is clearly shown for long times, where we compare with \(\mathcal{I}_{\rm odd}(0)t^{\nu}\), where the anomalous time scale \(\mathcal{I}_{\rm odd}(0)\) is obtained in Eq. (12), and \(\nu=\alpha/(2-\alpha)\) (dashed black lines). In the inset, the complete theoretical prediction (dotted black lines), described by Eq. (14), is compared with the numerical simulations (colored lines) for the same exponents \(\alpha\) as in the main panel. We have used \(V_{0}/k_{B}T=1\), \(\ell=1\) and \(x_{0}=0.04\).

With the contribution of the prefactor, we obtain, from Eq. (11), the long-time expression for the relaxation, \[\frac{\langle x\rangle}{x_{0}} \sim \frac{\mathcal{C}_{1}\left(\frac{V_{0}}{k_{B}T},\alpha\right)}{t^ {\nu^{2}/2}}e^{-\mathcal{I}_{\mathrm{odd}}(0)t^{\nu}}\,, \tag{14}\] where the definition of \(\mathcal{C}_{1}\left(V_{0}/k_{B}T,\alpha\right)\) is found in the SM. Thus, the time relaxation of the system to equilibrium is through a stretched exponential (multiplied by a power law in time). We show the excellent agreement of Eq. (14) with the numerical results in the inset of Fig. 3. _Discussion.-_ The results we obtained in this Letter are quite general, but nevertheless, many extensions to this work are possible. As a first step in this direction, we studied a case in dimensions higher than one, showing that the main results remain valid. The characteristics of time-averaged observables and the potential link between different branches and singularities in the cumulant generating function warrant attention [43; 44]. It is likely that the multivalued nature of the rate function under study, which depends on symmetry and initial condition, is an important feature for other systems. Thus, the appearance of the multivalued rate function in more systems and its relationship with dynamical phase transitions requires further study. **Acknowledgements:** The support of Israel Science Foundation's grant 1614/21 is acknowledged. LD and EB thank Naftali Smith for the insightful conversations.
2309.08248
Verifiable Privacy-Preserving Computing
Privacy-preserving computation (PPC) methods, such as secure multiparty computation (MPC) and homomorphic encryption (HE), are deployed increasingly often to guarantee data confidentiality in computations over private, distributed data. Similarly, we observe a steep increase in the adoption of zero-knowledge proofs (ZKPs) to guarantee (public) verifiability of locally executed computations. We project that applications that are data intensive and require strong privacy guarantees are also likely to require verifiable correctness guarantees, especially when outsourced. While the combination of methods for verifiability and privacy protection has clear benefits, certain challenges stand in the way of their widespread practical adoption. In this work, we analyze existing solutions that combine verifiability with privacy-preserving computations over distributed data, in order to preserve confidentiality and guarantee correctness at the same time. We classify and compare 37 different schemes, regarding solution approach, security, efficiency, and practicality. Lastly, we discuss some of the most promising solutions in this regard, and present various open challenges and directions for future research.
Tariq Bontekoe, Dimka Karastoyanova, Fatih Turkmen
2023-09-15T08:44:13Z
http://arxiv.org/abs/2309.08248v3
# Verifiable Privacy-Preserving Computing

###### Abstract.

Privacy-preserving computation (PPC) methods, such as secure multiparty computation (MPC) and homomorphic encryption (HE), are deployed increasingly often to guarantee data confidentiality in computations over private, distributed data. Similarly, we observe a steep increase in the adoption of zero-knowledge proofs (ZKPs) to guarantee (public) verifiability of locally executed computations. We project that applications that are data intensive and require strong privacy guarantees are also likely to require correctness guarantees, especially when outsourced. While the combination of methods for verifiability and privacy protection has clear benefits, certain challenges stand in the way of their widespread practical adoption. In this work, we analyze existing solutions that combine verifiability with privacy-preserving computations over distributed data, in order to preserve confidentiality and guarantee correctness at the same time. We classify and compare 32 different schemes, regarding solution approach, security, efficiency, and practicality. Lastly, we discuss some of the most promising solutions in this regard, and present various open challenges and directions for future research.

distributed ledger technologies, homomorphic encryption, public auditability, public verifiability, secure multiparty computation, verifiable computing, zero-knowledge proofs
Finally, we identify open challenges for the different solutions and discuss which constructions are most promising. To summarize, our contributions are as follows:

* We identify and classify 32 existing solutions from academic literature into three main classes. We further divide each class based on the schemes' distinguishing primitives.
* We compare these works based on the settings in which privacy and correctness are provided. Moreover, we study the efficiency of the different schemes, and discuss practical aspects such as public verifiability, suitable use case, and practical limitations.
* We compare the different construction methods for different VPPC schemes and show which ones seem most promising for practical usage. Next to this, we denote open challenges and improvement areas for the different solutions, regarding security, efficiency, or practicality.

### Organization

The remainder of this work is organized as follows. Section 2 discusses preliminaries regarding verifiability approaches and Section 3 discusses relevant background information regarding (non-verifiable) PPCs. We classify the VPPC schemes we found in Section 4 and discuss and compare the different classes at a high level. Each solution (paradigm) and its challenges are discussed in detail, per class, in Sections 5 to 7. Finally, Section 8 concludes this work.

## 2. Preliminaries

This section provides relevant background on the three methods for (public) verifiability that are used to construct the significant majority of VPPC schemes: ZKPs, homomorphic MACs, and TEEs.

### Zero-knowledge proofs

A zero-knowledge proof allows a _prover_ to prove correctness of a certain statement to a (group of) _verifier_(s), whilst hiding all private information (Zhu et al., 2017). This is useful in circumstances where the correctness of a claim is not evident, but the prover does hold private information that is sufficient to create a correctness proof. A ZKP scheme should satisfy at least the following (informal) requirements:

* _Complete:_ An honest prover should be able to convince the verifier of the validity of any true statement.
* _Sound:_ A prover should not be able to convince the verifier of the validity of a false statement.
* _Zero-knowledge:_ Given a true statement, the protocol should not reveal any other information than the validity of a statement to the verifier, as long as the prover follows the protocol. In particular, no information should be leaked regarding the witness.

Initially, most ZKP schemes were solutions for specific problems, mostly in the form of \(\Sigma\)-protocols, e.g., Schnorr's protocol (Schnorr, 1977). While these schemes could be used to create proofs for arbitrary NP-statements, they often have a proof size and verification time that scale linearly (or worse) in the computation size. On top of that, \(\Sigma\)-protocols are _interactive_, which is often undesired. On the plus side, \(\Sigma\)-protocols can be made _non-interactive_ using the Fiat-Shamir (FS) heuristic (Schnorr, 1977), which additionally guarantees _public verifiability_. The introduction of Pinocchio (Pinocchio, 2017) as the first _succinct_ ZKP or zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) gave the first scheme with efficient verification and small proof size, at the cost of requiring a _trusted setup_. Moreover, it could be used to prove any computation that can be described as an arithmetic circuit, i.e., it supports verifiable computing. Pinocchio was followed by many schemes with improved efficiency or different properties, e.g., _Groth16_ (Groth16), _Marlin_ (Pinocchio, 2017), _Fractal_ (Pinocchio, 2017). Another line of efficient non-interactive zero-knowledge proofs (NIZKs) was started with the introduction of Bulletproofs (Krishnan, 2017). More recently, the efficient construction used for Bulletproofs has been applied to create \(\Sigma\)-bullets (Krishnan, 2017), or compressed \(\Sigma\)-protocol theory (Pinocchio, 2017). The protocols based on the latter can be used to construct zero-knowledge proofs for arithmetic circuits from standard security assumptions, which is not the case for zk-SNARKs, at the cost of an increased proof size and verification time. For a more extensive and up-to-date overview of ZKP schemes we refer the reader to, e.g., (Pinocchio, 2017; Pinocchio, 2017).

### Homomorphic MACs

A message authentication code (MAC) scheme enables a user to generate a short, unforgeable tag for a message such that, later, any recipient of both tag and message can verify the integrity and authenticity of that message. The tag is computed using a secret key and is verified by checking the MAC against this same secret authentication key. Regular MACs are non-malleable on purpose, i.e., it is undesirable for parties to be able to alter both message and tag in such a way that verification still succeeds. However, homomorphic MACs deliberately allow for a restricted and pre-defined set of operations to be applied to both message and tag such that verification is still successful.

### Trusted Execution Environments

TEEs are dedicated hardware (and software) components, running in an isolated part of the main CPU, that are capable of running code while maintaining data privacy and integrity. Nowadays, TEEs are offered by most major hardware vendors as well as a number of open source projects (Bartos et al., 2017; Pinocchio, 2017; Pinocchio, 2017). Code running on a TEE is isolated in such a way that it cannot be tampered with by any other process.
A user wishing to use a remote TEE securely can verify that it is running the right code and has been created using safe setup procedures, by using a process known as (remote) attestation, as supported by, e.g., Intel SGX (Pinocchio, 2017). However, the user does have to trust that the hardware has not been tampered with or is broken in an undetectable manner. There are cases in academic literature where TEE security has been broken; however, due to the novelty of the technology, it is still unclear to what extent such attacks are possible on the most recent TEE hardware (Krishnan, 2017).

## 3. Privacy-Preserving Computations on Distributed Data

There are different technologies that can be used to construct schemes for PPC over distributed data. Three of them are specifically suitable for our setting: HE, MPC, and DLT. The first two are well-established PPC methods, providing flexibility and offering strong privacy guarantees, making them excellent building blocks for VPPCs. The latter technology is more recent and does not directly provide privacy guarantees as needed for PPCs. However, DLT does provide a different way of realizing distributed computations in a more flexible manner. By pairing DLT with appropriate cryptographic building blocks, it can be used for VPPC.

### Homomorphic encryption

HE denotes a collection of (asymmetric) encryption schemes that allow users to perform operations on encrypted data without decrypting it first. In general, an HE scheme is described by four (probabilistic) polynomial-time algorithms (KeyGen, Enc, Dec, Eval), respectively the key generation, encryption, decryption, and function evaluation algorithm. While there are different ways to represent the function \(f\) that is to be evaluated on HE ciphertexts, most constructions prefer a translation to an arithmetic circuit \(\mathcal{C}_{f}\) over \(\mathbb{M}\). Notably, not all homomorphic encryption schemes can be used to evaluate arbitrary arithmetic circuits. We distinguish, following (Becker et al., 2017), several types of homomorphic schemes in increasing order of functionality. _Partially homomorphic encryption (PHE)_ is a type of homomorphic encryption that is only _additively_ or only _multiplicatively_ homomorphic, implying that it can only evaluate arithmetic circuits consisting solely of addition, respectively multiplication, gates. _Somewhat homomorphic encryption (SWHE)_ and _leveled homomorphic encryption (LHE)_ constructions can evaluate arithmetic circuits consisting of both addition and multiplication gates. However, the circuits that can be evaluated may be limited in the number of operations (of either type). For example, a scheme may only be able to evaluate circuits with a maximum multiplication depth of 2. These schemes can also have practical limitations, such as ciphertext or key sizes that increase exponentially based on the number of multiplications in the circuit that is to be evaluated. _Fully homomorphic encryption (FHE)_ schemes can evaluate arbitrary arithmetic circuits, without exponential blowup of ciphertext or key size. This generality often comes at the cost of computationally expensive bootstrapping after a number of multiplications (Zhu and Kwiecinski, 2017). In the remainder of this work, when we talk about HE we only refer to SWHE, LHE and predominantly FHE schemes.
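As a concrete toy illustration of the additive homomorphism offered by a PHE scheme, the sketch below implements textbook Paillier encryption with deliberately tiny, insecure parameters (the primes and messages are illustrative assumptions, not taken from any surveyed scheme): multiplying two ciphertexts modulo \(n^{2}\) yields an encryption of the sum of the plaintexts.

```python
# Toy Paillier (additively homomorphic PHE) with insecure, tiny parameters.
import math, random

p, q = 17, 19                      # toy primes; real deployments use >= 1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael function of n
g = n + 1                          # standard generator choice
mu = pow(lam, -1, n)               # works because L(g^lam mod n^2) = lam when g = n + 1

def L(x):
    return (x - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # randomness must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(5), encrypt(7)
c_sum = (c1 * c2) % n2             # homomorphic addition on ciphertexts
assert decrypt(c_sum) == (5 + 7) % n
print("Dec(Enc(5) * Enc(7)) =", decrypt(c_sum))
```

SWHE, LHE and FHE schemes extend this idea to multiplication gates as well, at the cost of noise management and, eventually, bootstrapping.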
### Secure multiparty computation

MPC is a collection of techniques that allows for performing joint computations over distributed data, while keeping all input data private and revealing nothing but the computation output. There are several ways to construct an MPC protocol or framework. We will discuss the most common constructions. Generally, there are \(n\) parties that participate in an MPC computation, and any participating party contributes with their respective private inputs. All \(n\) parties execute an _interactive_ protocol to compute the joint function output from their private inputs. Some of these parties may be corrupted by an adversary. Therefore, any secure MPC protocol should at least satisfy the following (informal) requirements (Krishnan et al., 2017), given the constraints of the used adversarial model (see Appendix A):

* _Input privacy:_ No party should learn anything other than the function output it is assigned. Specifically, no party should learn anything about other parties' inputs, except from what can be derived from its assigned output.
* _Correctness:_ Every party that receives a (partial) output obtains a value that is correct with respect to the protocol specification and input data.
* _Independence of inputs:_ Corrupted parties must pick their inputs independently of the honest parties' private inputs.

_Secret Sharing._ In secret sharing-based MPC, private data is split over multiple parties. Information is split in such a way that only predefined sets of parties can reconstruct the full information, while other (sub)sets cannot even deduce partial information. Most secret sharing schemes divide a secret \(s\) among \(n\) parties in such a way that only subsets with \(t\) or more parties can reconstruct \(s\). Such a scheme is called a \(t\)-out-of-\(n\) threshold scheme. _Additive secret sharing_ is one of the most intuitive examples of a secret sharing scheme. Given a prime \(q\) and the finite field \(\mathbb{Z}_{q}\), a secret \(s\in\mathbb{Z}_{q}\) is shared among \(n\) parties \(P_{1},\ldots,P_{n}\) by sending a random number \(s_{i}\in\mathbb{Z}_{q}\) to each party \(P_{i}\) such that \(\sum_{i}s_{i}=s\ (\text{mod }q)\). Clearly, the secret can only be reconstructed by using the individual shares of all parties, while any strict subset of parties cannot even deduce partial information on \(s\), i.e., it is an example of an \(n\)-out-of-\(n\) threshold scheme. It is also a _linear secret sharing scheme (LSSS)_, meaning that any linear operation performed on individual shares is applied to the secret when reconstructed. Multiplication of shares often requires the use of an online secure multiplication protocol, which is fairly efficient for honest-majority situations. In case of a dishonest majority, LSSS shares can be multiplied efficiently using so-called Beaver's multiplication triplets (Becker et al., 2017). These triplets are input/function-independent and can thus be generated in a so-called _offline_ preprocessing phase. This significantly reduces the cost of the more expensive _online_ phase. Examples of popular secret sharing-based schemes are: Shamir's secret sharing (Shamir, 2018), SPDZ (Zhu and Kwiecinski, 2017), and MASCOT (Shamir, 2018). For an up-to-date overview on existing schemes with implementations we refer to, e.g., the MP-SPDZ library (Zhu and Kwiecinski, 2017).
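A minimal sketch of the additive \(n\)-out-of-\(n\) sharing just described is given below (the modulus and party count are illustrative assumptions); note that linear operations can be applied share-wise by each party without any interaction.

```python
# Additive n-out-of-n secret sharing over Z_q with local (linear) computation.
import secrets

q = 2**61 - 1          # a prime modulus (illustrative choice)

def share(s, n_parties):
    shares = [secrets.randbelow(q) for _ in range(n_parties - 1)]
    shares.append((s - sum(shares)) % q)    # last share fixes the sum to s mod q
    return shares

def reconstruct(shares):
    return sum(shares) % q

x_shares = share(12, 3)
y_shares = share(30, 3)
# Each party adds its own shares of x and y locally; no communication is needed
# for linear operations, which is exactly the LSSS property described above.
z_shares = [(a + b) % q for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
print("reconstructed x + y =", reconstruct(z_shares))
```

Multiplications, in contrast, require interaction (or preprocessed Beaver triplets), which is where most of the communication cost of secret sharing-based MPC arises.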
_Garbled Circuits._ Yao's garbled circuits (GCs) (Zhu and Kwiecinski, 2017) are a way to enable two distrusting parties to securely evaluate a function on their private input data. It requires that the underlying function is expressed as a boolean circuit consisting of two-input, one-output gates. One party, the _garbler_, generates a garbled version of the entire circuit, by garbling the individual gates. In the original implementation, a gate is garbled by assigning a unique, random label to the possible values (true, false) of each input wire, and doubly encrypting the possible output values under the corresponding input labels. The garbler randomly permutes the encrypted outputs and shares these with the _evaluator_, accompanied by the random labels corresponding to the private inputs of the garbler. The evaluator then participates in an oblivious transfer (OT) protocol (Krishnan et al., 2017) with the garbler to obtain the labels corresponding to their private inputs. Having received both input labels, the evaluator can correctly decrypt exactly one of the garbled outputs, thereby obtaining the true output bit(s). When a gate output is used as input to another gate, the true output will not be a bit, but rather a new random label that is used in the decryption of this other gate. An alternative construction to Yao's GCs is the BMR framework (Kang et al., 2017). Implementations of both and other frameworks can be found in, e.g., MP-SPDZ (Kang et al., 2017), ABY (Kang et al., 2018), or ABY\({}^{3}\) (Kang et al., 2019).

_PHE-based MPC._ MPC can also be based on PHE. These solutions often consist of interactive protocols, making use of the homomorphic properties of PHE ciphertexts to reduce the communication. Most PHE-based MPC protocols consist of custom protocols for specific computations, e.g., division (Kang et al., 2019) or comparison (Kang et al., 2019). However, MPC frameworks for generic computations also exist; one such example, based on threshold Paillier, is the CDN framework (Paillier, 2019).

### Distributed ledger technologies

DLTs cover a collection of technologies where a (large) number of parties collectively create, maintain, and agree on a shared state or ledger. The shared state can be known completely to all parties, but can also be distributed over the respective parties. A well-known example of a DLT is blockchain, where all parties agree -- through a consensus mechanism -- on a public ledger describing the full system state. Blockchain is best known for realizing decentralized currencies such as Bitcoin (Bitcoin, 2017), or smart contract systems like Ethereum (Ethereum, 2017). However, DLTs also have applications in many other domains: from identity management (Bartos et al., 2018) (e.g., self-sovereign identity (SSI)) to healthcare (Bartos et al., 2018). We specifically observe the proliferation of decentralized apps (or dApps), each of which can be seen as a blockchain-based autonomous application, whose behavior is defined by a collection of scripts (or smart contracts). In lockstep with an increasing number of dApps, we observe an increase in using DLT for decentralized computations. Such decentralized computations can be used in any situation where multiple parties wish to perform a group computation. A group computation can be: (1) a single computation on distributed data; (2) a decentralized computation on centrally stored data; (3) a combination of the two.
Especially of interest for this work are decentralized computations that take data privacy into account, e.g., (Bartos et al., 2018; Kang et al., 2019).

## 4. Classification

In this section, we first summarize the approach for our literature review and then classify the 32 works that we found into three classes. This is followed by a description of the relevant properties that we have analyzed for each of the works included in our study. Finally, we compare these three classes (with respect to the properties) and discuss generic, high-level observations, based on our analysis.

### Literature search

To obtain a comprehensive list of relevant works, we first determined a small set of recent, relevant articles on VPPC. Specifically, we determined at least one very recent 'seed' article for different underlying (privacy-preserving) computation techniques, by querying the Google Scholar and Scopus databases. The most recent, at the time of writing, matching verifiable MPC paper (Kang et al., 2019) was found with the query: _(public verifiability OR public auditability) AND (MPC OR SMPC OR multi-party computation OR secure multi-party computation)_. Two relevant, recent verifiable HE papers (Kang et al., 2019; Kang et al., 2019) were found using: _(verifiable OR public verifiability OR public auditability) AND (homomorphic encryption OR fully homomorphic encryption)_. The final seed paper (Kang et al., 2019) on DLT-based privacy-preserving computations was, up front, known to be recent and relevant, hence no specific query was used. Subsequently, we checked all papers in the reference list of these 'seed' articles to the second degree. Next to this, all articles that referenced our 'seed' articles were also checked. After filtering the articles on their relevance, based on title, abstract, and a quick scan of the content, we found 32 relevant works, each presenting a novel or improved solution for VPPCs.

### VPPC classes

Based on our literature review, we divide the works we found into three main classes of VPPC schemes, based on the underlying (privacy-preserving) computation technique. An overview of the classification of all papers is given in Table 1. This table also includes a further subdivision of each class based on the distinguishing cryptographic technique used to achieve (public) verifiability, along with other properties of each scheme. We consider four classes of VPPC schemes in our review:

_MPC-based VPPC._ This class gathers all solutions that rely on -- pre-existing, adapted, or novel -- MPC schemes for computing functions on distributed private input data.

_HE-based VPPC._ This class gathers all solutions that rely on existing HE for computing functions on private data. The private input data may be distributed over multiple parties, but this need not be the case, e.g., when considering verifiable outsourcing.

_DLT-based VPPC._ This class concerns all schemes that rely on DLT for realizing computation and communication between the different parties. None of these schemes use MPC or HE to create private computations, and for most solutions data is only kept private from observers on the blockchain that do not partake in the computation taking place.

_TTP-based VPPC._ Finally, we consider the solution described in (Kang et al., 2019) separately, since it solely relies on a trusted third party (TTP) to guarantee data privacy.
Whilst it does use zero-knowledge proofs to achieve verifiability, we feel that the use of a single TTP for data privacy makes this scheme unsuitable for a real-world implementation of a VPPC scheme. For that reason, we do not further discuss this class in the remainder of the paper, but solely mention it here as a 'base' solution of sorts. Each of the remaining classes is discussed in detail in Sections 5 to 7, where we also describe the open challenges. A summary thereof is provided in Tables 2 and 3.

### Properties

Below, we describe the properties we consider in our analysis of the considered VPPC schemes.

#### 4.3.1. Security and verifiability

The first category of properties we consider relates to security and verifiability. In what follows, we take a look at the most important security properties for VPPC schemes and provide an intuitive, informal description thereof. An evaluation of all solutions with respect to these properties is provided in Table 1 and discussed in more detail in the sections that come after.

_Data privacy._ For data privacy we follow the definitions as often used for MPC protocols. No party should learn anything about other parties' inputs, other than what can be derived from its assigned output.

_Correctness._ For correctness we also follow the definitions as often used for MPC protocols. Every party that receives a (partial) output obtains a value that is correct with respect to the protocol specification and input data.

_Public verifiability._ The correctness of the result with respect to the protocol specification and input data can be verified by anyone, i.e., also by parties that did not participate in the computation, and does not require knowledge of a secret state. Public verifiability ideally holds even when all computation parties are corrupted.

_Assumptions._ Correctness and data privacy can be based on different kinds of assumptions. Most schemes base their security on (computational) cryptographic assumptions. We classify the used assumptions as either _standard_ or _non-standard_. Standard assumptions are those that have been widely accepted by the cryptographic community, e.g., CDH or DDH; such assumptions have often been accepted for a longer period of time. Next to cryptographic assumptions, some schemes, especially those based on zk-SNARKs, may require a _trusted setup_. If this trusted setup is broken, malicious parties are able to create false proofs and break correctness and verifiability guarantees. Alternatively, some solutions rely on _trusted hardware_ or _TTPs_ for guaranteeing data privacy and/or correctness.

#### 4.3.2. Efficiency and practicality

Practical performance and efficiency also play a big role in which solution is best suited to a certain setting. However, since all solutions use different techniques, and different ways of evaluating the practical performance (if any), it is not feasible to compare the practical performance fairly. Instead, we focus on the asymptotic computation and communication complexities of all schemes and summarize these. An overview thereof is provided in Appendix B. Lastly, we describe in Table 1 whether the original paper includes some form of experimental performance evaluation, whether an implementation is publicly available, or whether there are any other practical aspects that influence the usability of the solution.

### High-level comparison

We will first describe the use cases for which the different classes of VPPC schemes are best used.
Then, we discuss the difference in efficiency trade-offs between the classes. First, we note that DLT-based solutions are best suited for situations with participants that do not have direct communication channels with one another, or for computations with varying groups of participants. A downside of DLT with respect to the other solutions is the limitation in message sizes and verification times that is imposed by the use of a shared ledger. Moreover, the lack of private communication channels often leads to more complicated solutions than are possible when such channels are present. In cases where direct communication channels are available, HE- and MPC-based solutions will generally be more practical and efficient. MPC-based solutions have a strong focus on computations over distributed data; however, they can also be applied in settings of verifiable outsourcing. In that setting, the computations are performed by multiple servers, of which a fraction is required to be honest to guarantee data privacy. The minimum size of this fraction depends on the underlying MPC protocol, and is listed in Table 1. In verifiable outsourcing settings, the use of an HE-based VPPC scheme is often more practical, since computations on HE ciphertexts can be executed by a single central server that need not be trusted to guarantee data privacy. However, HE-based solutions can also be used in settings with distributed data. In that case, all data owners use a distributed, or threshold, key generation protocol to obtain a public-private key pair where the private key is shared over all data owners (Bartos et al., 2016; Bartos et al., 2016). Then, all data owners encrypt their data under the same public key and let the central server perform the actual computation on the received ciphertexts. The main difference between HE- and MPC-based schemes lies in the efficiency of both schemes. Generally speaking, MPC-based schemes require significant amounts of communication, either in multiple online rounds, or in one large offline pre-processing round. The computations themselves are often rather simple and can be executed locally by each party. Alternatively, HE-based schemes can communicate all required ciphertexts in one round and let the computations be executed by a single party. A downside of HE with respect to MPC is the high computational cost of performing HE operations and the large size of HE ciphertexts. For complicated computations MPC will often be the more time-efficient solution; however, the right choice will differ per use case. Additionally, we observe that MPC schemes have already been widely researched for multiple decades and have long reached practical efficiency. The same cannot be said for HE schemes, which have been around for about one decade and have only recently started to reach practical efficiency. Seeing as this is an area of active research, many further optimizations for HE are to be expected. Finally, we remark that the addition of verifiability techniques such as ZKPs, homomorphic MACs, and TEEs can have significant influence on communication and computation efficiency. We discuss these differences in more detail in subsequent sections.

### High-level observations

We make a number of high-level observations regarding (the comparison of) VPPC schemes in general. This predominantly concerns topics that are currently underexposed, yet are very relevant for the adoption of VPPC schemes.
_Input data authentication._ Verifiability of a privacy-preserving computation guarantees that the (public) output is computed correctly from the secret input. In other words, no corrupted party has meddled with the results. However, corrupted parties could still produce incorrect or malicious outputs by providing false inputs. In most real-world use cases, computation correctness alone will not be sufficient, and input authentication will be required. By combining the two, one would protect the entire data pipeline from authenticated input to result. Moreover, such solutions can be used to guarantee reproducibility and comparability, i.e., it can be guaranteed that computations were verifiably executed on the same data. This implies that different computations on the same data will give comparable, and therefore more useful, results. In our analysis we found only one recent solution (Rasmaglia et al., 2017) that focused on both verifiability and input authentication.

_Reusability._ In practice, we often run different algorithms on the same data, or the same algorithms on different data. Logically, the question can be raised whether reusing parts of the (intermediate) data can improve efficiency, reduce communication, or provide guarantees for reproducibility. The solutions we found in our analysis paid little to no attention to such reusability. However, we do observe a number of non-verifiable solutions for reusable MPC (Kaltonen et al., 2016) or reusable GC (Kaltonen et al., 2017) appearing in recent years. With increased efficiency and less communication, reusability is especially beneficial in decentralized settings with large amounts of data or many participants. The benefits become even clearer when considering that VPPC schemes use costly primitives like HE and ZKPs.

_Post-quantum security._ In a world where the threat of quantum computers to classical cryptography is increasing rapidly, post-quantum secure solutions are becoming increasingly important (Kaltonen et al., 2017). Especially in cases where ciphertexts and/or commitments are made publicly available (e.g., with public verifiability), forward security should be guaranteed by using post-quantum secure solutions, to prevent harvest-now-decrypt-later attacks (Kaltonen et al., 2017). Most recent FHE schemes are believed to be post-quantum secure, and information-theoretically secure MPC protocols have been around for a long time. However, many other primitives used to create VPPC schemes are not post-quantum secure. More importantly, security against quantum adversaries was not discussed in the majority of the works we saw, even though it is becoming increasingly relevant by the day.

_Comparing practical performances._ In our analysis, we observed that all works use very different methods to evaluate their asymptotic and/or practical performance (also see Appendix B). A surprisingly large subset of papers does not discuss performance aspects at all. We admit it is no small feat to compare different VPPC schemes, especially those of different classes. However, to make well-informed choices for future research and real-world adoption, it is of the utmost importance to be able to fairly compare different solution approaches, at least at a high level. Making implementations of the majority of the schemes publicly available would also greatly improve the ability to compare the practical performance of the different solutions, and aid real-world adoption.
We specifically mention as an example the MP-SPDZ work (Serban et al., 2017), which presented a well-maintained framework of many recent MPC solutions, making easy comparison of performance and usage for practical use cases available to the research community and beyond.

## 5. MPC-based VPPC

Solutions that use MPC as the privacy-preserving computation mechanism for constructing a VPPC scheme can be divided in three groups. Each group uses a different mechanism for verifiability: succinct ZKPs, non-succinct ZKPs, or other. The final group consists of schemes that use mechanisms different from those of all the other papers in this class. For each group, we sketch the construction used and subsequently evaluate, analyze, and compare the works.

### Non-succinct ZKP

The majority of verifiable MPC solutions uses commitments to (their shares of) input, intermediate, and output values in combination with NIZKs to guarantee correctness. First, we discuss solutions using non-succinct ZKPs.

#### 5.1.1. Description

The first set of works (Serban et al., 2017; Serban et al., 2017; Serban et al., 2017; Serban et al., 2017) uses \(\Sigma\)-protocols and the FS heuristic to obtain NIZKs from standard assumptions. One solution is based on the CDN (Serban et al., 2017) framework. The three other works (Serban et al., 2017; Serban et al., 2017; Serban et al., 2017) use a more efficient SPDZ(-like) protocol for their MPC computation. Two of the SPDZ(-like) protocols (Serban et al., 2017; Serban et al., 2017) additionally rely on MACs similar to those used in the original SPDZ protocol. We also note that (Serban et al., 2017) makes use of post-quantum secure lattice-based commitments. A downside of \(\Sigma\)-protocols is the large proof size and the significant effort required on the verifier's side for larger computations. Hence, recent works (Serban et al., 2017; Serban et al., 2017) often apply the more efficient compressed \(\Sigma\)-protocol theory (Bauer et al., 2017), or \(\Sigma\)-bullets (Serban et al., 2017), while still relying only on standard assumptions. Verification of all protocols in this subclass is done by verifying the entire transcript of the MPC computation. A verifier needs to check the ZKPs at each step to guarantee that each commitment to a new share is computed correctly.

#### 5.1.2. Evaluation

All protocols in this subclass provide correctness and public verifiability even when all parties are corrupted. The number of honest parties required to guarantee data privacy depends on the underlying MPC scheme used. Moreover, all solutions rely only on standard cryptographic assumptions and do not have a trusted setup. The protocols relying on standard \(\Sigma\)-protocols in combination with the FS heuristic (Serban et al., 2017; Serban et al., 2017; Serban et al., 2017) are generally speaking more costly in verification time and proof size than the schemes relying on compressed \(\Sigma\)-protocols (Serban et al., 2017; Serban et al., 2017). The efficiency of the MPC computations and the communication thereof depend mostly on the underlying protocol used. The choice of MPC scheme also depends on the number of dishonest parties one is willing to accept before data privacy is broken. For a comparison of these schemes with respect to their privacy guarantees and asymptotic complexities we refer to Table 1 and Table 4. We note that some of the schemes did not report efficiency metrics, or only provided incomparable experimental evaluations.
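As a minimal, self-contained illustration of the \(\Sigma\)-protocol-plus-Fiat-Shamir machinery these schemes build on (Section 2.1), the sketch below proves knowledge of a discrete logarithm non-interactively; the tiny group parameters are purely illustrative assumptions and offer no real security.

```python
# Toy Schnorr proof of knowledge of x with h = g^x mod p, made non-interactive
# via the Fiat-Shamir heuristic (challenge derived from a hash of the transcript).
import hashlib, secrets

p = 2039          # small safe prime: p = 2*q + 1 with q = 1019 (toy parameters only)
qord = 1019       # prime order of the subgroup of quadratic residues mod p
g = 4             # 4 = 2^2 generates that order-1019 subgroup

x = secrets.randbelow(qord)          # prover's secret witness
h = pow(g, x, p)                     # public statement

def fiat_shamir(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % qord

# Prover: commitment, hash-derived challenge, response.
r = secrets.randbelow(qord)
a = pow(g, r, p)
c = fiat_shamir(g, h, a)
z = (r + c * x) % qord
proof = (a, z)

# Verifier: recompute the challenge and check g^z == a * h^c (mod p).
a, z = proof
c = fiat_shamir(g, h, a)
assert pow(g, z, p) == (a * pow(h, c, p)) % p
print("Schnorr / Fiat-Shamir proof verified")
```

In the verifiable MPC schemes discussed above, proofs of this flavor are attached to the commitments on shares at each step of the protocol transcript, which is why verification cost grows with the computation size for non-succinct constructions.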
An open question is whether we can find a uniform way to compare these schemes with respect to their efficiencies. Finally, we note that there is one work (Serban et al., 2017) that uses a large number of post-quantum secure primitives. Although this does not make the scheme fully post-quantum secure, the authors speculate that it is possible to do so. Unfortunately, we found no other works in this class that discussed the risk of quantum computing on the security of their solutions. Further research is needed to determine which solutions could be made post-quantum secure.

### Succinct ZKP

Next to schemes relying on non-succinct ZKPs, we observe solutions using succinct ZKPs. These schemes work similarly, but are often more efficient with respect to verification and transcript size.

#### 5.2.1. Description

All solutions that we observed (Serban et al., 2017; Serban et al., 2017; Serban et al., 2017) use distributed, often adaptive, zk-SNARKs to assure correctness. Most of these solutions allow for the use of vector commitments, implying that each party publishes one commitment to all their shares. The computation can then be verified by checking the zk-SNARK proof given the parties' commitments and the computed output.

#### 5.2.2. Evaluation

Constructions based on succinct ZKPs are in many ways similar to those based on non-succinct ZKPs. All proofs guarantee correctness and public verifiability even when all parties have been corrupted. The number of honest parties needed to provide data privacy is determined by the underlying MPC scheme as well. The difference between succinct and non-succinct ZKPs lies in the trade-off between security and efficiency. Succinct ZKP-based solutions have very small proof sizes and verification times. Moreover, these solutions work very efficiently with vector commitments, and do not require a verifier to check the entire transcript of the computation, reducing the communication and verification costs even more. This all comes at the cost of relying on non-standard cryptographic assumptions and the fact that most zk-SNARKs require a trusted setup. It is an open question who should be involved in this trusted setup to guarantee trust in the correctness proofs. The succinct schemes we found in our literature review compare as follows. The construction of (Serban et al., 2017) only guarantees data privacy given an honest majority of computation parties. The other works can be used with any LSSS scheme, offering the users a choice in the number of parties that are required to be honest; this is clearly the more flexible choice of the two. The main difference between the schemes is the type of distributed zk-SNARK that is used. (Serban et al., 2017) uses a zk-SNARK that is non-universal, i.e., a new setup is needed for each new computation; moreover, it requires a trusted setup. (Serban et al., 2017) does use a universal zk-SNARK, making it possible to perform one universal trusted setup, rather than one for each computation. Finally, the solution described in (Serban et al., 2017) allows for the usage of any zk-SNARK proof that can be computed in a distributed fashion. This allows for flexibility and adoption of novel schemes that may be more efficient or have better security assumptions. An open question is whether zk-SNARK schemes with a trustless setup or post-quantum security, e.g., Fractal (Serban et al., 2017), can be used in this way.

### Other

Below, we describe the remaining solutions that did not fit any other category.

#### 5.3.1.
Description

## 6. HE-based VPPC

For each group, we sketch the construction used and subsequently evaluate, analyze, and compare the works.

### Homomorphic MACs

We observe three different approaches for combining HE ciphertexts and MACs, agreeing with (Kumar et al., 2017): (1) Encrypt-then-MAC; (2) Encrypt-and-MAC; (3) MAC-then-Encrypt.

#### 6.1.1. Description

The _Encrypt-then-MAC_ approach is used in (Kumar et al., 2017). In this construction, one first homomorphically encrypts each input and only then computes a homomorphic MAC of each ciphertext. Therefore, the MAC does not have to be hiding. The function is then computed by executing it on both the input MACs and the input ciphertexts. A correct computation can be verified before decryption, by verifying that the resulting MAC matches the resulting ciphertext. _Encrypt-and-MAC_ approaches have a MAC and ciphertext that are independent of one another; both are computed directly from the plaintext. The MAC therefore needs to be both hiding and support the ciphertext operations required for the FHE scheme. Also here, the requested function is computed on both the input MACs and the input ciphertexts. A correct computation can be verified before decryption, by verifying that the resulting MAC matches the resulting ciphertext. We found the most common approach of the three to be _MAC-then-Encrypt_, where the MAC is computed from the plaintext and concatenated to the plaintext before encryption. This removes the need for the MAC to be either hiding or to support complex ciphertext maintenance operations. The MAC now only needs to support operations in the plaintext domain. In this case, the function is computed by executing it only on the input ciphertexts. A correct computation, however, can only be verified after decryption, by verifying that the decrypted MAC matches the decrypted result.

#### 6.1.2. Evaluation

We only found one occurrence of the _Encrypt-then-MAC_ approach, dating from 2014 (Kumar et al., 2017). The presented solution is rather limited, in that it only supports the computation of quadratic circuits. Due to the lack of other, more recent, works using this approach, it seems doubtful whether this approach can be improved to support general circuits. One would need to design a MAC scheme that supports all operations of the HE scheme used, including ciphertext maintenance operations. It is unclear whether this is possible at all. Solutions using the _Encrypt-and-MAC_ approach suffer from similar issues. However, the homomorphic MAC used in (Kumar et al., 2017) does support any amount of additions and multiplications. Unfortunately, it does not support ciphertext maintenance operations, making bootstrapping impossible. This severely limits the multiplicative depth of the function to be computed, if one wants the HE computations to remain practical. To overcome this problem one would need to design a homomorphic MAC supporting these more complex HE operations that is also hiding. The most promising, and most common, approach seems to be _MAC-then-Encrypt_, since in this case the homomorphic MAC only needs to support plaintext addition and multiplication. The first solutions of this type were still limited by technical constraints. First, (Kumar et al., 2017) introduced a scheme for verifying computations over a boolean circuit. However, this approach was no more efficient than running the computation itself.
An improvement upon this scheme, with more efficient verification of HE computations on arithmetic circuits, is proposed in (Kumar et al., 2017). The approach is however limited to arithmetic circuits of bounded depth. (Kumar et al., 2017), a more recent work using the same approach, does not have any of these limitations and can thus be used to verify HE evaluations of any arithmetic circuit. Whilst the _MAC-then-Encrypt_ approach is the most practical of the three, it should be noted that contrary to the other methods, the MAC can now only be checked after decryption and not before. Implying that some information could be leaked if the computation is incorrect. Finally, we note that none of these schemes are publicly verifiable, due to needing the full secret key to verify the MAC. ### ZKPs Only in recent years the first solutions that use ZKPs for verifiable computations on HE ciphertexts have started to appear. #### 6.2.1. Description All solutions that we observed used zk-SNARKs to achieve verifiability. The (distributed) computation is performed directly on the homomorphic ciphertexts. The resulting ciphertext is accompanied by a zk-SNARK proof verifying that it is indeed the result of applying the function to the input ciphertexts. The first solutions (Kumar et al., 2017; Kumar et al., 2017) both require homomorphic hashing to make the ciphertexts fit inside the groups that are used by the specific zk-SNARKs used. To do so, (Kumar et al., 2017) requires very specific setup parameters for the HE scheme, leading to inefficient HE operations. (Kumar et al., 2017) improves upon this work by not restricting the HE parameter space. However, both solutions are limited to'simpler' FHE schemes, i.e., without complex ciphertext maintenance operations, since these are not supported by the homomorphic hashing scheme used. (Kumar et al., 2017) proposes a method that does not rely on homomorphic hashes, but rather uses Rinocchio: a zk-SNARK that natively operates on ring elements, i.e., HE ciphertexts. Alternatively, one can also directly encode the HE operations into an arithmetic circuit suitable for any circuit-zk-SNARK. Experimental results for this approach are discussed in (Kumar et al., 2017). #### 6.2.2. Evaluation Two of the early solutions (Kumar et al., 2017; Kumar et al., 2017), suffer from drawbacks that make them highly impractical to use. The restriction on the HE parameters in (Kumar et al., 2017) makes the HE computations too slow to be practical. And the lack of support of ciphertext maintenance operations in both schemes, puts an implicit bound on the multiplicative depths of the functions that can be computed. The alternative solution, of translating the HE operations, including ciphertext maintenance operations directly into an arithmetic circuit (Kumar et al., 2017), makes it possible to support modern, efficient HE schemes. However, the complexity of HE operations, and the fact that HE ciphertexts do not naturally fit zk-SNARK groups, makes proof generation costly, and impractical for realistic use cases. The most promising solution using ZKPs is Rinocchio (Kumar et al., 2017), a zk-SNARK that natively operates on HE ciphertexts, thereby drastically reducing prover time for complex computations (Kumar et al., 2017). However, an open question for Rinocchio is how to perform ciphertext maintenance operations. This generally necessitates operations not supported by rings or switching between different rings, which is not supported by Rinocchio. 
An advantage of zk-SNARK-based approaches is the fact that the proofs are publicly verifiable, unlike homomorphic MACs or TEE attestation. Moreover, proof sizes are succinct, verification times small, and correctness of the resulting ciphertext can be verified before decryption. A downside of zk-SNARKs is their reliance on non-standard cryptographic assumptions, and the fact that most schemes require a trusted setup. ### TEEs HE computations can also be verified by executing them inside a TEE and using remote attestation to guarantee correctness. #### 6.3.1. Description [66] presents such a construction for FHE-inside-TEE. Clients send their encrypted data to a TEE in the cloud, and only need to attest that the TEE performs exactly the desired computation. This verification can take place before decrypting the results, allowing all parties to check correctness beforehand. #### 6.3.2. Evaluation The biggest advantage of using a TEE is computation time, since TEEs natively support most operations needed for HE computations. Even though such operations will be slower than on a regular CPU, they are faster than computing, e.g., a zero-knowledge proof. Optimizing FHE computations for TEEs can lead to notable performance improvements [82], making the need for research into this topic clear. The use of a TEE guarantees data privacy and correctness of the computation, given that one trusts the specialized hardware being used. It is however unclear how to achieve public verifiability, i.e., how attestation can be (efficiently) performed by external parties, or long after the computation has been executed. ### Other Below, we describe the remaining solutions that did not fit in any of the above categories. #### 6.4.1. Description [62] presents an approach for verifiable FHE, they call 'blind hashing', where the hash of the plaintext is appended to the plaintext before encryption, i.e., this is similar to MAC-then-Encrypt. The hash is subsequently verified by also computing the hash of the decrypted result and ensuring it equals the hash that was included in the ciphertext. [44] proposes to use Yao's GC for one-time verifiable computations between a client and server. Subsequently, they propose to add FHE on top to transform this one-time approach into a reusable one, allowing multiple evaluations of the same function on different input data. It should be noted that a different computation needs a new setup, and computing the setup is very inefficient. #### 6.4.2. Evaluation The 'blind hashing' approach [62] is a preliminary publication that lacks a security proof/reasoning, and only explains how to perform a single matrix multiplication. It is unclear how this approach extends to other operations, such as needed for ciphertext maintenance and more complex circuits. We deem an actual MAC-then-Encrypt approach more viable than this approach purely based on hashing. Being published in 2010, makes the approach using Yao's GC [44] the oldest verifiable HE approach we found. Whilst this scheme is efficient, the fact that each different computation requires the computation of a rather inefficient setup, makes it impractical for real-world usage. Our literature search showed no later work using a similar approach. All together this makes us conclude that this approach is not as viable as other HE-based solutions. ### Comparison Out of the different methods for HE-based VPPC schemes we observe three predominant categories, either using homomorphic MACs, zk-SNARKs, or TEEs. 
Of the homomorphic MAC approaches, the MAC-then-Encrypt approach seems to be the most promising, since it puts the fewest requirements on the homomorphic MAC used. A downside compared to the other methods is that one does need to decrypt the ciphertext before verifying the MAC; it is an open question how to resolve this issue. Another problem with MAC-based approaches is the fact that one needs to know the secret key to allow for verification of the MAC, making public verifiability not directly possible. We expect it to be possible to solve this issue using other cryptographic techniques, such as ZKPs or MPC, but further research into this topic is needed. zk-SNARK-based approaches do offer public verifiability. However, current solutions suffer from efficiency issues, making them impractical for larger computations; especially when ciphertext maintenance operations are required. The overhead caused by proof computation is much larger than the overhead of homomorphic MACs or TEE attestation. However, in cases where public verifiability is a requirement, zk-SNARKs are currently the only solution. We do expect further improvements in zk-SNARKs to allow for more efficient proof generation, making this method catch up regarding efficiency. Another downside of zk-SNARKs with respect to MACs is the fact that zk-SNARKs often require a trusted setup and are based on non-standard assumptions. Moreover, we speculate that current (non-succinct) ZKP schemes, based on standard assumptions, require proof sizes and computational efforts that are too large to be feasible. Further research into efficient versions of those schemes with respect to HE operations could make such solutions possible. TEE-based solutions seem to be the most practical of the three, especially with optimizations in place. A downside of TEE-based solutions, with respect to the other solutions, is the requirement to trust specific hardware. This trust may not always be present in practice. Another downside of the TEE-based approach with respect to zk-SNARKs is the lack of public verifiability. All in all, when public verifiability is a requirement, currently zk-SNARKs seem to be the best solution. However, further research into the other directions is expected to lead to more efficient alternatives. When public verifiability is not a requirement, the main trade-off to be made is between trust in specific hardware and efficiency. This trade-off will be use case dependent and cannot be made in general. _Open challenges._ * Solutions relying on ZKPs from standard cryptographic assumptions and without a trusted setup. * More efficient ZKPs that natively support ring operations; with a specific focus on ciphertext maintenance operations and ring switching operations. * Public verifiability using homomorphic MACs or TEEs. * Realizing MAC-then-Encrypt constructions for which the MAC can be verified before decrypting the results. * Optimization of HE operations inside TEEs. ## 7. DLT-based VPPC VPPC schemes that use DLT as the basis for their distributed computations can be divided into three groups, based on the mechanism used for verifiability. We sketch the construction used in each group and evaluate pros and cons, based on their different properties. ### Succinct ZKPs While most DLT applications using succinct ZKPs focus purely on financial transactions, e.g., Zerocash (Zerocash, 2017), we also identified works that focus on privacy and verifiability for smart contracts. #### 7.1.1.
Description Hawk (Hawk, 2017) is a smart contract system that can be initialized over any decentralized cryptocurrency. It has a basic protocol for transferring coins to another party anonymously in the same fashion as Zerocash. However, it also allows for combining a coin transfer with programmable logic, in which the function to be computed is provided as a smart contract. To achieve this, all users participating do not directly spend a private coin, but commit both a coin and their private function input to a smart contract, along with a zk-SNARK proof of correct construction. Each user also includes an opening for the function input encrypted under the public key of a trusted manager. The actual computation of the function takes place off-chain by this trusted manager, who first opens the commitments to all inputs. The manager then computes the function and publishes the output, along with a zk-SNARK proof and a spending distribution of the committed coins. Thereby completing both the transaction and computation, whilst keeping the inputs private, i.e., only known by the trusted manager. ZEXE (Zerocash, 2017) uses a different construction, called decentralized private computation (DPC), that does not rely on a trusted manager for input privacy, and keeps the function itself private too. In a DPC each blockchain entry is a record containing input data, a birth predicate, and a death predicate. The birth predicate defines under which conditions this record was constructed. The death predicate defines under which conditions a record can be consumed. Any user of the blockchain system can consume any records for which the death predicates can be satisfied and use these to create new records, with new payload and predicates, i.e., perform a state transition. Since death predicates can put conditions on all details of the newly created records and on the users that can perform a state transition, we can use this system to model a smart contract. ZEXE guarantees anonymity of the payload, or input data, by only adding commitments to the records on-chain. Any valid transaction that consumes and creates records, has to add a zk-SNARK proof attesting to the correct creation of the new records, and the fact that the user also knows a valid proof for the predicates of each consumed record. #### 7.1.2. Evaluation By using zk-SNARKs, Hawk (Hawk, 2017) offers public verifiability of correctness of the executed computations, given that the used zk-SNARK is secure, irrespective of the behavior of the trusted manager. Moreover, since zk-SNARKs have very small proof sizes and verification times, they are suitable to use in a DLT environment. Privacy of function inputs is guaranteed, but is dependent upon the trusted manager not revealing any of these inputs, which is unrealistic in practice. Zexe (Zerocash, 2017) guarantees data privacy without such a trusted manager. Moreover, it also adds function privacy to any other observer on the blockchain. Only the party consuming a record needs to know the functions, or predicates, used to create this record. A downside is that, where in Hawk a complete computation is performed at once, in ZEXE a computation is performed per party, given the (intermediate) state. This leads to longer computation times. Moreover, ZEXE does not by default keep record data private from the party consuming the record. One would still have to rely on HE- or MPC-style computations to achieve this. Public verification of the correctness of the executed computations is very efficient. 
Next to that, the actual function can be computed locally. However, most computation time will be consumed by proof generation. In the case of ZEXE, waiting on verification of previous blocks with input records for the next step of the computation might lead to a large amount of latency on top of this. Finally, we note that zk-SNARKs are based on non-standard cryptographic assumptions, which may be undesirable in practice. Next to this, most zk-SNARKs require a trusted setup, which, if broken, could be used to create false proofs. ### Non-succinct ZKPs We also found one solution based on non-succinct ZKPs, making a different trade-off between security and efficiency. #### 7.2.1. Description Zether (Zether, 2017) is a privacy-preserving payment system for smart contract platforms using \(\Sigma\)-bullets. \(\Sigma\)-bullets combine the optimizations of bulletproofs (Zerocash, 2017) with classic \(\Sigma\)-protocol theory, to create more efficient \(\Sigma\)-style proofs. Where non-private cryptocurrencies require access to the transaction details to verify each transaction, Zether only adds commitments to the transaction details on the public ledger. These commitments are accompanied by \(\Sigma\)-bullets, that attest to exactly the predicates that are normally checked in verification. Rather than checking these predicates directly, any verifier can now check correctness of the provided proof to obtain the same guarantees. #### 7.2.2. Evaluation The advantage of using \(\Sigma\)-bullets rather than zk-SNARKs is that they only rely on standard security assumptions and do not require a trusted setup, leading to stronger privacy and correctness guarantees, whilst still providing public verifiability. This comes at the cost of more expensive verification and larger proof sizes. Both of these increase with the size of the computation that is executed. Zether (Zether, 2017) only offers private coin transfer, and does not support generic computations. While \(\Sigma\)-bullets can be used for generic computations (Blek et al., 2017), this would lead to verification times and proof sizes that are likely too large to be used in a DLT setting. It is doubtful whether non-succinct ZKPs are suitable for generic computations in a DLT setting. ### TEEs Ekiden (Ekiden, 2015) is a blockchain solution where smart contracts over private data are executed off-chain. #### 7.3.1. Description In Ekiden (Ekiden, 2015), specialized compute nodes with TEEs execute the smart contract and compute the results. The consensus nodes use remote attestation to verify these results and update the smart contract results on-chain accordingly. #### 7.3.2. Evaluation Rather than relying on expensive cryptographic machinery or trusted parties, Ekiden (Ekiden, 2015) uses trusted hardware to guarantee correctness, whilst maintaining privacy. Whilst TEEs are slower than computations on regular CPUs, they are multiple orders of magnitudes faster than generating zero-knowledge proofs (Ekiden, 2015). This comes however at the cost of relying upon the privacy and correctness guarantees of the trusted hardware. If the hardware is compromised or faulty, privacy and correctness could be broken as a whole. To what extent one can and should trust TEE hardware is an open question and might require the technology to be around for longer to gain more trust. Another open question is that of public verifiability, TEEs have very different remote attestation mechanisms. 
It is unclear whether one can achieve public verifiability, or even verify correctness long after the computation has taken place. Moreover, since not all parties may have the means or access to perform remote attestation, Ekiden puts trust in a select group of nodes to perform verification. This may not be desirable in all cases. ### Comparison DLT applications often require very small verification times and small message sizes in order to make block verification practical, and keep the size of the ledger manageable. Due to the larger verification time and proof sizes of non-succinct ZKP approaches, we do not expect such solutions to be feasible in a DLT setting for generic computations. When considering ZKPs, zk-SNARKs seem the logical choice in a DLT setting. Especially the DPC approach as described by ZEXE seems very promising. Not only does it provide data privacy and public verifiability, but it also guarantees function privacy, something that has not been observed in any of the other classes. Open questions, however, exist regarding how to improve the efficiency of composable zk-SNARKs and how to remove trusted setups. Moreover, DPC-based approaches do not hide the input data (or function to be computed) from the other parties involved in the computation. This could be solved using, e.g., an MPC-based computation. However, a purely MPC-based approach might be more efficient in that case. There should be clear, additional benefits of using DLT before choosing it over other approaches. An alternative to a ZKP-based approach is to perform computations using a TEE. This is more efficient than using ZKPs, but does require the user to have trust in and access to specific trusted hardware. It is doubtful whether this is practical in a DLT setting, mostly due to the fact that TEE hardware is often only available centrally, thereby defeating the purpose of DLT. Both promising approaches require trust in other means than standard cryptographic assumptions. We do not expect this to be circumventable in the near future, due to the current requirements on message sizes and verification times in DLT settings. #### Open challenges * More efficient composable zk-SNARK constructions. * Schemes using ZKPs without trusted setup. * Efficient solutions to keep data and/or functions private from other computation parties. ## 8. Conclusion We presented a systematic overview of VPPC solutions applicable to scenarios with distributed data, and identified three main classes: MPC-, HE-, and DLT-based. Each class is best applicable in different scenarios, depending on the available infrastructure and resources. Next, we analyzed the solutions in each class by dividing the classes based on the verifiability paradigm used, and identified the most pressing open challenges. A high-level summary thereof is depicted in Tables 2 and 3. For DLT-based approaches, the use of succinct ZKPs for verifiability seemed the most promising approach, given the constraints on verification time and proof size. Constructions using ZKPs were also the most promising in MPC-based solutions. We note that approaches using succinct ZKPs are significantly more efficient than those based on non-succinct ZKPs. However, this comes at the cost of non-standard security assumptions and trusted setups. Finally, for HE-based approaches, constructions using MAC-then-Encrypt with homomorphic MACs and TEEs seem most promising, although both approaches have some practical limitations.
\begin{table}
\begin{tabular}{l l l l l l l}
\hline
Solution paradigm & & & & & & \\
\hline
MPC-Non-succinct ZKP & Yes & Yes & High & Medium & High & S \\
MPC-Succinct ZKP & Yes & Yes & Low & Medium & Low & NS+TS \\
HE-MAC & Partial & & & Low & Low & S \\
HE-ZKP & Partial & Yes & Low & High & Low & NS+TS \\
HE-TEE & Yes & No & Low & Low & Low & TH \\
DLT-Non-succinct ZKP & No & Yes & High & Medium & High & S \\
DLT-Succinct ZKP & Yes & Yes & Low & Medium & Low & NS+TS \\
DLT-TEE & No & No & Low & Low & Low & TH \\
\hline
\end{tabular}
* \({}^{\dagger}\) S = Standard; NS = Non-Standard; TS = Trusted Setup; TH = Trusted Hardware
\end{table} Table 2. High-level comparison of solution paradigms.
\begin{table}
\begin{tabular}{l l}
\hline \hline
**Challenge** & **Solution paradigm(s)** \\
\hline
Quantum secure & All\({}^{\dagger}\) \\
Modularity & All\({}^{\dagger}\) \\
Dealing with dishonest inputs & All\({}^{\dagger}\) \\
More efficient proof generation & Any\({}^{\ddagger}\)-ZKP \\
More efficient verification & Any\({}^{\ddagger}\)-Non-succinct ZKP \\
Smaller communication size & Any\({}^{\ddagger}\)-Non-succinct ZKP \\
Dealing with or removing trusted setup & Any\({}^{\ddagger}\)-Succinct ZKP \\
Combine (vector) commitments with ZKP & MPC-ZKP \\
Support ciphertext maintenance operations & HE-MAC \\
Public verifiability & DLT-TEE; HE-TEE; HE-MAC \\
\hline
\multicolumn{2}{l}{\({}^{\dagger}\) All = \{MPC, HE, DLT\} + any verifiability paradigm; \({}^{\ddagger}\) Any = \{MPC, HE, DLT\}} \\
\end{tabular}
\end{table} Table 3. Summary of open challenges per solution paradigm. ## Acknowledgments The authors thank Mohammed Alghazwi, Vincent Dunning, and Berry Schoenmakers for their feedback on initial versions of this paper.
2301.02621
Deep leakage from gradients
With the development of artificial intelligence technology, Federated Learning (FL) model has been widely used in many industries for its high efficiency and confidentiality. Some researchers have explored its confidentiality and designed some algorithms to attack training data sets, but these algorithms all have their own limitations. Therefore, most people still believe that local machine learning gradient information is safe and reliable. In this paper, an algorithm based on gradient features is designed to attack the federated learning model in order to attract more attention to the security of federated learning systems. In federated learning system, gradient contains little information compared with the original training data set, but this project intends to restore the original training image data through gradient information. Convolutional Neural Network (CNN) has excellent performance in image processing. Therefore, the federated learning model of this project is equipped with Convolutional Neural Network structure, and the model is trained by using image data sets. The algorithm calculates the virtual gradient by generating virtual image labels. Then the virtual gradient is matched with the real gradient to restore the original image. This attack algorithm is written in Python language, uses cat and dog classification Kaggle data sets, and gradually extends from the full connection layer to the convolution layer, thus improving the universality. At present, the average squared error between the data recovered by this algorithm and the original image information is approximately 5, and the vast majority of images can be completely restored according to the gradient information given, indicating that the gradient of federated learning system is not absolutely safe and reliable.
Yaqiong Mu
2022-12-15T08:06:46Z
http://arxiv.org/abs/2301.02621v1
# Deep leakage from gradients ###### Abstract With the development of artificial intelligence technology, Federated Learning (FL) model has been widely used in many industries for its high efficiency and confidentiality. Some researchers have explored its confidentiality and designed some algorithms to attack training data sets, but these algorithms all have their own limitations. Therefore, most people still believe that local machine learning gradient information is safe and reliable. In this paper, an algorithm based on gradient features is designed to attack the federated learning model in order to attract more attention to the security of federated learning systems. In federated learning system, gradient contains little information compared with the original training data set, but this project intends to restore the original training image data through gradient information. Convolutional Neural Network (CNN) has excellent performance in image processing. Therefore, the federated learning model of this project is equipped with Convolutional Neural Network structure, and the model is trained by using image data sets. The algorithm calculates the virtual gradient by generating virtual image labels. Then the virtual gradient is matched with the real gradient to restore the original image. This attack algorithm is written in Python language, uses cat and dog classification Kaggle data sets, and gradually extends from the full connection layer to the convolution layer, thus improving the universality. At present, the average squared error between the data recovered by this algorithm and the original image information is approximately 5, and the vast majority of images can be completely restored according to the gradient information given, indicating that the gradient of federated learning system is not absolutely safe and reliable. Federated Learning, CNN, reconstruction attack, Gradient feature ## 1 Introduction In modern Federated Learning (FL) systems [1-3], model updating by exchanging gradient information among multiple participants is a very common approach. The user data of each participant is always stored locally, and only the gradient information is propagated between different models. This type of algorithm does not need to establish a dedicated central node for data processing, which protects the privacy of users and the local model can be fully trained with the help of a federated learning system. For example, medical systems can share the same data model while protecting the patient's private information [4]. Therefore, it is not easy to extract the data information of local models from the gradient, which has long been believed to be able to be propagated among different models without worrying about privacy leakage, but in fact, stealing local information from the gradient is still traceable. With the rapid development of AI technology, federation learning models are increasingly used as a fundamental technique in AI technology. Federal learning keeps the data of each participant locally, and the databases of each participant remain independent of each other during modeling, while the information interaction during joint training is encrypted to ensure the confidentiality and efficiency of the system. In addition, the federated learning system can guarantee that the training effect of the local training model is almost the same as that of the original centralized training model. 
Nowadays, the development of artificial intelligence and deep learning is rapidly changing, and federated learning solves the problem that data from all parties in the previous centralized model can only be used at the central node, and ensures the privacy and confidentiality of users at each node. Federated learning is suitable for training models with large volumes of data and can be applied in a variety of contexts. Nowadays, the concept of smart cities has gained widespread attention, and federal learning models have greatly contributed to the construction of smart cities. In terms of economy and finance, it can combine data from various banks to build a model of economic fluctuation, which can better predict the future economy, etc. In terms of politics and people's livelihood, it can build a bridge between governments at all levels and the masses, realize effective information sharing between governments and the masses, build a good platform for communication between the masses and the government, and help various governments to build a good system of people's city built by the people, so that the authorities can do their work more efficiently and the masses can do their work more conveniently, etc. efficient, more convenient for the masses, etc. The high efficiency and confidentiality of the federal learning system make it more and more widely used. However, the confidentiality of the federal model needs to be further explored, and if the data involved in the training can be restored by some means, it proves that the system still needs to be improved. With the continuous progress of artificial intelligence, the protection of Internet privacy has gradually become a hot topic of discussion. By studying the vulnerability of the system, the confidentiality of the federation learning system is gradually improved, which can also provide some new ideas for the protection of Internet privacy nowadays. This thesis focuses on the gradient information leakage problem in convolutional neural network-based federal learning systems, and explores how to restore the original data image from the gradients containing very little information. After introducing the basic principles, the effect of Deep Leakage from Gradients (DLG) algorithm to restore the original image is studied, and certain improvements are made based on it, and finally the corresponding conclusions are drawn by comparison. The structure of the thesis is as follows: Chapter 1 briefly introduces the research background, status and significance of this thesis, and briefly composes the content to be studied in this thesis. Chapter 2 briefly introduces the federal learning system, the structure, functions and common models of CNN, and some attack algorithms against the federal learning system. Chapter 3 mainly introduces the general principle of local information leakage, and the working principle and derivation process of DLG algorithm. Chapter 4 mainly shows the implementation of the depth gradient algorithm, analyzes the shortcomings of the algorithm, proposes improvement methods and compares them. Chapter 5 mainly integrates and summarizes the research content of this topic, presents the shortcomings and areas for improvement, and provides an outlook for the gradient attack algorithm for FL. 
## 2 Related Technologies This section introduces the basic concepts and related techniques needed to understand the reconstruction attack based on gradient features, including the federated learning model, the convolutional neural network structure used to train the model, the related network models, the role of the functions involved in the network, and some methods for gradient-based attacks. ### Federated Learning Model The system for federated learning [22] first utilizes an encryption-based user sample alignment technique: the data owners identify the users they have in common while keeping the data of their respective users secure, so that the features of these common users can be federated for modeling; the training process then requires the federated model to protect the privacy of each local database. First, the federated model sends a public key to each local participant to ensure that the local data is encrypted before any data exchange takes place. After that, each local participant transmits its data to the joint model in encrypted form: the local participant performs the initial computation, calculates the gradient based on the label values, and then encrypts the gradient and transmits it to the joint model. The joint model combines the gradients calculated by each local model to obtain the total gradient value, decrypts it and sends it to each local model, so that every local model can update its own parameters according to the new gradient value and improve the optimized model. The above process is repeated until the gradient is sufficiently close to the set value, which completes the training of the whole model. During the model training process, the data of each data owner is not exposed to the federated model or to the other local models, and the data exchange during training does not lead to data privacy threats. As a result, all parties are able to cooperate in training the model with the help of the federated learning model.
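As a minimal illustration of the gradient exchange just described, the following Python sketch shows a server averaging the gradients reported by several local participants and sending the updated parameters back; the linear model, the plain (unencrypted) exchange and the learning rate are simplifying assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                               # shared model parameters

# Each participant holds its own private data (features X, labels y).
local_data = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(4)]

def local_gradient(w, X, y):
    """Gradient of the mean squared error of a linear model, computed locally."""
    return 2 * X.T @ (X @ w - y) / len(y)

for _ in range(100):                          # federated training rounds
    grads = [local_gradient(w, X, y) for X, y in local_data]   # computed locally
    total_grad = np.mean(grads, axis=0)       # server combines the local gradients
    w -= 0.05 * total_grad                    # updated parameters are sent back

print(w)                                      # jointly trained model; raw data never shared
```

In the actual federated learning system described above, the gradients would additionally be encrypted before being sent to the joint model.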
### Convolutional Neural Networks Convolutional Neural Network (CNN) is a deep learning model inspired by biological neural networks [23], formed by interconnecting multiple layers of neurons, where the number of inputs to each layer is equal to the number of neurons in the previous layer, and each neuron can receive multiple inputs but produces only one output. This type of network is often applied in image processing, and the structure and role of each layer are described next [24]. **Input Layer.** Convolutional neural networks first need to convert image information into input data. The color of a pixel in a color picture consists of three attributes: red, green and blue, called the three RGB channels, and the number of pixels in each row and column of a picture is its resolution. For black and white pictures, however, the color of a pixel is determined only by its grayscale value. Assume that the value of each channel lies between 0 and 255. A color photo with a resolution of 100\(\times\)100 can then be converted to a tensor of shape (100,100,3), and a black and white photo of the same size to a tensor of shape (100,100,1). The main work of this layer is to pre-process the original image, which falls into three main categories: Centering, which subtracts the mean of each dimension from that dimension of the input data, so that the center of the data lies at the zero point; Normalization, which makes the standard deviation of the data equal to 1 and reduces the effect of the different value ranges of the data; and PCA and whitening, where PCA reduces the correlation between the feature values and strives to eliminate the correlation between image bands, and whitening weakens the effect of the magnitude along the feature axes of the data. ### Convolutional Layer. The three hyperparameters of the convolution kernel are Stride, Zero Padding and Depth. Stride is the number of positions the data window moves at each step, which in Figure 2-3 is equal to 1. Zero padding protects the edge information of the image from being blurred or lost during the network training process. Depth is the number of convolution kernels, which should be the same as the number of neurons in the next layer. The output size of the convolutional layer along each dimension is \((\text{input size}-\text{kernel size}+2\times\text{zero padding})/\text{stride}+1\). For a 64\(\times\)64\(\times\)3 input and 10 kernels of size 5\(\times\)5: without parameter sharing, 10\(\times\)64\(\times\)64\(\times\)5\(\times\)5\(\times\)3=3072000 parameters are required, while with parameter sharing only 10\(\times\)5\(\times\)5\(\times\)3=750 parameters are required. It can be seen that parameter sharing reduces the number of features captured by the convolution kernels, which can lead to the loss of local features if the image is large. An effective way to mitigate this problem is to use multiple convolution kernels in each convolutional layer. Figure 1: Two-dimensional convolution example. Figure 2: Feature Mapping. **Activation Layer** The role of this layer is, as the name suggests, to take the output of the convolutional layer and process it nonlinearly. Commonly used nonlinear mapping functions are introduced in the following. Sigmoid function. Advantages: its output lies in the range (0, 1), and it is simple and easy to understand. Disadvantages: large inputs may saturate the neuron, so that gradient information can no longer be propagated, and the output of the function is not centered at zero. Figure 3: Sigmoid function. **Pooling Layer** The pooling layer, also called the subsampling or downsampling layer, is used for feature extraction; it reduces the number of neurons to some extent and prevents overfitting. This layer removes redundant information and retains only key features, which improves robustness. Discarding part of the input information in this way cuts the number of parameters, making the network less computationally burdensome, while keeping the important features unchanged under cropping, stretching, scaling, etc. One variant is average pooling, which sums the feature points in the neighborhood and then divides the total feature value equally among the feature points; the other is maximum pooling, which, as the name implies, discards all smaller feature values in the neighborhood and keeps only the largest one. Pooling often introduces errors in the estimated feature values: first, an increase in the variance of the estimate; second, a shift in the mean of the estimate. According to the prevailing theory, in image processing the first kind of error is mostly handled with average pooling, which moderates the limitation of the neighborhood size and reduces the variance, thus making the image background clearer; the second kind of error is mostly handled with maximum pooling, which largely ignores the parameter error of the convolutional layer and preserves the accuracy of the mean, thus preserving the texture of the image. Therefore, neither of these two methods can be dispensed with in convolutional neural networks.
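To make the size and parameter arithmetic of the convolutional and pooling layers concrete, the following minimal PyTorch sketch builds one convolutional layer and one max-pooling layer and prints the resulting shapes and parameter count; the 64\(\times\)64\(\times\)3 input and the ten 5\(\times\)5 kernels mirror the worked example above, and the code is only an illustration, not part of the thesis' implementation.

```python
import torch
import torch.nn as nn

# A 64x64 RGB image (batch of 1), matching the worked example above.
x = torch.randn(1, 3, 64, 64)

# 10 kernels of size 5x5, stride 1, no zero padding.
conv = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=5, stride=1, padding=0)
pool = nn.MaxPool2d(kernel_size=2, stride=2)

y = conv(x)          # output size per side: (64 - 5 + 2*0) / 1 + 1 = 60
z = pool(y)          # max pooling halves each spatial dimension: 30x30

print(y.shape)       # torch.Size([1, 10, 60, 60])
print(z.shape)       # torch.Size([1, 10, 30, 30])

# With parameter sharing each kernel has 5*5*3 weights, so 10 kernels
# give 10*5*5*3 = 750 weights (plus 10 biases).
print(sum(p.numel() for p in conv.parameters()))  # 760 = 750 weights + 10 biases
```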
#### 2.2.2 Flatten layer and fully connected layer The role of the flatten layer is to flatten multidimensional data into one-dimensional data. The fully connected layer expects one-dimensional input, so flattening the data before feeding it in is essential. The fully connected layer is often used as the closing layer in a convolutional neural network structure, using different activation functions to match different classification requirements. #### 2.2.3 Output Layer The role of this layer is to output the final target result. #### 2.2.4 Structure of convolutional neural networks [26] The layers introduced above are combined into the complete convolutional neural network structure [27]. Figure 4 shows the basic structure of a CNN, where each convolutional layer is followed by an activation function and subsampling, and two fully connected layers then give the predictions. Figure 4: Basic structure of CNN ### Common models of convolutional neural networks Many models of convolutional neural networks exist, and several commonly used models are presented here. LeNet. LeNet is mainly used to identify and classify handwritten characters, and it has an accuracy rate of 98%. As a result, the United States put this model into use in the financial industry in the late 20th century. This model is regarded as the basis of convolutional neural networks; it has six layers in total, all convolution kernels are 5\(\times\)5 with a stride of 1, and it uses average pooling: conv \(\rightarrow\) pool \(\rightarrow\) conv \(\rightarrow\) pool \(\rightarrow\) conv(fc) \(\rightarrow\) fc. AlexNet. This model uses the ReLU function as the activation function, which alleviates the problem that the gradient of the sigmoid function tends to vanish and become uncomputable in networks with more layers. Some improvements are also made in the final fully connected layers, where only a random subset of neurons is selected to participate in each computation, which helps prevent overfitting. Convolutional neural networks usually use average pooling and maximum pooling alternately, but in this model only maximum pooling is used, basically ignoring the parameter error of the convolutional layer and the size limitation of the neighborhood. This model reduces the stride so that the pooling kernel is larger than the stride, and the output of the pooling layer thus has richer features. A local response normalization layer is introduced for the first time, so that the responses of the neurons in this layer inhibit each other, improving the generalization ability. VGGNet. The LRN layer used in AlexNet was not found to bring significant performance improvements to the network in later practice; accordingly, the LRN configuration of VGGNet shows no performance gain (A-LRN) and is not extended to the other network models.
VGGNet increases the number of network layers compared with previous networks: not counting the pooling and softmax layers, the number of layers in its structure is twice that of AlexNet or more. The concept of a convolutional block is proposed for the first time: 2\(\sim\)3 convolutional layers form a convolutional block, which reduces the number of parameters and, by using the ReLU activation function, enhances the learning ability. GoogLeNet. Inception V1 enriches the functionality of the convolution module compared to several previously proposed network structures. The previous structures improve the training effect, but this benefit largely comes from an increased number of layers, which deepens the network. However, the greater depth also brings many problems, such as overfitting, vanishing gradients, and increased computational effort. **SqueezeNet.** SqueezeNet's model compression uses 3 strategies: (1) replacing 3\(\times\)3 convolutions with 1\(\times\)1 convolutions: the number of convolution parameters is reduced to 1/9 of the original, which helps to improve the speed of the network; (2) reducing the number of channels of the 3\(\times\)3 convolutions: the computation of a 3\(\times\)3 convolution is 3\(\times\)3\(\times\)a\(\times\)b (where a and b are the numbers of channels of the input Feature Map and the output Feature Map, respectively), so reducing the number of channels reduces the number of parameters, which helps to simplify the operation and improve the performance of the network; (3) postponing downsampling: larger Feature Maps contain more information, so downsampling is moved towards the classification layer. Such an operation can improve the accuracy of the network, but it increases the computational burden of the network. **ResNet.** Before introducing the model, it is necessary to understand the concept of residuals and, first of all, to distinguish between residuals and errors. The error is the measured value minus the reference value, while the residual is the difference between the actual observed value and the predicted value; the residual can therefore indicate whether the prediction is accurate. Let the function of one layer in the residual network be \(y=F(x)\); the residual model can be expressed as \(H(x)=G(x)+x\), that is, \(G(x)=H(x)-x\). In the identity mapping, \(y=x\) is the actual observed value and \(H(x)\) is the fitted value, so \(G(x)\) corresponds to the residual, which is why the network is called a residual network. Figure 5: Residual network. Without the residuals, as shown by the plain connection on the left side of Figure 5, the training performance and the network depth become negatively correlated as the number of layers increases. In contrast, theoretically, increasing the network depth should be positively correlated with the model's training effect. Theory and practice often deviate: for an ordinary network without skip connections, greater depth makes the computation more complicated, and improving and enhancing the algorithm becomes more difficult. Therefore, in reality, there is a positive correlation between the depth of the network and the training error. To solve this problem, the network would need to detect the existence of redundant layers by itself, which complicates the optimization algorithm and does not achieve the identity mapping.
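Before turning to how ResNet resolves this degradation problem, the following minimal PyTorch sketch expresses the residual relation \(H(x)=G(x)+x\) described above in code; the two-convolution block and its channel count are illustrative assumptions rather than the exact architecture used in this thesis.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A minimal residual block: the output is H(x) = G(x) + x,
    so the stacked layers only have to learn the residual G(x) = H(x) - x."""

    def __init__(self, channels: int):
        super().__init__()
        self.g = nn.Sequential(                      # G(x): the residual branch
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.g(x) + x)              # skip connection adds x back

block = ResidualBlock(16)
x = torch.randn(1, 16, 32, 32)
print(block(x).shape)  # torch.Size([1, 16, 32, 32]); a redundant block can learn G(x) close to 0
```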
The ResNet model solves this problem in a very fitting way: the parameters of the redundant layers are updated towards the residual \(G(x)=0\) instead of towards the fitted value \(H(x)=x\). That is, after the network spontaneously detects and infers which layers are redundant and useless, the residual function \(G(x)=0\) makes the output of such a layer match the input of the previous layer exactly, as if the redundant layer had been removed. In this way, the effect of errors caused by redundant layers is almost eliminated, effectively solving the network degradation problem. As an example to explore the cause of network degradation: when one first designs a network, one does not know from actual experiments how many layers the network structure needs. To be on the safe side and to enable the network to train well, people tend to set up more layers than necessary. When the network is actually trained, it may turn out that only half the number of layers is needed to complete the task, and the extra layers are then redundant. Therefore, we hope that during the training process the model can find out that the other half of the layers are redundant and learn an identity mapping for exactly those layers, so that the input data is identical to the output data after passing through them. But often the model learns this half of the identity mappings incorrectly, so it may not work as well as a model configured with, say, 2/3 of the original number of layers. Therefore, as the number of layers of the network increases, the training effect of the model may degrade, which is caused by the redundant layers learning the wrong identity mappings. ### DenseNet. Taken as a whole, DenseNet has the following advantages over the previous models: (1) the use of dense connectivity, which mainly improves the back-propagation speed of gradients and thus accelerates the training of convolutional neural networks; (2) the number of parameters and their values are reduced, improving the efficiency of computation and reducing the number of feature maps specific to each layer; (3) feature reuse, which reuses the low-level features so that the last layer can play its role in classification. ### MobileNet. MobileNet-v1. In a nutshell, V1 replaces the usual convolutional layers of VGG with depthwise separable convolutions and can therefore greatly reduce the number of parameters; it also adds the hyperparameters \(\alpha\) and \(\beta\) on top of VGG. MobileNet-v2. MobileNetV2 was proposed by Google in 2018, with better accuracy and a smaller model compared to V1. The model's highlights are the Inverted Residuals structure and Linear Bottlenecks. #### 2.3.2 Deep Residual Learning. The core difference of this algorithm is that it proposes a new block structure, formed by expanding and repeating the same topology, which replaces the convolutional block structure of the previous model; this can improve the prediction performance and accuracy of the model while adding almost no new parameters. The topology expansion also reduces the number of hyperparameters and improves the generality of the model. #### 2.3.3 ShuffleNet. ShuffleNet-v1 is improved by two new operations: pointwise group convolution and channel shuffle; similar to the previous model, these ensure the accuracy of the network's output while reducing the computational complexity.
The basic cell structure of the model is optimized and improved based on the residual model cells. ShuffleNet-v2. The number of neurons in this model is relatively small, and the number of branches between layers is thus reduced to speed up model convergence. The model's speed depends on the number of input and output feature channels, but too many grouping parameters can affect the model's convergence speed. #### 2.3.4 EfficientNet. Convolutional neural networks are usually built after evaluating the available resources, and the more resources are available, the better the performance of the network model will be. This model delves into how to scale a model up and down, and finds that balancing the depth and width of the network across the layers, or reducing the gap in resolution, can both improve the network's effectiveness. Therefore, a new method is proposed that balances these three characteristics of the network with compound coefficients. This model was born out of the desire to find a new balance between network depth, width and resolution in order to improve the accuracy of the network. Previous models used only one of these aspects to evaluate the effectiveness of the network. This model found that these three aspects together have an impact on the scaling of the network, explored the evidence of the interaction between the three, and on this basis found the best combination of the three. ### General Methods for Gradient-Based Attacks #### 2.4.1 Membership inference. Membership inference [28] refers to inferring, given the known trained model and a delimited range of data points, whether these data points were used in the process of training the model. In federated learning, the updated gradient information is fed back to the server every round, so the server holds some information about the local models. With this attack algorithm, the server is able to learn whether the delimited data points were used for model training or not. Sometimes, in certain situations, this attack can directly lead to a privacy breach. For example, if the attack learns that a patient's clinical records were used for training a model for a particular disease, the fact that the patient has that disease is compromised. In practice, Melis et al. demonstrated that this attack approach is extremely accurate on the FourSquare location dataset [29] and can almost always determine whether a particular data point was used for the category classification training. Attribute inference. Attribute inference refers to inferring, based on the known trained model, whether the corresponding training set contains records with a given labeled attribute. Note that the attribute need not be relevant to the main task. When training a model on the LFW dataset [30] for identifying gender or race, attribute inference can infer whether the subjects wear a mask or not, in addition to the two known labels. In practice, this also poses a potential risk of privacy compromise. If a patient's age, gender, race, and whether they wear a mask or not are known, there is a high risk that the patient's personal information will be compromised, even if the name and clinical records remain confidential. Model inversion. Model inversion is a greater threat to the privacy of the training dataset than the first two attacks.
Since the learning process is always ongoing, this attack exploits this property by having the adversary train a generative adversarial network (GAN) [31] to generate samples that match the training dataset. The results of the attack show that the images obtained are almost identical to the original images, since the GAN is able to create matching samples that are nearly identical to the original training dataset. Moreover, the higher the similarity of the training set, the better the performance of this attack.The above three attack strategies reveal that the information in the gradient is at risk of leakage to some extent, but each of these three attacks has its own limitations. The membership inference attack relies on delimited data, and the attack will be much more difficult when the input data is not textual information (e.g., images, voice). Attribute inference relaxes the constraint that only a label is needed to perform the attack. However, the attack result will narrow the scope and there is no guarantee to find the specific data. For model inversion, although it can generate synthetic images directly from the statistical distribution of the training data, the results are similar alternatives (rather than the original data) and only work when all class members are similar. What will be investigated and demonstrated in this paper is how to steal the training data completely from the gradient information without prior training data. ### Summary of this chapter This chapter introduced the types of networks and their structures used in this attack. The first section starts with the federal learning system and outlines how it updates the model by gradients; the second section describes the working principle of convolutional neural networks suitable for training classification images and the structure of each level; the third section briefly describes the commonly used convolutional neural network models and provides the basis for the next study on how to select and apply such models for training; the fourth section introduces the The fourth subsection introduces some methods that can be used to perform gradient attacks with prior knowledge of the training data. The theoretical foundation is laid for the subsequent research in this paper to prove the attack algorithm based on gradient features only. ## 3 Design of reconstruction attack algorithm based on gradient features The subject under study is a reconstruction attack based on gradient features, using a convolutional neural network for the training of a federal learning system for image classification. In this paper, we need to use the gradient derived from the image and its label information trained by the convolutional neural network to restore the original information. This chapter first introduces the principle of the attack that can obtain part of the original data, and then delves into the analysis and study of the algorithm that restores the complete original information based on the gradient. ### Local leakage of specific layers First, this chapter starts with a few special layers to study and optimize the attack algorithm step by step. The first one is the fully-connected layer (FC). The fully connected layer is indispensable in both neural networks and convolutional neural networks. 
For the biased fully connected layer, it can be proven mathematically that the original input data can be reconstructed from the gradient information regardless of the position of this layer and of the types of layers before and after it. Lemma 1: Suppose a fully connected layer of a neural network contains weights and biases, with input \(X\in\mathbb{R}^{n}\) and output \(Y\in\mathbb{R}^{m}\), weight \(W\in\mathbb{R}^{m\times n}\) and bias \(B\in\mathbb{R}^{m}\); then \[Y=WX+B\] (3-1) If there exists \(\frac{dL}{dB_{i}}\neq 0\), then the input data \(X\) can be reconstructed from \(\frac{dL}{dW}\) and \(\frac{dL}{dB}\). The proof is as follows: it is known that \(\frac{dL}{dB_{i}}=\frac{dL}{dY_{i}}\) and \(\frac{dY_{i}}{dW_{i}}=X^{T}\), so \[\frac{dL}{dW_{i}}=\frac{dL}{dY_{i}}\cdot\frac{dY_{i}}{dW_{i}}=\frac{dL}{dB_{i}}\cdot X^{T}\] (3-2) where \(Y_{i}\), \(W_{i}\) and \(B_{i}\) denote the \(i\)th row of the output \(Y\), the weight \(W\) and the bias \(B\). Therefore, the input \(X\) can be reconstructed from this formula as long as \(\frac{dL}{dB_{i}}\neq 0\) is satisfied. The bias gradient \(\frac{dL}{dB}\) is thus crucial for reconstructing the input of the layer. To make the gradient attack more general, Geiping et al. delved deeper and found that if the bias \(B\) is eliminated, the original input data can still be restored from a small amount of gradient information, as long as a suitable activation function (e.g., the ReLU activation function) is used. The proof process is similar, and the reconstruction of the input data in the fully connected layer still works well. Even when such a closed-form derivation is not available, information about the input data is still implied in the gradient. For example, in a language classification task, the federated learning system generates nonzero gradients only for the words that appear in the input, and the attack thus reveals which words and phrases were used for model training in each local data set. The cross-entropy layer in a classification task, on the other hand, generates a negative gradient only for the entry corresponding to the correct label. This property gives away the true data labels to some extent. However, there are many more factors to consider when extending from the fully connected layer (FC) to the more complex convolutional layer (CONV), where the number of features in the convolutional layer and the dimensionality of the input are much larger than the number of gradient values. An analytic reconstruction method like the one in Lemma 1 will no longer be applicable. Modern convolutional neural networks require a more general attack algorithm.
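As a numerical sanity check of Lemma 1, the following PyTorch sketch builds a biased fully connected layer, computes the loss gradients, and reconstructs the input by dividing a row of \(dL/dW\) by the corresponding entry of \(dL/dB\); the layer sizes and the squared-error loss are illustrative assumptions, not taken from the thesis.

```python
import torch

torch.manual_seed(0)
n, m = 6, 4                       # input and output dimensions
x = torch.randn(n)                # the "private" input we try to recover
W = torch.randn(m, n, requires_grad=True)
B = torch.randn(m, requires_grad=True)

y = W @ x + B                     # Y = WX + B, Eq. (3-1)
loss = (y ** 2).sum()             # any differentiable loss L works here
loss.backward()

# Pick a row i with dL/dB_i != 0 and apply Eq. (3-2): dL/dW_i = dL/dB_i * x^T.
i = torch.argmax(B.grad.abs())
x_reconstructed = W.grad[i] / B.grad[i]

print(torch.allclose(x, x_reconstructed, atol=1e-5))  # True: the input is fully recovered
```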
### Complete leakage of the gradient

Zhu et al. [33] proposed a new and improved method that solves the above problem: it uses a neural network with the same structure and matches gradients to reconstruct the original dataset, even though that dataset is private and is never exchanged between participants. The generality and attack capability of this method are broader and more powerful than those of the methods in the previous subsection, and the technique is called the Deep Gradient Leakage (DLG) algorithm. DLG is a reconstruction attack based on gradient features. The attacker receives the gradient update \(\nabla W_{t,k}\) shared by another participant \(k\) in round \(t\), and aims to obtain the training pair \((x_{t,k},y_{t,k})\) of participant \(k\) from this shared information.

Figure 3-1 shows how the attack steals image information: a normal participant feeds an image from its private data into the model \(F\), derives a prediction, uses the difference between the prediction and the label to compute a gradient, and this gradient is shared in order to update the model. The attack first generates a dummy image of random pixels with the same size as the real image, and then initializes a dummy label representing the class probabilities; in the cat-and-dog classification explored here, the label value is 0 for cat and 1 for dog, and a softmax layer is applied to the dummy label. DLG then iteratively matches the image and the label on the intermediate local model to compute a virtual gradient. Note that most federated learning settings share the differentiable model \(F(x,W)\) and the weights \(W\) by default. The loss function is set to the difference between the true gradient and the virtual gradient, squared so that the loss is non-negative. The key point of this reconstruction attack is to narrow the gap between the real gradient and the virtual gradient by continuously iterating: the difference is propagated back, the attacker's parameters are updated, and the attacker's virtual gradient gradually approximates the real gradient. When the matching loss approaches zero, the dummy image also becomes arbitrarily close to the original data image. In Figure 6, the quantities that the attacker updates are the dummy input and the dummy label. While the local model is trained on its private data and computes the corresponding \(\nabla W\), the attacker uses its own randomly generated input image and label to derive a gradient \(\nabla W^{\prime}\), computes the difference between the two gradients, and uses this difference to adjust and update its virtual input \(X\) and label \(Y\) so that the gradient loss function converges to a minimum. When the optimization is complete, the attacker can recover the original data information of the local model. The flow of the algorithm is shown next in mathematical form.

\[\mathbf{x}^{\prime*},\mathbf{y}^{\prime*}=\arg\underset{\mathbf{x}^{\prime},\mathbf{y}^{\prime}}{\min}\left\|\nabla W^{\prime}-\nabla W\right\|^{2}=\arg\underset{\mathbf{x}^{\prime},\mathbf{y}^{\prime}}{\min}\left\|\frac{\partial\ell\left(F(\mathbf{x}^{\prime};W),\mathbf{y}^{\prime}\right)}{\partial W}-\nabla W\right\|^{2}\] (3-3)

This equation shows how the virtual input \(\mathbf{x}^{\prime*}\) and the virtual label \(\mathbf{y}^{\prime*}\) are obtained from the gradient. The inputs are \(F(\cdot)\): the differentiable machine learning model; \(W\): the parameter weights; \(\nabla W\): the gradient computed from the training data; \(\eta\): the learning rate used for the DLG optimization. The outputs are the original private training data \(\mathbf{x}\) and the labels \(\mathbf{y}\).

1 DLG algorithm (\(F\), \(W\), \(\nabla W\))
2 \(\mathbf{x}^{\prime}_{1}\leftarrow\mathcal{N}(0,1),\ \mathbf{y}^{\prime}_{1}\leftarrow\mathcal{N}(0,1)\) Initialize the virtual input and label.
3 for \(i\gets 1\) to \(n\) do
4 \(\mathbf{l}^{\prime}_{i}=\text{softmax}(\mathbf{y}^{\prime}_{i})\)
5 \(\nabla W^{\prime}_{i}\leftarrow\partial\ell(F(\mathbf{x}^{\prime}_{i},W),\mathbf{l}^{\prime}_{i})/\ \partial W\) Calculate the virtual gradient.
6 \(\mathbb{D}_{i}\leftarrow\left\|\nabla W^{\prime}_{i}-\nabla W\right\|^{2}\) Measure the distance between the virtual and real gradients.
7 \(\mathbf{x}^{\prime}_{i+1}\leftarrow\mathbf{x}^{\prime}_{i}-\eta\nabla_{\mathbf{x}^{\prime}_{i}}\mathbb{D}_{i}\) Update the virtual input according to the gradient.
8 \(\mathbf{y}^{\prime}_{i+1}\leftarrow\mathbf{y}^{\prime}_{i}-\eta\nabla_{\mathbf{y}^{\prime}_{i}}\mathbb{D}_{i}\) Update the virtual label according to the gradient.
9 return \(\mathbf{x}^{\prime}_{n+1}\), \(\mathbf{y}^{\prime}_{n+1}\)

Figure 6: DLG algorithm

It is important to note that the gradient distance, i.e., the matching loss, must itself be differentiable, so that the virtual input data \(x\) and label \(y\) can be optimized with a standard gradient-based approach; it follows that the optimization requires a twice-differentiable model. Here it is assumed that \(F\) is twice differentiable, which holds for most modern AI models, most neural networks and the related tasks.
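To complement Figure 6, a minimal PyTorch-style sketch of the DLG loop is given below. It is a hypothetical toy example (a tiny two-class model, a small random "private" image, and names such as `x_dummy` chosen for illustration), not the implementation evaluated in Chapter 4; it only illustrates the gradient-matching objective (3-3), using L-BFGS instead of the plain gradient steps of lines 7-8.

```python
import torch
import torch.nn.functional as Fnn

torch.manual_seed(0)

# Tiny differentiable model F(x, W): flatten + linear layer, two classes (cat/dog).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 2))

# Gradient that an honest participant would share (computed on its private data).
x_real = torch.rand(1, 3, 8, 8)
y_real = torch.tensor([1])                       # label: dog
real_loss = Fnn.cross_entropy(model(x_real), y_real)
real_grad = [g.detach() for g in torch.autograd.grad(real_loss, model.parameters())]

# Virtual (dummy) image and label, initialized at random and optimized by the attacker.
x_dummy = torch.randn(1, 3, 8, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(30):
    def closure():
        optimizer.zero_grad()
        pred = model(x_dummy)
        # Soft-label cross entropy, as in lines 4-5 of Figure 6.
        dummy_loss = torch.sum(-Fnn.softmax(y_dummy, dim=-1) * Fnn.log_softmax(pred, dim=-1))
        dummy_grad = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
        # Gradient distance D_i of line 6; create_graph=True provides the
        # second-order terms needed to differentiate it w.r.t. x_dummy, y_dummy.
        grad_dist = sum(((dg - rg) ** 2).sum() for dg, rg in zip(dummy_grad, real_grad))
        grad_dist.backward()
        return grad_dist
    optimizer.step(closure)

# As the gradient distance shrinks, the dummy image approaches the private one.
print(((x_dummy.detach() - x_real) ** 2).mean().item())
```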
### Optimization of the DLG algorithm

The DLG algorithm can restore the complete original image in most scenes, but in this work we found that some images cannot be fully restored in practice, and we propose an improvement to address this problem. The original gradient is generated from the pixel information of the input image and its label through repeated matching; the richer and more vivid the colors of an image, the more information its three RGB channels carry, the more pixel information it contains, the more complex the generated gradient is, the more information the attack can extract, and the easier it is to restore the original image. Inspecting the parts of images that fail to converge fully, most of them are large blank areas, which carry relatively little pixel information, so the complete image cannot be restored: the uneven distribution of pixel information and the small amount of information in local areas make restoration difficult. The improved algorithm therefore adds the computation of the average amount of information contained in the image, from which the overall hue of the image is inferred; the deviation of each pixel from this average is then computed and fed back into the gradient computation to adjust the parameters. When an image contains mostly light-colored areas, its average value is relatively small, so after the color-rich areas have been restored and further iterations are performed, the remaining areas can be inferred to be light-colored from the average pixel value, which reduces the number of leftover random and dark pixel points to some extent.
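The text above does not give the improved objective as an explicit formula. One plausible reading is an extra penalty that pulls unresolved pixels of the dummy image toward its current average tone; the sketch below is a hypothetical illustration of that reading (the weight `lam` and the function name are assumptions, not taken from this thesis), and it would simply replace the gradient distance in a matching loop such as the one sketched above.

```python
import torch

lam = 0.01  # assumed weight of the tone penalty (illustrative value only)

def improved_objective(grad_dist: torch.Tensor, x_dummy: torch.Tensor) -> torch.Tensor:
    # Infer the overall hue of the dummy image from its average pixel value,
    # then penalize the squared deviation of every pixel from that average, so
    # that low-information (light, nearly blank) regions are pulled toward the
    # inferred tone instead of remaining as leftover random pixel points.
    mean_tone = x_dummy.mean()
    tone_penalty = ((x_dummy - mean_tone) ** 2).mean()
    return grad_dist + lam * tone_penalty
```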
### Summary of this chapter

Starting from the simplest fully connected layer, this chapter analyzed the principle of reconstructing the input data from the gradient; this method, however, has its limitations and does not apply to CNN networks. An optimization algorithm was then introduced which not only removes the original limitations but also restores the original data better, completely recovering the original image and label from the gradient alone. Finally, based on the shortcomings of the DLG algorithm, an improvement method was proposed.

## 4 Performance evaluation of the reconstruction attack algorithm based on gradient features

This chapter presents the implementation of the gradient feature-based reconstruction attack algorithm and the performance evaluation of it and of the improved algorithm.

### System Environment

The attack in this paper is implemented in Python, developed in the PyCharm environment, and the libraries, versions and configurations used are described in Table 1 below.

\begin{table} \begin{tabular}{|c|c|c|} \hline Library & Version & Description \\ \hline opencv-python & 4.5.5.62 & Converts images into pixel information. \\ \hline Pillow & 8.4.0 & Image processing. \\ \hline scikit-learn & 1.0.2 & Classification, regression and clustering algorithms. \\ \hline scipy & 1.7.3 & Differentiation, optimization, image processing. \\ \hline tensorboard & 2.8.0 & Visualize the training process. \\ \hline torch & 1.10.1 & Deep learning framework for tensor computation and model training. \\ \hline torchvision & 0.11.2 & Process image data. \\ \hline \end{tabular} \end{table} Table 1: Software Configuration Description

The experiments are run on a CPU; since training images on a CPU is slow, it is recommended to use a GPU for model training when conditions allow, to improve training efficiency. This section compares the DLG algorithm and its improved version using two metrics: the visual quality of the restored image and a quantitative restoration error, measured as the mean square error (MSE) between the restored image and the original image data.

### Implementation of reconstruction attacks based on gradient features

#### Dogs and cats classification dataset

The training set of this model uses the cat and dog dataset released by Kaggle in 2013, which consists of 25,000 examples, including 12,500 examples of cats and 12,500 examples of dogs. In this paper, 20,000 images are selected as the training dataset and 2,500 as the test dataset. The data consists of RGB three-channel images of various sizes, in which the cats and dogs vary in appearance and environment, and the label values of cats and dogs are set to 0 and 1, respectively.

#### Implementation of DLG algorithm

The attack process is shown in the figure below. Every DLG attack starts from a randomly generated pixel image (the first image) and makes the generated virtual gradient approximate the real gradient value as closely as possible. As shown in Table 2, the decrease of the mean square error between the virtual image data and the original image data indicates the degree of convergence, reflecting that the virtual image gradually approaches the original image.

### Improved implementation of the algorithm

From the table above, it can be seen that, during the restoration of the dog and cat images, the number of residual random pixel points in a restored image is positively correlated with its mean square error with respect to the original.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Original image & DLG restoration & DLG mean square error & Improved restoration & Improved mean square error \\ \hline & & 24.06 & & 19.54 \\ \hline & & 47.55 & & 42.45 \\ \hline & & 40.11 & & 25.36 \\ \hline & & 28.81 & & 24.34 \\ \hline & & 30.30 & & 22.41 \\ \hline & & 28.35 & & 15.28 \\ \hline \end{tabular} \end{table} Table 3: Comparison of the restoration effect of the DLG algorithm and the improved algorithm (the image columns are figures not reproduced here)

It can be seen visually from the images that the improved algorithm leaves relatively fewer random pixel points and that the mean square error between its restored images and the original images is smaller.

### Experimental results and analysis

The DLG attack algorithm used in this paper can attack and restore the vast majority of the original cat and dog pictures from the gradient, as shown in Figure 7 and Figure 8. Meanwhile, as shown in Table 2, the mean square error between the restored data and the original data also tends to a minimum value, staying at around 3. However, when training on a large number of images, it was found that some images converge poorly and still contain randomly generated pixel points. Such images usually contain areas whose color is light, nearly white; after improving the algorithm, as shown in Table 3, the improved algorithm restores the light-colored areas better and the mean square error with respect to the original images is smaller. This illustrates that the reconstruction attack based on gradient features is essentially able to restore the local data images in a federated learning system.

### Summary of this chapter

This chapter covered the implementation and improvement of the gradient feature-based reconstruction attack. The first subsection introduced the programming language used to implement the algorithm, the programming environment, and all the libraries used; the second subsection described the dataset and showed the results of the attack algorithm in detail; the third subsection analyzed the results and demonstrated that the gradient feature-based reconstruction attack is a threat to the local data of a federated learning system [34-55].

## 5 Conclusion and Outlook

### Conclusion

In this paper, we study the reconstruction attack based on gradient features, mainly using deep learning techniques and algorithms [56-62] to carry out reconstruction attacks. The paper investigates the mechanism of federated learning, the structural hierarchy of convolutional neural networks (CNNs), and the deep gradient leakage (DLG) algorithm, which does not rely on the original dataset for the attack. The cat and dog classification dataset is selected as the training data for federated learning, and LeNet, a classical CNN model, is used for training. The Python language and various libraries in the PyCharm environment are used to implement the reconstruction attack based on gradient features, and the original attack algorithm is improved so that the restored original images are better, which proves that federated learning gradients carry a risk of information leakage.
### Deficiencies and problems

In this paper, the gradient-based attack on the gradients of a federated learning system is implemented using the relevant techniques, but some problems were found during the implementation and testing of the attack, which require continuous improvement and optimization.

(1) When trying to restore high-resolution images, the attack algorithm is not stable enough, the convergence speed is too slow, and the restoration quality is poor.

(2) When the attack algorithm is applied to images containing only two strongly contrasting colors (such as black and white), it may fail to converge or converge poorly, and the restored images contain a large number of random pixel points.

(3) For the time being, the attack algorithm can only restore one original image from one gradient input, and cannot take multiple gradients as input to restore multiple images at the same time.

(4) The current algorithm is still not widely applicable; for instance, it cannot attack models trained on text data.

### Outlook for follow-up work

Federated learning systems will be more widely used in future artificial intelligence technology; although they have not yet been adopted in some industries, their efficiency means that they will certainly be used more in the future and bring more convenience to everyday life. The research in this paper raises certain questions about the confidentiality of federated learning, and this attack algorithm can be further studied and optimized in depth.

(1) The DLG algorithm can currently restore most images, but some problems remain; future work aims to continue improving this algorithm and to increase its convergence speed and restoration accuracy.

(2) Different training set categories and training set sizes may affect the training and attack performance of the CNN network; images of different categories can be added to strengthen the attack algorithm.

(3) The attack algorithm currently cannot attack multiple images in batches, and the attack is slow; this can be further improved to enhance efficiency.
2309.06265
A total variation version of Breuer--Major Central Limit Theorem under $\mathbb{D}^{1,2}$ assumption
In this note, we establish a qualitative total variation version of Breuer--Major Central Limit Theorem for a sequence of the type $\frac{1}{\sqrt{n}} \sum_{1\leq k \leq n} f(X_k)$, where $(X_k)_{k\ge 1}$ is a centered stationary Gaussian process, under the hypothesis that the function $f$ has Hermite rank $d \geq 1$ and belongs to the Malliavin space $\mathbb D^{1,2}$. This result in particular extends the recent works of [NNP21], where a quantitative version of this result was obtained under the assumption that the function $f$ has Hermite rank $d= 2$ and belongs to the Malliavin space $\mathbb D^{1,4}$. We thus weaken the $\mathbb D^{1,4}$ integrability assumption to $\mathbb D^{1,2}$ and remove the restriction on the Hermite rank of the base function. While our method is still based on Malliavin calculus, we exploit a particular instance of Malliavin gradient called the sharp operator, which reduces the desired convergence in total variation to the convergence in distribution of a bidimensional Breuer--Major type sequence.
Jürgen Angst, Federico Dalmao, Guillaume Poly
2023-09-12T14:28:32Z
http://arxiv.org/abs/2309.06265v1
A total variation version of Breuer-Major Central Limit Theorem under \(\mathbb{D}^{1,2}\) assumption ###### Abstract In this note, we establish a qualitative total variation version of Breuer-Major Central Limit Theorem for a sequence of the type \(\frac{1}{\sqrt{n}}\sum_{1\leq k\leq n}f(X_{k})\), where \((X_{k})_{k\geq 1}\) is a centered stationary Gaussian process, under the hypothesis that the function \(f\) has Hermite rank \(d\geq 1\) and belongs to the Malliavin space \(\mathbb{D}^{1,2}\). This result in particular extends the recent works of [20], where a quantitative version of this result was obtained under the assumption that the function \(f\) has Hermite rank \(d=2\) and belongs to the Malliavin space \(\mathbb{D}^{1,4}\). We thus weaken the \(\mathbb{D}^{1,4}\) integrability assumption to \(\mathbb{D}^{1,2}\) and remove the restriction on the Hermite rank of the base function. While our method is still based on Malliavin calculus, we exploit a particular instance of Malliavin gradient called the sharp operator, which reduces the desired convergence in total variation to the convergence in distribution of a bidimensional Breuer-Major type sequence. ## 1 Framework and main result Let us consider \(X=(X_{n})_{n\geq 1}\) a real-valued centered stationary Gaussian sequence with unit variance, defined on an abstract probability space \((\Omega,\mathscr{F},\mathbb{P})\). Let \(\rho:\mathbb{N}\to\mathbb{R}\) be the associated correlation function, in other words \(\rho(|k-\ell|)=\mathbb{E}[X_{k}X_{\ell}]\), for all \(k,\ell\geq 1\). We will also classically denote by \(\mathcal{N}(0,\sigma^{2})\) the law of a centered normal variable with variance \(\sigma^{2}\). Set \(\gamma(dx):=(2\pi)^{-1/2}e^{-x^{2}/2}dx\) the standard Gaussian measure on the real line and \(\gamma_{d}=\otimes_{k=1}^{d}\gamma\) its analogue in \(\mathbb{R}^{d}\). We then denote by \((H_{m})_{m\geq 0}\) the family of Hermite polynomials which are orthogonal with respect to \(\gamma\), namely \(H_{0}\equiv 1\) and \[H_{m}(x):=(-1)^{m}e^{\frac{x^{2}}{2}}\frac{d^{m}}{dx^{m}}e^{-\frac{x^{2}}{2}}, \quad m\geq 1.\] We denote by \(L^{2}(\mathbb{R},\gamma)\) the space of square integrable real functions with respect to the Gaussian measure. Recall that a real function \(f\in L^{2}(\mathbb{R},\gamma)\) is said to have Hermite rank \(d\geq 0\) if it can be decomposed as a sum of the form \[f(x)=\sum_{m=d}^{+\infty}c_{m}H_{m}(x),\quad c_{d}\neq 0.\] For integers \(k,p\geq 1\), we further denote by \(\mathbb{D}^{k,p}(\mathbb{R},\gamma)\) the Malliavin-Sobolev space consisting of the completion of the family of polynomial functions \(q:\mathbb{R}\to\mathbb{R}\) with respect to the norm \[||q||_{k,p}:=\left|\int_{\mathbb{R}}\left(|q(x)|^{p}+\sum_{\ell=1}^{k}|q^{( \ell)}(x)|^{p}\right)\gamma(dx)\right|^{1/p},\] where \(q^{(\ell)}\) is the \(\ell\)-th derivative of \(q\). Given a real function \(f\), let us finally set \[S_{n}(f):=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}f(X_{k}).\] In this framework, the celebrated Central Limit Theorem (CLT) by Breuer and Major gives sufficient conditions on \(\rho\) and \(f\) so that the sequence \(S_{n}(f)\) satisfies a CLT. **Theorem 1** (Theorem 1 in [1]).: _If the function \(f\) belongs to \(L^{2}(\mathbb{R},\gamma)\) with Hermite rank \(d\geq 1\) and if \(\rho\in\ell^{d}(\mathbb{N})\), i.e. 
\(\sum_{\mathbb{N}}|\rho(k)|^{d}<+\infty,\) then the sequence \((S_{n}(f))_{n\geq 1}\) converges in distribution as \(n\) goes to infinity to a normal distribution \(\mathcal{N}(0,\sigma^{2})\), where the limit variance is given by_ \[\sigma^{2}:=\sum_{m=d}^{\infty}m!c_{m}^{2}\sum_{k\in\mathbb{Z}}\rho(k)^{m},\] _with \(c_{m}\) being the coefficients appearing in the Hermite expansion of \(f\)._ Recently, under mild additional assumptions, a series of articles has reinforced the above convergence in distribution into a convergence in total variation, with polynomial quantitative bounds, see e.g. [13, 14, 15, 16]. Recall that the total variation distance between the distributions of two real random variables \(X\) and \(Y\) is given by \[d_{\mathrm{TV}}(X,Y):=\sup_{A\in\mathcal{B}(\mathbb{R})}|\mathbb{P}(X\in A)- \mathbb{P}(Y\in A)|,\] where the supremum runs over \(\mathcal{B}(\mathbb{R})\), the Borel sigma field on the real line. To the best of our knowledge, the best statement so far in this direction is the following **Theorem 2** (Theorem 1.2 in [16]).: _Assume that \(f\in L^{2}(\mathbb{R},\gamma)\) has Hermite rank \(d=2\) and that it belongs to \(\mathbb{D}^{1,4}(\mathbb{R},\gamma)\). Suppose that \(\rho\in\ell^{d}(\mathbb{N})\) and that the variance \(\sigma^{2}\) of Theorem 1 is positive. Then, there exists a constant \(C>0\) independent of \(n\) such that_ \[d_{\mathrm{TV}}\left(\frac{S_{n}(f)}{\sqrt{\text{var}(S_{n}(f))}},\mathcal{N} (0,1)\right)\leq\frac{C}{\sqrt{n}}\left[\left(\sum_{|k|\leq n}|\rho(k)|\right)^ {\frac{1}{2}}+\left(\sum_{|k|\leq n}|\rho(k)|^{\frac{4}{3}}\right)^{\frac{3}{ 2}}\right].\] The goal of this note is to establish that the convergence in total variation in fact holds as soon as the function \(f\) is in the Malliavin-Sobolev space \(\mathbb{D}^{1,2}(\mathbb{R},\gamma)\) and has Hermite rank \(d\geq 1\). **Theorem 3**.: _Suppose that \(f\in\mathbb{D}^{1,2}(\mathbb{R},\gamma)\) has Hermite rank \(d\geq 1\). Suppose moreover that \(\rho\in\ell^{d}(\mathbb{N})\) and that the variance \(\sigma^{2}\) of Theorem 1 is positive. Then, as \(n\) goes to infinity_ \[d_{\rm TV}\left(\frac{S_{n}(f)}{\sqrt{\text{var}(S_{n}(f))}},\mathcal{N}(0,1) \right)\xrightarrow[n\to+\infty]{}0.\] Note that, for the sake of simplicity, we only consider here a real Gaussian sequence \((X_{n})_{n\geq 1}\) and a real function \(f\) but our method is robust and would yield, under similar covariance and rank assumptions, a convergence in total variation for a properly renormalized sequence of the type \(\sum_{k=1}^{n}f(X_{k}^{1},\ldots,X_{k}^{d})\) associated with a sequence of Gaussian vectors \((X_{n})_{n\geq 1}\) with values in \(\mathbb{R}^{d}\) and a function \(f\) in the corresponding Malliavin-Sobolev space \(\mathbb{D}^{1,2}(\mathbb{R}^{d},\gamma_{d})\). The detailed proof of Theorem 3 is the object of the next section and the rest of the paper. Unsurprisingly, we use the Malliavin-Stein approach to establish the CLT in total variation. However, our approach differs from the other works mentioned above in that we make use of the so called "sharp gradient", whose definition and main properties are recalled in Section 2.2. With this tool at hand and in view of using Malliavin-Stein equation to characterize the proximity to the normal distribution, we shall see that the convergence in total variation in fact reduces to two rather simple steps \(i)\) a two-dimensional version of the classical Breuer-Major CLT (i.e. 
in distribution not in total variation), see Section 2.3; \(ii)\) some elementary uniform integrability estimates, allowing to pass from a convergence in probability to a convergence in \(L^{1}\), see Section 2.4.

## 2 Proof of the main result

As mentioned just above, the setting of the proof of Theorem 3 is the one of Malliavin-Stein calculus. Note that for each fixed \(n\geq 1\), the quantity of interest \(S_{n}(f)\) involves only a finite number of Gaussian coefficients. So let us sketch the framework of the Malliavin-Stein method in the finite dimensional setting, and we refer to [11] or [20] for a more general introduction.

### A glimpse of Malliavin calculus

Let us fix an integer \(n\geq 1\) and let us place ourselves in the product probability space \((\mathbb{R}^{n},\mathcal{B}(\mathbb{R}^{n}),\gamma_{n})\) with \(\gamma_{n}:=\otimes_{k=1}^{n}\gamma\), the \(n\)-dimensional standard Gaussian distribution on \(\mathbb{R}^{n}\). Consider the classical _Ornstein-Uhlenbeck_ operator \(\mathcal{L}_{n}:=\Delta-\vec{x}\cdot\nabla\) which is symmetric with respect to \(\gamma_{n}\). We have then the standard decomposition of the \(L^{2}-\)space in Wiener chaoses, namely \[L^{2}(\gamma_{n})=\bigoplus_{k=0}^{\infty}\operatorname{Ker}\left(\mathcal{L}_{n}+k\mathrm{I}\right),\quad\text{with}\] \[\operatorname{Ker}\left(\mathcal{L}_{n}+k\mathrm{I}\right)=\underbrace{\operatorname{Vect}\left(\prod_{i=1}^{n}H_{k_{i}}(x_{i})\ \Big|\ \sum_{i=1}^{n}k_{i}=k\right)}_{k\text{-th Wiener chaos}}.\] The square field or "carre du champ" operator \(\Gamma_{n}\) is then defined as the bilinear operator \(\Gamma_{n}:=[\cdot,\cdot]=\nabla\cdot\nabla\). As a glimpse of the power of the Malliavin-Stein approach in view of establishing total variation estimates, recall that if \(F\in\operatorname{Ker}\left(\mathcal{L}_{n}+k\mathrm{I}\right)\) is such that \(\mathbb{E}[F^{2}]=1\), then for some constant \(C_{k}\) only depending on \(k\), the total variation distance between the variable \(F\) and a standard Gaussian can be upper bounded by \[d_{TV}\left(F,\mathcal{N}(0,1)\right)\leq C_{k}\sqrt{\operatorname{var}\left(\Gamma\left[F,F\right]\right)}.\] Via the notion of isonormal Gaussian process, the finite dimensional framework for the Malliavin-Stein method sketched above can in fact be extended to the infinite dimensional setting, giving rise to an Ornstein-Uhlenbeck operator \(\mathcal{L}\) and an associated "carre du champ" \(\Gamma\), see e.g. Chapter 2 in [20].

### The sharp gradient

A detailed introduction to the sharp gradient can be found in Section 4.1 of the reference [1]. We only recall here the basics which will be useful to our purpose. Let us assume that \((N_{k})_{k\geq 1}\) is an i.i.d. sequence of standard Gaussian variables on \((\Omega,\mathcal{F},\mathbb{P})\) which generate the first Wiener chaos. Without loss of generality, we shall assume that \(\mathcal{F}=\sigma(N_{k},\ k\geq 1)\). We will also need a copy \((\hat{\Omega},\hat{\mathcal{F}},\hat{\mathbb{P}})\) of this probability space as well as \((\hat{N}_{i})_{i\geq 1}\) a corresponding i.i.d. sequence of standard Gaussian variables such that \(\hat{\mathcal{F}}=\sigma(\hat{N}_{k},\ k\geq 1)\). We will denote by \(\hat{\mathbb{E}}\) the expectation with respect to the measure \(\hat{\mathbb{P}}\).
For any integer \(m\geq 1\) and any function \(\Phi\) in the space \(\mathcal{C}^{1}_{b}(\mathbb{R}^{m},\mathbb{R})\) of continuously differentiable functions with a bounded gradient, we then set \[\sharp\Phi(N_{1},\cdots,N_{m}):=\sum_{i=1}^{m}\partial_{i}\Phi(N_{1},\cdots,N _{m})\hat{N}_{i}. \tag{1}\] In Sections 4.1.1 and 4.1.2 of [1], it is shown that this _gradient_ is closable and extends to the Malliavin space \(\mathbb{D}^{1,2}\), where \[\mathbb{D}^{1,2}:=\left\{F\in\mathbb{L}^{2}(\Omega,\mathcal{F},\mathbb{P}),\, \mathbb{E}[F^{2}]+\mathbb{E}\left[(\sharp F)^{2}\right]<+\infty\right\}.\] The last space \(\mathbb{D}^{1,2}\) is naturally the infinite dimensional version of the Malliavin-Sobolev space \(\mathbb{D}^{1,2}(\mathbb{R},\gamma)\) introduced in Section 1 in the one-dimensional setting. In particular, Proposition 8 in the latter reference shows that \[\forall F\in\mathbb{D}^{1,2},\,\forall\phi\in\mathcal{C}^{1}_{b}(\mathbb{R}, \mathbb{R})\ :\ \sharp\phi(F)=\phi^{\prime}(F)^{\sharp}F.\] Given \(F\in\mathbb{D}^{1,2}\), taking first the expectation \(\hat{\mathbb{E}}\) with respect \(\hat{\mathbb{P}}\) and using Fubini inversion of sums yields the following key relation, for all \(\xi\in\mathbb{R}\) \[\mathbb{E}\left(\exp\left(-\frac{\xi^{2}}{2}\Gamma[F,F]\right)\right)=\hat{ \mathbb{E}}\mathbb{E}\left(\exp\left(i\xi^{\sharp}F\right)\right). \tag{2}\] By essence, via their Laplace/Fourier transforms, this key equation allows to relate the asymptotic behavior in distribution (or in probability if the limit is constant) of the carre du champ \(\Gamma[F,F]\) with the one of the sharp gradient \({}^{\sharp}F\). Finally, let us remark that by definition, the image \(({}^{\sharp}X_{k})_{k\geq 1}\) of our initial stationary sequence \((X_{k})_{k\geq 1}\) by the sharp gradient is an independent copy of \((X_{k})_{k\geq 1}\). We will write \(({}^{\sharp}X_{k})_{k\geq 1}=(\tilde{X_{k}})_{k\geq 1}\) in the sequel. ### Convergence in probability via a two dimensional CLT Let us suppose that \(f\) satisfies the assumptions of Theorem 3, namely \(f\in\mathbb{D}^{1,2}(\mathbb{R},\gamma)\) with Hermite rank \(d\geq 1\), so that it can be decomposed as \(f=\sum_{m=d}^{\infty}c_{m}H_{m}\) in \(L^{2}(\mathbb{R},\gamma)\). Let \(\mathcal{L}^{-1}\) denote the pseudo-inverse of the Ornstein-Uhlenbeck operator and consider the pre-image \[g(x):=-\mathcal{L}^{-1}[f](x)=\sum_{m=d}^{\infty}\frac{c_{m}}{m}H_{m}(x).\] To simplify the expressions in the sequel, we set \[F_{n}:=S_{n}(f)=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}f(X_{k}),\ \ \text{and}\ \ G_{n}:=S_{n}(g)=-\mathcal{L}^{-1}F_{n}=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}g(X _{k}).\] Now, take \((s,t,\xi)\in\mathbb{R}^{3}\) and let us apply the above key relation (2) with the random variable \(tF_{n}+sG_{n}\), we get \[\mathbb{E}\left[\exp\left(-\frac{\xi^{2}}{2}\Gamma[tF_{n}+sG_{n},tF_{n}+sG_{n }]\right)\right]=\hat{\mathbb{E}}\mathbb{E}\left[\exp\left(i\xi\left(t{}^{ \sharp}F_{n}+s{}^{\sharp}G_{n}\right)\right)\right]. \tag{3}\] On the one hand, by bilinearity of the carre du champ operator, we have \[\Gamma[tF_{n}+sG_{n},tF_{n}+sG_{n}]=t^{2}\Gamma[F_{n},F_{n}]+s^{2}\Gamma[G_{n },G_{n}]+2ts\Gamma[F_{n},-\mathcal{L}^{-1}F_{n}]. 
\tag{4}\] On the other hand, the right hand side of Equation (3) is simply the characteristic function under \(\mathbb{P}\otimes\hat{\mathbb{P}}\) of the couple \(({}^{\sharp}F_{n},{}^{\sharp}G_{n})\) where \[\left({}^{\sharp}F_{n},{}^{\sharp}G_{n}\right)=\frac{1}{\sqrt{n}}\sum_{k=1}^{ n}\left(f^{\prime}(X_{k})\tilde{X_{k}},g^{\prime}(X_{k})\tilde{X_{k}}\right),\] is a "Breuer-Major type" sequence with respect to the \(\mathbb{R}^{2}-\)valued centered stationary Gaussian process \((\hat{X_{k}},X_{k})_{k\geq 1}\) and the \(\mathbb{R}^{2}-\)valued functional \[(x,y)\mapsto\Psi(x,y):=(f^{\prime}(x)y,g^{\prime}(x)y).\] Since \(f\) is in \(\mathbb{D}^{1,2}(\mathbb{R},\gamma)\), its derivative \(f^{\prime}\) is in \(L^{2}(\mathbb{R},\gamma)\) and \((\hat{X}_{k})_{k\geq 1}\) and \((X_{k})_{k\geq 1}\) are independent, therefore the functional \(\Psi\) is in \(L^{2}(\mathbb{R}^{2},\gamma_{2})\) and the multivariate counterpart of the classical Breuer-Major Theorem applies, see Theorem 4 of [10]. As a result, the bidimensional sequence \((\sharp F_{n},\sharp G_{n})\) converges in distribution, under \(\mathbb{P}\otimes\bar{\mathbb{P}}\), towards a bidimensional centered Gaussian vector with a symmetric semi-positive covariance matrix \(\Sigma\). Therefore, from Equations (3) and (4) and via the characterization of convergence in distribution in terms of Fourier transform, there exists real numbers \(\lambda,\mu,\nu\) (depending on the limit covariance matrix \(\Sigma\)) such that for any \((s,t,\xi)\in\mathbb{R}^{3}\), as \(n\) goes to infinity, we have \[\mathbb{E}\left[e^{-\frac{\xi^{2}t^{2}}{2}\Gamma[F_{n},F_{n}]-\frac{\xi^{2}s^ {2}}{2}\Gamma[G_{n},G_{n}]-\xi^{2}ts\Gamma[F_{n},-\mathcal{L}^{-1}F_{n}]} \right]\xrightarrow[n\to\infty]{}e^{-\frac{\xi^{2}}{2}\left(\lambda t^{2}+\mu s ^{2}+2\nu ts\right)}.\] Since the above convergence is valid for any \(\xi\in\mathbb{R}\), this shows in particular that for any fixed \((s,t)\in\mathbb{R}^{2}\), the sequence \(\Gamma[tF_{n}+sG_{n},tF_{n}+sG_{n}]\) converges in distribution (and thus in probability) towards the constant variable \(\left(\lambda t^{2}+\mu s^{2}+2\nu ts\right)\). Choosing \(s=t=1\), we thus get that \(\Gamma[F_{n}+G_{n},F_{n}+G_{n}]\) converges in probability towards \((\lambda+\mu+2\nu)\). Choosing \(s=0\) and \(t=1\), then \(t=0\) and \(s=1\), one deduce in the same manner that \(\Gamma[F_{n},F_{n}]\) and \(\Gamma[G_{n},G_{n}]\) both converge in probability towards \(\lambda\) and \(\mu\) respectively. Finally, by Equation (4), one can conclude that the cross term \[\Gamma[F_{n},G_{n}]=\Gamma(F_{n},-\mathcal{L}^{-1}F_{n})=\hat{\mathbb{E}} \left[\sharp F_{n}\sharp G_{n}\right]\] also converges in probability towards the constant limit variable \(\nu\). ### Gaining some uniform integrability Since our goal is to derive convergence in total variation of \(F_{n}=S_{n}(f)\), the convergence in probability of the term \(\Gamma[F_{n},-\mathcal{L}^{-1}F_{n}]\) is not sufficient. Indeed, with Stein's Equation in mind, the lack of uniform integrability is a problem to deduce the following required asymptotic behavior for any \(\phi\in\mathcal{C}^{1}_{b}(\mathbb{R})\), as \(n\) goes to infinity \[\mathbb{E}\left[\phi^{\prime}(F_{n})\Gamma[F_{n},-\mathcal{L}^{-1}F_{n}] \right]\approx\nu\,\mathbb{E}\left[\phi^{\prime}(F_{n})\right].\] In order to bypass this problem, let us go back to the two-dimensional classical Breuer-Major theorem associated with the functional \(\Psi\) used in the last section. 
For any integer \(p\geq 1\), let us denote by \(\Psi_{p}\) the projection of \(\Psi\) on the first \(p-th\) chaoses. Applying Theorem 4 and Equation (2.43) of [10], we get that there exists a constant \(C>0\) (which depends only on the covariance structure of the underlying Gaussian process) such that \[\sup_{n\geq 1}\mathbb{E}\hat{\mathbb{E}}\left[\left|\frac{1}{\sqrt{n}}\sum_{k=1} ^{n}(\Psi-\Psi_{p})(X_{k},\hat{X}_{k})\right|^{2}\right]\leq C\times\int_{ \mathbb{R}^{2}}|(\Psi-\Psi_{p})(x)|^{2}\gamma_{2}(dx).\] Since \(\Psi\) belongs to \(L^{2}(\mathbb{R}^{2},\gamma_{2})\), the last term on the right hand side goes to zero as \(p\) goes to infinity. As a result, uniformly in \(n\geq 1\), the two-dimensional process \[\left(\sharp F_{n},\,\sharp G_{n}\,\right)=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\Psi( X_{k},\hat{X_{k}})\] can be approximated arbitrarily closely in \(L^{2}(\mathbb{P}\otimes\hat{\mathbb{P}})\) by the following process which is finitely expanded on the Wiener chaoses \[Z_{n}^{p}:=(Z_{n}^{p,1},Z_{n}^{p,2}):=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\Psi_{p}( X_{k},\hat{X_{k}}).\] Therefore, choosing \(p\geq 1\) large enough, uniformly in \(n\geq 1\), the product \(\sharp F_{n}\times\sharp G_{n}\) can be approximated arbitrarily closely in \(L^{1}(\mathbb{P}\otimes\hat{\mathbb{P}})\) by \(\Delta_{n}^{p}:=Z_{n}^{p,1}\times Z_{n}^{p,2}\). In other words, for any \(\varepsilon>0\) and \(p\geq 1\) large enough, we have \[\sup_{n}\mathbb{E}\left[\left|\hat{\mathbb{E}}\left(\sharp F_{n}\times\sharp G _{n}\right)-\hat{\mathbb{E}}\left(\Delta_{n}^{p}\right)\right|\right]\leq\sup _{n}\mathbb{\hat{E}}\left[\left|\sharp F_{n}\times\sharp G_{n}-\Delta_{n}^{p} \right|\right]<\varepsilon.\] But mimicking the proof detailed in the previous Section 2.3 for the convergence in probability of \(\Gamma[F_{n},G_{n}]\) towards the constant variable \(\nu\), one would then similarly get here that \(\hat{\mathbb{E}}[\Delta_{n}^{p}]\) converges in probability under \(\mathbb{P}\) towards a constant random variable \(\nu_{p}\in\mathbb{R}\), and by construction \(\lim_{p\to+\infty}\nu_{p}=\nu\). The crucial point here is that both random variables \(\Delta_{n}^{p}\) and \(\hat{\mathbb{E}}[\Delta_{n}^{p}]\) are now finitely expanded on the Wiener chaoses under \(\mathbb{P}\otimes\hat{\mathbb{P}}\) and \(\mathbb{P}\) respectively. Therefore, by hypercontractivity, the convergence in probability can be freely upgraded to the convergence in \(L^{q}\) for every \(q\geq 1\). In particular, as \(n\) goes to infinity, the sequence \(\hat{\mathbb{E}}[\Delta_{n}^{p}]\) converges in \(L^{1}\) to the constant variable \(\nu_{p}\). ### Conclusion We go back to Stein's Equation. Let \(\phi\in\mathcal{C}_{b}^{1}(\mathbb{R})\) and \(\varepsilon>0\). 
Integrating by parts, for \(p\geq 1\) large enough and by the results of the last section, we have \[\left|\mathbb{E}\left[F_{n}\phi(F_{n})\right]-\nu\,\mathbb{E} \left[\phi^{\prime}(F_{n})\right]\right|=\left|\mathbb{E}\left[\phi^{\prime}(F _{n})\Gamma[F_{n},-\mathcal{L}^{-1}F_{n}]\right]-\nu\,\mathbb{E}\left[\phi^{ \prime}(F_{n})\right]\right|\] \[=\left|\mathbb{E}\left[\phi^{\prime}(F_{n})\Gamma[F_{n},G_{n}] \right]-\nu\,\mathbb{E}\left[\phi^{\prime}(F_{n})\right]\right|\] \[=\left|\mathbb{E}\left[\phi^{\prime}(F_{n})\left(\Gamma[F_{n},G_ {n}]-\hat{\mathbb{E}}[\Delta_{n}^{p}]\right)\right]+\mathbb{E}\left[\phi^{ \prime}(F_{n})\left(\hat{\mathbb{E}}[\Delta_{n}^{p}]-\nu_{p}\right)\right]+( \nu_{p}-\nu)\mathbb{E}\left[\phi^{\prime}(F_{n})\right]\right|\] \[\leq||\phi^{\prime}||_{\infty}\varepsilon+||\phi^{\prime}||_{ \infty}\mathbb{E}\left[\left|\hat{\mathbb{E}}[\Delta_{n}^{p}]-\nu_{p}\right| \right]+||\phi^{\prime}||_{\infty}|\nu_{p}-\nu|.\] As a result, letting first \(n\) and then \(p\) go to infinity, we get that uniformly in \(\phi\) such that \(||\phi^{\prime}||_{\infty}\leq C\) \[\limsup_{n\to+\infty}\left|\mathbb{E}\left[F_{n}\phi(F_{n})\right]-\nu\, \mathbb{E}\left[\phi^{\prime}(F_{n})\right]\right|=0.\] One can then classically conclude using Stein's approach for the convergence in total variation.